The growing use of artificial intelligence in the workplace is fueling a rapid increase in data exposure and challenging companies’ ability to protect sensitive data.
A report released in May by computer security firm Cyberhaven, titled “Cubicle Culprits,” sheds light on AI adoption trends and their relationship to heightened risk. Cyberhaven’s analysis drew on a dataset of usage patterns from three million workers to assess AI adoption and its impact on the enterprise environment.
AI adoption is growing at a pace comparable to previous transformative shifts such as the internet and cloud computing. According to Cyberhaven CEO Howard Ting, just as early cloud adopters had to navigate new challenges, today’s companies must contend with the complexities created by the widespread use of artificial intelligence. “Our research on the uses and risks of AI shows not only the impact of these technologies but also emerging problems similar to those posed by major technological shifts of the past,” he told TechNewsWorld.
Findings Warn of Potential AI Misuse
The Cubicle Culprits report highlights how rapidly end users are adopting AI in the workplace, outpacing enterprise IT’s ability to govern it. That trend, in turn, exposes companies to risky “shadow AI” accounts and the leakage of many types of sensitive corporate data.
Products from the three AI tech giants – OpenAI, Google, and Microsoft – dominate on the job, accounting for 96% of AI use in the workplace.
According to the study, the volume of sensitive business data that workers worldwide fed into AI tools surged an alarming 485% from March 2023 to March 2024, and adoption is still in its early stages. Only 4.7% of employees at financial firms, 2.8% in the medical and life sciences fields, and 0.6% at manufacturers use AI tools.
A whopping 73.8% of workplace ChatGPT usage occurs through non-corporate accounts. “Unlike enterprise versions, these accounts incorporate whatever data is shared into public models, posing a significant risk to the security of sensitive data,” the report cautioned.
“A significant portion of sensitive corporate data is being sent to non-corporate accounts. This includes roughly half of source code [50.8%], research and development materials [55.3%], and HR and employee records [49.0%].”
The data shared through these non-corporate accounts is incorporated into public models. The share of non-corporate account usage is even higher for Gemini (94.4%) and Bard (95.9%).
Unchecked AI Data Bleeding
This trend represents a critical vulnerability: non-corporate accounts lack the robust security measures needed to protect that data, Ting said.
AI adoption is rapidly spreading into new areas, including workflows that involve sensitive data. About 27% of the data employees put into AI tools is sensitive, up from 10.7% a year ago.
For example, 82.8% of the legal documents employees fed into AI tools went to non-corporate accounts, where the information could be exposed publicly.
Ting cautioned that risk also flows in the other direction: folding AI-generated output into corporate content raises exposure. For instance, AI-generated source code used outside sanctioned coding tools can introduce vulnerabilities.
Some companies have no way to stop the unauthorized flow of sensitive data to AI tools beyond IT’s reach. They rely on existing data security tools, which only scan a file’s content to identify its type.
“What’s missing is the context of where the data came from, who interacted with it, and where it was stored,” Ting said. “Consider the example of an employee pasting code into a personal AI account to get help debugging it. Is it source code from a corporate repository? Customer data from a SaaS application?”
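To illustrate the distinction Ting draws, here is a minimal Python sketch contrasting content-only scanning with a context-aware check. The event fields, lineage labels, and rules are assumptions made for illustration, not Cyberhaven’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical egress event. A real DDR product captures far richer
# telemetry; these fields are assumptions for illustration.
@dataclass
class DataEvent:
    content: str        # the data itself -- all a content-only scanner sees
    origin: str         # lineage, e.g., "git:acme/payments-service"
    destination: str    # where the data is headed, e.g., "chatgpt.com"
    account_type: str   # "corporate" or "personal"

# Assumed lineage prefixes marking data that came from sensitive systems.
SENSITIVE_ORIGINS = ("git:", "saas:", "hr:")

def content_only_verdict(event: DataEvent) -> str:
    # Content scanning can tell that text looks like code, but not whether
    # it is the company's code from a repository or a public snippet.
    looks_like_code = "def " in event.content or "class " in event.content
    return "review" if looks_like_code else "allow"

def context_aware_verdict(event: DataEvent) -> str:
    # Lineage answers Ting's questions: where did the data come from,
    # and is it leaving through an unmanaged personal account?
    if event.origin.startswith(SENSITIVE_ORIGINS) and event.account_type == "personal":
        return "block"
    return "allow"

event = DataEvent(
    content="def charge(card): ...",
    origin="git:acme/payments-service",
    destination="chatgpt.com",
    account_type="personal",
)
print(content_only_verdict(event))   # review -- content alone is ambiguous
print(context_aware_verdict(event))  # block  -- lineage resolves the ambiguity
```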
Data Flow Management Can Protect
Ting suggested that, done properly, educating employees about data flow risks is a meaningful part of the security solution.
“But the training videos employees have to watch twice a year are soon forgotten,” he said.
Cyberhaven found that risky activity dropped by 90% when employees received a pop-up message coaching them at the moment of a risky action, such as pasting source code into a personal ChatGPT account.
The company’s Data Detection and Response (DDR) technology understands how data moves and uses that context to protect sensitive data. The technology can also tell the difference between a corporate and a personal ChatGPT account.
This capability lets companies enforce policies that prevent employees from pasting sensitive data into personal accounts while still allowing that data to flow to corporate accounts.
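To make that kind of policy concrete, here is a minimal sketch of a paste rule with in-the-moment coaching. Cyberhaven has not published its DDR internals, so the data-type labels, domains, and coaching message below are hypothetical.

```python
# Hypothetical DDR-style paste policy: block sensitive data bound for a
# personal AI account, allow it through a managed corporate account, and
# coach the user with a pop-up rather than silently blocking the action.

SENSITIVE_TYPES = {"source_code", "legal_document", "hr_record"}
AI_TOOL_DOMAINS = {"chatgpt.com", "gemini.google.com"}

def evaluate_paste(data_type: str, destination: str, account_type: str) -> dict:
    if (destination in AI_TOOL_DOMAINS
            and data_type in SENSITIVE_TYPES
            and account_type == "personal"):
        return {
            "action": "block",
            "coach": (f"Pasting {data_type.replace('_', ' ')} into a personal "
                      "AI account is against policy. Switch to your corporate "
                      "account instead."),
        }
    return {"action": "allow", "coach": None}

# Source code pasted into a personal ChatGPT session is blocked and coached;
# the same paste through the corporate account goes through.
print(evaluate_paste("source_code", "chatgpt.com", "personal"))
print(evaluate_paste("source_code", "chatgpt.com", "corporate"))
```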
Office Workers Emerge as the Top Risk
Cyberhaven also analyzed data risk across work arrangements, including remote, onsite, and hybrid. The researchers found that an employee’s location at the moment data leaves the company shapes the security risk it poses.
“Our research revealed a surprising shift. Office workers, once considered the safest group, are now leading the way in corporate data exfiltration.”
In fact, office workers are 77% more likely to exfiltrate sensitive data than their remote counterparts. And when office-based workers log in from other locations, they are 510% more likely to exfiltrate data than when on-site, making that the riskiest time for corporate data, Ting said.