Employees are leaking data over GenAI tools, here’s what enterprises need to do

While newspapers like The New York Times and celebrities like Scarlett Johansson mount legal challenges against OpenAI, the poster child of the generative AI revolution, employees seem to have already cast their vote. ChatGPT and similar productivity and innovation tools are surging in popularity. Half of employees use ChatGPT, according to Glassdoor, and 15% paste company and customer data into GenAI applications, according to the “GenAI Data Exposure Risk Report” by LayerX.

For organizations, the use of ChatGPT, Claude, Gemini and similar tools is a blessing. These tools make employees more productive, innovative and creative. But they might also turn into a wolf in sheep’s clothing: numerous CISOs worry about the data loss risks they pose to the enterprise. Luckily, the tech industry moves fast, and solutions already exist for preventing data loss through ChatGPT and other GenAI tools, letting enterprises become the fastest and most productive versions of themselves.

GenAI: The information security dilemma

With ChatGPT and all other GenAI tools, the sky’s the limit to what employees can achieve for the business — from drafting emails to designing complex products to solving intricate legal or accounting problems. And yet, organizations face a dilemma with generative AI applications. While the productivity benefits are straightforward, there are also data loss risks.

Employees get fired up over the potential of generative AI tools, but they aren’t always vigilant when using them. When employees use GenAI tools to process or generate content and reports, they also share sensitive information, like product code, customer data, financial information and internal communications.

Picture a developer attempting to fix bugs in code. Instead of poring over endless lines of code, they can paste it into ChatGPT and ask it to find the bug. ChatGPT will save them time, but might also store proprietary source code. That code might then be used to train the model, meaning a competitor could surface it through future prompts. Or it could simply sit on OpenAI’s servers, where it could leak if security measures are breached.

Another scenario is a financial analyst typing in the company’s numbers and asking for help with analysis or forecasting, or a salesperson or customer service representative entering sensitive customer information and asking for help crafting personalized emails. In all these examples, data that would otherwise be heavily protected by the enterprise is freely shared with unknown external parties, and can easily flow to malicious actors.

“I want to be a business enabler, but I need to think of protecting my organization’s data,” said a Chief Information Security Officer (CISO) of a large enterprise, who wishes to remain anonymous. “ChatGPT is the new cool kid on the block, but I can’t control which data employees are sharing with it. Employees get frustrated, the board gets frustrated, but we have patents pending, sensitive code, we’re planning to IPO in the next two years — that’s not information we can afford to risk.”

This CISO’s concern is grounded in data. A recent report by LayerX found that 4% of employees paste sensitive data into GenAI tools on a weekly basis, including internal business data, source code, PII and customer data. Once typed or pasted into ChatGPT, this data is essentially exfiltrated, by the employees themselves.

Without proper security solutions in place to control such data loss, organizations have to choose: productivity and innovation, or security? With GenAI being the fastest-adopted technology in history, pretty soon organizations won’t be able to say “no” to employees who want to accelerate and innovate with GenAI. That would be like saying “no” to the cloud. Or to email…

The new browser security solution

A new category of security vendors is on a mission to enable the adoption of GenAI while closing the security risks associated with using it: browser security solutions. The insight is that employees interact with GenAI tools via the browser, or via extensions they download to their browser, so that is where the risk lives. By monitoring the data employees type into a GenAI app, browser security solutions deployed in the browser can pop up warnings that educate employees about the risk or, if needed, block the pasting of sensitive information into GenAI tools in real time.
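
To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a control might work: a TypeScript content script that intercepts paste events before the GenAI page’s input receives the text. The patterns, messages and logic are assumptions made for this example, not LayerX’s or any vendor’s actual implementation.

```typescript
// Illustrative browser-extension content script (not any vendor's product).
// It intercepts paste events on a GenAI chat page and blocks the paste when
// the text matches simple sensitive-data patterns, showing a warning instead.

// Toy patterns for illustration; production DLP classifiers are far richer.
const SENSITIVE_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "credit card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { label: "private key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
  { label: "email address", pattern: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/ },
];

function findSensitiveMatch(text: string): string | null {
  for (const { label, pattern } of SENSITIVE_PATTERNS) {
    if (pattern.test(text)) return label;
  }
  return null;
}

// Listen in the capture phase so this runs before the page's own handlers.
document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    const match = findSensitiveMatch(pasted);
    if (match) {
      event.preventDefault();   // block the paste in real time
      event.stopPropagation();
      alert(`This paste looks like it contains a ${match}. ` +
            `Sharing it with an external GenAI tool may violate data policy.`);
    }
  },
  true,
);
```

Running in the capture phase lets the extension act before the page’s own event handlers, which is what makes real-time blocking possible.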

“Since GenAI tools are highly favored by employees, the securing technology needs to be just as benevolent and accessible,” says Or Eshed, CEO and co-founder of LayerX, an enterprise browser extension company. “Employees are unaware of the fact their actions are risky, so security needs to make sure their productivity isn’t blocked and that they are educated about any risky actions they take, so they can learn instead of becoming resentful. Otherwise, security teams will have a hard time implementing GenAI data loss prevention and other security controls. But if they succeed, it’s a win-win-win.”

The tech behind this capability is based on granular analysis of employee actions and browsing events, which are scrutinized to detect sensitive information and potentially malicious activity. Instead of hindering business progress or making employees feel their workplace is putting spokes in their productivity wheels, the idea is to keep everyone happy and working while ensuring no sensitive information is typed or pasted into any GenAI tool. That means happier boards and shareholders, and, of course, happier information security teams.
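
Continuing the illustration, the graduated response described above might be expressed as a simple policy layer that maps what a classifier detected in a browsing event to an action. Every site list, label and threshold below is an invented assumption for this sketch, not a real product’s logic.

```typescript
// Illustrative policy sketch: map classifier output for a browsing event to
// a graduated response, so low-risk actions stay frictionless while the
// clearest exposures are stopped outright.

type Action = "allow" | "warn" | "block";

interface BrowsingEvent {
  site: string;                 // e.g. "chat.openai.com"
  eventType: "type" | "paste" | "upload";
  detectedLabels: string[];     // output of a classifier like the one above
}

// Hypothetical allowlist of GenAI destinations worth policing.
const GENAI_SITES = new Set([
  "chat.openai.com",
  "claude.ai",
  "gemini.google.com",
]);

function decide(event: BrowsingEvent): Action {
  if (!GENAI_SITES.has(event.site)) return "allow";   // only police GenAI sites
  if (event.detectedLabels.includes("private key")) return "block";
  if (event.detectedLabels.length > 0) return "warn"; // educate, don't obstruct
  return "allow";
}

// Example: pasting an email address into ChatGPT draws a warning rather
// than a hard block, keeping the employee productive.
console.log(decide({
  site: "chat.openai.com",
  eventType: "paste",
  detectedLabels: ["email address"],
})); // -> "warn"
```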

History repeats itself

Every technological innovation has had its share of backlash; that is the nature of humans and business. But history shows that organizations that embraced innovation tended to outcompete the players that tried to keep things as they were.

This does not call for naivety or a “free-for-all” approach. Rather, it requires looking at innovation from 360° and devising a plan that covers all the bases and addresses data loss risks. Fortunately, enterprises are not alone in this endeavor: a new category of security vendors is offering solutions to prevent data loss through GenAI.

VentureBeat newsroom and editorial staff were not involved in the creation of this content. 
