Robert Lemos at DARKReading recently reported on a worrying trend, writes Stu Sjouwerman, CEO at KnowBe4. The title said it all: more than 4% of employees have put sensitive corporate data into a large language model like ChatGPT, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. Here’s a short extract of the story:

“Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn’t in place for the service.

In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM.

In one case, an executive cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient’s name and their medical condition and asked ChatGPT to craft a letter to the patient’s insurance company.

And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.

‘There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps,’ he says. ‘And how that plays out [remains to be seen] — I think we’re in pregame; we’re not even in the first inning.’”
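The report describes Cyberhaven detecting and blocking these submissions before the data leaves the organization. Purely to illustrate the general idea — and to be clear, this is a minimal sketch in Python with made-up patterns, not Cyberhaven’s actual product or any real LLM API — a pre-submission check might scan a prompt for text that looks like regulated or confidential data and refuse to send it:

```python
import re

# Illustrative patterns only -- real DLP tools use far more sophisticated
# classification than a handful of regular expressions.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marking": re.compile(
        r"\b(confidential|internal use only|do not distribute)\b", re.IGNORECASE
    ),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the labels of any sensitive patterns found; empty list if clean."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_to_llm(prompt: str) -> str:
    """Block the prompt at the endpoint if it appears to contain sensitive data."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked, possible sensitive data: {', '.join(findings)}")
    # ...the actual call to the LLM service would go here (omitted)...
    return "ok"


if __name__ == "__main__":
    try:
        submit_to_llm("Draft a letter about patient John Doe, SSN 123-45-6789.")
    except ValueError as err:
        print(err)
```

A pattern check like this is only a stopgap: it can catch an obvious identifier, but not an executive pasting an entire strategy document, which is exactly why the human element matters.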

Your employees need to be stepped through new-school security awareness training so that they understand the risks of feeding sensitive or regulated data into public AI tools like ChatGPT.
