
Shadow AI, Data Exposure Plague Workplace Chatbot Use

Productivity has a downside: A shocking number of employees share sensitive or proprietary data with the generative AI platforms they use, without letting their bosses know.


Source: Marcos Alvarado via Alamy Stock Photo

Generative AI chatbots are popping up in everything from email clients to HR tools these days, offering a friendly and smooth path toward better enterprise productivity. But there’s a problem: All too often, workers aren’t thinking about the data security of the prompts they’re using to elicit chatbot responses.

In fact, more than a third (38%) of employees share sensitive work information with AI tools without their employer’s permission, according to a survey released this week by the US National Cybersecurity Alliance (NCA). And that’s a problem.

The NCA survey (which polled 7,000 people globally) found that Gen Z and millennial workers are more likely to share sensitive work information without getting permission: A full 46% of Gen Z and 43% of millennials admitted to the practice, compared with 26% of Gen X and 14% of baby boomers.

Real-World Consequences From Sharing Data With Chatbots

The issue is that many of the most prevalent chatbots capture whatever information users put into prompts, which could be things like proprietary earnings data, top-secret design plans, sensitive emails, customer data, and more — and send it back to the large language models (LLMs), where it’s used to train the next generation of GenAI.

And that means that someone could later access that data using the right prompts, because it’s now part of a retrievable data lake. Or, perhaps the data is kept for internal LLM use, but its storage isn’t set up properly. The dangers of this, as Samsung found out in one high-profile incident, are relatively well understood by security pros, but far less so by everyday workers.

ChatGPT’s creator, OpenAI, warns in its user guide, “We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.” But it’s hard for the average worker to constantly be thinking about data exposure. Lisa Plaggemier, executive director of NCA, notes one case that illustrates how the risk can easily translate into real-world attacks.

“A financial services firm integrated a GenAI chatbot to assist with customer inquiries,” Plaggemier tells Dark Reading. “Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools.”

Galit Lubetzky Sharon, CEO at Wing, offers another real-life example (without naming names).

“An employee at a multinational company, for whom English was a second language, took an assignment working in the US,” she says. “In order to improve his written communications with his US-based colleagues, he innocently started using Grammarly. Not knowing that the application was allowed to train on his data, he sometimes used Grammarly on communications around confidential and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI.”

A Lack of Training & the Rise of “Shadow AI”

One reason for the high percentages of people willing to roll the dice is almost certainly a lack of training. While the Samsungs of the world might swoop into action on locking down AI use, the NCA survey found that 52% of employed participants have not yet received any training on safe AI use, and just 45% of respondents who actively use AI have received such training.

“This statistic suggests that many organizations may underestimate the importance of training, perhaps due to budget constraints, or a lack of understanding about the potential risks,” Plaggemier says. And meanwhile, she adds, “This data underscores the gap between recognizing potential dangers and having the knowledge to mitigate them. Employees may understand that risks exist, but the lack of proper education leaves them vulnerable to the severity of these threats, especially in environments where productivity often takes precedence over security.”

Worse, this knowledge gap contributes to the rise of “shadow AI,” where unapproved tools are used outside the organization’s security framework.

“As employees prioritize efficiency, they may adopt these tools without fully grasping the long-term consequences for data security and compliance, leaving organizations vulnerable to significant risks,” Plaggemier warns.

It’s Time for Enterprises to Implement GenAI Best Practices

It’s clear that prioritizing immediate business needs over long-term security strategies can leave companies vulnerable. But when it comes to rolling out AI before security is ready, the golden allure of all those productivity enhancements — sanctioned or not — may often prove too strong to resist.

“As AI systems become more common, it’s essential for organizations to view training not just as a compliance requirement but as a vital investment in protecting their data and brand integrity,” Plaggemier says. “To effectively reduce risk exposure, companies should implement clear guidelines around the use of GenAI tools, including what types of information can and cannot be shared.”

Morgan Wright, chief security adviser at SentinelOne, advocates starting the guidelines-development process with first principles: “The biggest risk is not defining what problem you’re solving through chatbots,” he notes. “Understanding what is to be solved helps create the right policies and operational guardrails to protect privacy and intellectual property. It’s emblematic of the old saying, 'When all you have is a hammer, all the world is a nail.’”

There are also technology steps that organizations should take to shore up their defenses against AI risks.

“Establishing strict access controls and monitoring the use of these tools can also help mitigate risks,” Plaggemier adds. “Implementing data masking techniques can protect sensitive information from being input into GenAI platforms. Regular audits and the use of AI monitoring tools can also ensure compliance and detect any unauthorized attempts to access sensitive data.”
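To make that masking step concrete, here is a minimal sketch in Python of a pre-submission filter that redacts obviously sensitive strings before a prompt ever reaches an external GenAI service. The SENSITIVE_PATTERNS regexes and the redact_prompt helper are illustrative assumptions rather than any vendor's API; a production deployment would lean on a real DLP engine or entity-recognition service instead of a handful of patterns.

```python
import re

# Illustrative patterns for data that should never leave the organization.
# A real deployment would use a DLP engine or entity recognition instead.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder tag
    before the prompt is forwarded to an external GenAI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com about card 4111 1111 1111 1111."
    print(redact_prompt(raw))
    # Draft a reply to [EMAIL REDACTED] about card [CARD REDACTED].
```

The design point is that redaction happens on the organization's side of the boundary, so a careless prompt never carries raw client data to the model in the first place.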

There are other ideas out there, too. “Some companies have restricted the amount of data input into a query (like 1,024 characters),” Wright says. “It could also involve segmenting off parts of the organization dealing with sensitive data. But for now, there is no clear solution or approach that can solve this thorny issue to everyone’s satisfaction.”
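The query-size cap Wright describes is straightforward to enforce at an internal gateway. The sketch below is a hypothetical Python check (enforce_prompt_limit is an invented helper, and the 1,024-character figure is simply taken from his example); rejecting an oversized prompt outright, rather than silently truncating it, keeps the guardrail visible to the user.

```python
MAX_PROMPT_CHARS = 1024  # the cap cited in Wright's example; tune per policy

class PromptTooLongError(ValueError):
    """Raised when a prompt exceeds the organization's size cap."""

def enforce_prompt_limit(prompt: str, limit: int = MAX_PROMPT_CHARS) -> str:
    """Reject oversized prompts instead of silently truncating them,
    so users learn that the guardrail exists."""
    if len(prompt) > limit:
        raise PromptTooLongError(
            f"Prompt is {len(prompt)} characters; the limit is {limit}."
        )
    return prompt

# Example: enforce_prompt_limit("short question")  -> returns the prompt
#          enforce_prompt_limit("x" * 2000)        -> raises PromptTooLongError
```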

The danger to companies can also be exacerbated by GenAI capabilities being added to third-party software-as-a-service (SaaS) applications, Wing’s Sharon warns — this is an area that is too often overlooked.

“As new capabilities are added, even to very reputable SaaS applications, the terms and conditions of those applications are often updated, and 99% of users don’t pay attention to those terms,” she explains. “It is not unusual for applications to set as the default that they can use data to train their AI models.”

She notes that an emerging category of SaaS security tools called SaaS Security Posture Management (SSPM) is developing ways to monitor which applications use AI, and even to track changes to things like terms and conditions.

“Tools like this are helpful for IT teams to assess risks and make changes in policy or even access on a continuous basis,” she says.
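The core of that terms-and-conditions monitoring can be as simple as periodically hashing each vendor's terms page and flagging changes for human review. The following Python sketch assumes hypothetical URLs, a local state file, and invented fetch_hash/check_for_changes helpers; it illustrates the idea, not how any specific SSPM product works.

```python
import hashlib
import json
import urllib.request
from pathlib import Path

# Hypothetical terms pages the security team wants to watch.
WATCHED_TERMS_PAGES = [
    "https://saas-vendor.example.com/terms",
]
STATE_FILE = Path("terms_hashes.json")

def fetch_hash(url: str) -> str:
    """Download a terms page and return a SHA-256 digest of its contents."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_changes() -> list[str]:
    """Compare current hashes to the last run; return URLs whose terms changed."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current, changed = {}, []
    for url in WATCHED_TERMS_PAGES:
        digest = fetch_hash(url)
        current[url] = digest
        if url in previous and previous[url] != digest:
            changed.append(url)
    STATE_FILE.write_text(json.dumps(current, indent=2))
    return changed

if __name__ == "__main__":
    for url in check_for_changes():
        print(f"Terms changed, review for new AI/training clauses: {url}")
```

A real SSPM tool would add scheduling, diffing, and clause-level analysis, but even a crude check like this surfaces the moment a vendor's terms change so someone can look for new AI-training language.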

About the Author

Tara Seals has 20+ years of experience as a journalist, analyst and editor in the cybersecurity, communications and technology space. Prior to Dark Reading, Tara was Editor in Chief at Threatpost, and prior to that, the North American news lead for Infosecurity Magazine. She also spent 13 years working for Informa (formerly Virgo Publishing), as executive editor and editor-in-chief at publications focused on both the service provider and the enterprise arenas. A Texas native, she holds a B.A. from Columbia University, lives in Western Massachusetts with her family and is on a never-ending quest for good Mexican food in the Northeast.
