With AI now an integral part of business operations, shadow AI has become the next frontier in information security. Here’s what that means for managing risk.

For many organizations, 2023 was the breakout year for generative AI. Now, large language models (LLMs) like ChatGPT have become household names. In the business world, they’re already deeply ingrained in numerous workflows, whether you know about it or not. According to a report by Deloitte, over 60% of employees now use generative AI tools in their day-to-day routines.

The most vocal supporters of generative AI often see it as a panacea for all efficiency and productivity-related woes. On the opposite extreme, hardline detractors see it as a privacy and security nightmare, not to mention a major economic and social burden in light of the job losses it’s widely expected to cause. Elon Musk, despite investing heavily in the industry himself, recently predicted that AI will eventually replace all jobs, making work “optional.”

The truth, for now at least, lies somewhere between these opposing viewpoints. On one hand, any business trying to avoid the generative AI revolution risks becoming irrelevant. On the other, those that aggressively pursue its implementation with little regard for the security and privacy issues it presents risk falling foul of legislation like the EU’s AI Act.

In any case, generative AI is here to stay, regardless of our views on it. With that realization comes the risk of the unsanctioned or inadequately governed use of AI in the workplace. Enter the next frontier of information security: shadow AI.

Shadow AI: The new threat on the block

Security leaders are already familiar with the better-known concept of shadow IT, which refers to the use of any IT resource outside the purview or consent of the IT department. Shadow IT first became a major risk factor when companies migrated to the cloud, and even more so during the shift to remote and hybrid work models. Fortunately, most IT departments have since managed to get the problem under control, but there’s a new threat to think about: shadow AI.

Shadow AI follows the same core concept as shadow IT, and it’s driven by the frenzied rush to adopt AI tools, especially generative AI, in the workplace. At the individual level, workers are starting to use popular LLMs like ChatGPT to assist with everything from writing corporate emails to addressing customer support queries. Shadow AI happens when they do so with unsanctioned tools, or for unsanctioned use cases, without looping in the IT department.

Shadow AI can also be a problem at a much higher and more technical level. Many businesses are now developing their own LLMs and other generative AI models. However, although these may be fully sanctioned by the IT department, that’s not necessarily the case for all of the tools, people and processes that support the development, implementation and maintenance of such projects.

For example, if the model training process isn’t adequately governed, it could be open to data poisoning, a risk that’s arguably even greater if you’re building on top of open-source models. If shadow AI factors in at any point in the project lifecycle, there’s a serious risk of compromising the entire project.
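As a minimal illustration of what that governance can look like in practice, the sketch below verifies every training file against an approved checksum manifest before a fine-tuning run, so that unreviewed or altered data can’t silently slip into the pipeline. The file names, the manifest format and the verify_training_data helper are hypothetical examples, not part of any specific toolchain.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(data_dir: str, manifest_path: str) -> bool:
    """Compare the contents of data_dir against an approved checksum manifest.

    The manifest is a hypothetical JSON file mapping relative file names to
    SHA-256 digests, produced when the dataset was reviewed and approved.
    Any missing, altered or unapproved file blocks the training run.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    base = Path(data_dir)
    ok = True

    for rel_name, expected in manifest.items():
        file_path = base / rel_name
        if not file_path.exists():
            print(f"MISSING:    {rel_name}")
            ok = False
        elif sha256_of(file_path) != expected:
            print(f"ALTERED:    {rel_name}")  # possible tampering or poisoning
            ok = False

    # Files that were never reviewed must not silently join the training set
    for file_path in base.rglob("*"):
        if file_path.is_file() and file_path.relative_to(base).as_posix() not in manifest:
            print(f"UNAPPROVED: {file_path.relative_to(base).as_posix()}")
            ok = False

    return ok


if __name__ == "__main__":
    if not verify_training_data("training_data", "approved_manifest.json"):
        raise SystemExit("Training data failed integrity checks; aborting the run.")
```

A check like this doesn’t stop poisoning on its own, but it forces every change to the training set back through whoever signs off on the manifest, which is exactly the kind of visibility shadow AI takes away.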


It’s time to get a handle on AI governance

Almost every business already uses generative AI or plans to do so in the next few years, but according to one recent report, just one in 25 companies has fully integrated AI throughout its organization. Clearly, while adoption rates have soared, governance has lagged far behind. Without that governance and strategic alignment, there’s a lack of guidance and visibility, leading to a meteoric rise in shadow AI.

All too often, disruptive new technologies lead to knee-jerk responses. That’s especially the case with generative AI in cash-strapped organizations, which often view it primarily as a way to cut costs and lay off workers. Needless to say, however, the potential costs of shadow AI are orders of magnitude greater. These include, to name a few, generating false information, shipping code with AI-generated bugs and exposing sensitive information via models trained on “private” chats, as ChatGPT does by default.

We’ve already seen some major blunders at the hands of shadow AI, and we’ll likely see a lot more in the years ahead. In one case, a law firm was fined $5,000 for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. Last year, Samsung banned the use of the popular LLM after employees leaked sensitive code through it. It’s vital to remember that most publicly available models use recorded chats to train future iterations, which could lead to sensitive information from those chats resurfacing later in response to a user prompt.

As employees — with or without the knowledge of their IT departments — input more and more information into LLMs, generative AI has become one of the biggest data exfiltration channels of all. Naturally, that’s a major internal security and compliance threat, and one that doesn’t necessarily have anything to do with external threat actors. Imagine, for example, an employee copying and pasting sensitive research and development material into a third-party AI tool or potentially breaking privacy laws like GDPR by uploading personally identifiable information.
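To make that exfiltration path concrete, here is a minimal sketch of a pre-submission check that scans an outbound prompt for patterns resembling personal or confidential data before it ever reaches a third-party LLM. The regular expressions, the PROJ- naming scheme and the screen_prompt helper are illustrative assumptions rather than a reference to any particular DLP product or vendor API.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted DLP engine
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4,}\b"),  # hypothetical naming scheme
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def send_to_llm(prompt: str) -> str:
    """Screen a prompt before it leaves the organization.

    The actual API call is deliberately omitted; the point is that the check
    runs before any data crosses the network boundary to an external model.
    """
    findings = screen_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked, possible sensitive data: {', '.join(findings)}")
    return "(forwarded to the sanctioned LLM endpoint)"


if __name__ == "__main__":
    try:
        send_to_llm("Summarize the complaint from jane.doe@example.com about PROJ-12345.")
    except ValueError as err:
        print(err)
```

In practice, a check like this would typically sit in a gateway or browser plugin rather than in each individual application, so the policy is enforced in one place regardless of which AI tool an employee reaches for.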

Shore up cyber defenses against shadow AI

Because of these risks, it’s crucial that all AI tools fall under the same level of governance and scrutiny as any other business communications platform. Training and awareness also play a central role, especially since there’s a widespread assumption that publicly available models like ChatGPT, Claude and Copilot are safe. The truth is they’re not a safe place for sensitive information, especially if you’re using them with default settings.

Above all, leaders must understand that using AI responsibly is a business problem, not just a technical challenge. After all, generative AI democratizes the use of advanced technology in the workplace to the extent that any knowledge worker can get value from it. But that also means that, as workers hurry to make their lives easier, there’s a huge risk of unsanctioned AI use spiraling out of control. No matter where you stand in the great debate around AI, if you’re a business leader, it’s essential that you extend your governance policies to cover the use of all internal and external AI tools.
