With AI now an integral part of business operations, shadow AI has become the next frontier in information security. Here’s what that means for managing risk.

For many organizations, 2023 was the breakout year for generative AI. Now, large language models (LLMs) like ChatGPT have become household names. In the business world, they’re already deeply ingrained in numerous workflows, whether you know about it or not. According to a report by Deloitte, over 60% of employees now use generative AI tools in their day-to-day routines.

The most vocal supporters of generative AI often see it as a panacea for all efficiency and productivity-related woes. On the opposite extreme, hardline detractors see it as a privacy and security nightmare, not to mention a major economic and social burden given the job losses it’s widely expected to cause. Elon Musk, despite investing heavily in the industry himself, recently predicted that AI would replace all jobs, leading to a future where work is “optional.”

The truth, for now at least, lies somewhere between these opposing viewpoints. On one hand, any business trying to avoid the generative AI revolution risks becoming irrelevant. On the other, those that aggressively pursue implementation with little regard for the security and privacy issues it presents risk falling foul of legislation like the EU’s AI Act.

In any case, generative AI is here to stay, regardless of our views on it. With that realization comes the risk of the unsanctioned or inadequately governed use of AI in the workplace. Enter the next frontier of information security: Shadow AI.

Shadow AI: The new threat on the block

Security leaders are already familiar with the better-known concept of shadow IT, which refers to the use of any IT resource outside of the purview or consent of the IT department. Shadow IT first became a major risk factor when companies migrated to the cloud, and even more so during the shift to remote and hybrid work models. Fortunately, most IT departments have since managed to get the problem under control, but a new threat now demands attention: shadow AI.

Shadow AI borrows from the same core concept as shadow IT, and it’s driven by the frenzied rush to adopt AI — especially generative AI — tools in the workplace. At the individual level, workers are starting to use popular LLMs like ChatGPT to assist with everything from writing corporate emails to addressing customer support queries. Shadow AI happens when they use unsanctioned tools, or put sanctioned ones to unsanctioned uses, without looping in the IT department.

Shadow AI can also be a problem at a much higher and more technical level. Many businesses are now developing their own LLMs and other generative AI models. While these projects may be fully sanctioned by the IT department, that’s not necessarily true of all the tools, people and processes that support their development, implementation and maintenance.

For example, if the model training process isn’t adequately governed, it could be open to data poisoning, in which an attacker slips manipulated samples into the training set to skew the model’s behavior. That risk is arguably even greater if you’re building on top of open-source models. If shadow AI factors in at any part of the project lifecycle, there’s a serious risk of compromising the entire project.
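One basic control is to verify every training file against a manifest of approved hashes before a run begins. The Python sketch below is a minimal illustration of that idea, not a complete defense; the manifest path, directory layout and JSONL file format are all hypothetical assumptions.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest produced when the governance team last approved the
# dataset: a JSON object mapping file paths to their SHA-256 digests.
MANIFEST_PATH = Path("approved_training_manifest.json")

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path) -> list[str]:
    """Flag training files that are missing from, or disagree with, the manifest."""
    manifest = json.loads(MANIFEST_PATH.read_text())
    findings = []
    for path in sorted(data_dir.rglob("*.jsonl")):
        expected = manifest.get(str(path))
        if expected is None:
            findings.append(f"UNAPPROVED: {path} is not in the manifest")
        elif sha256_of(path) != expected:
            findings.append(f"TAMPERED: {path} does not match its approved digest")
    return findings

if __name__ == "__main__":
    for finding in verify_dataset(Path("training_data")):
        print(finding)
```

Anything flagged as unapproved or tampered with should halt the pipeline until it has been reviewed, so that unvetted data never silently reaches a training run.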

It’s time to get a handle on AI governance

Almost every business already uses generative AI or plans to do so in the next few years, but according to one recent report, just one in 25 companies has fully integrated AI throughout their organization. Clearly, while adoption rates have soared, governance has lagged a long way behind. Without that governance and strategic alignment, there’s a lack of guidance and visibility, fueling the meteoric rise of shadow AI.

All too often, disruptive new technologies lead to knee-jerk responses. That’s especially the case with generative AI in cash-strapped organizations, which often view it primarily as a way to cut costs — and lay off workers. The potential costs of shadow AI, however, are orders of magnitude greater. To name a few, these include generating false information, shipping code with AI-generated bugs and exposing sensitive information via models trained on “private” chats, as is the case with ChatGPT by default.

We’ve already seen some major blunders at the hands of shadow AI, and we’ll likely see many more in the years ahead. In one case, a law firm was fined $5,000 for submitting fictitious legal research generated by ChatGPT in an aviation injury claim. Last year, Samsung banned the use of the popular LLM after employees leaked sensitive code through it. It’s vital to remember that most publicly available models use recorded chats to train future iterations, which means sensitive information from those chats can resurface later in response to another user’s prompt.

As employees — with or without the knowledge of their IT departments — input more and more information into LLMs, generative AI has become one of the biggest data exfiltration channels of all. Naturally, that’s a major internal security and compliance threat, and one that doesn’t necessarily involve external threat actors at all. Imagine, for example, an employee copying and pasting sensitive research and development material into a third-party AI tool, or breaking privacy laws like GDPR by uploading personally identifiable information.
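A lightweight guardrail against that kind of leakage is to scan outbound prompts for obvious PII patterns before they ever leave the network. The sketch below is a deliberately simple illustration: the regexes cover only a few common formats, and a real deployment would lean on dedicated DLP tooling rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; production DLP tooling uses far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def gate_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain PII; otherwise pass it through."""
    hits = scan_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: possible PII detected ({', '.join(hits)})")
    return prompt

# This prompt would be blocked before it ever reaches an external LLM:
# gate_prompt("Summarize this record: jane.doe@example.com, SSN 123-45-6789")
```

Even a crude gate like this changes the default from “anything can be pasted out” to “sensitive-looking content gets a second look.”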

Shore up cyber defenses against shadow AI

Because of these risks, it’s crucial that all AI tools fall under the same level of governance and scrutiny as any other business communications platform. Training and awareness also play a central role, especially since there’s a widespread assumption that publicly available models like ChatGPT, Claude and Copilot are safe. The truth is they’re not a safe place for sensitive information, particularly when used with default settings.
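In practice, that governance can be partly enforced at the network edge. The sketch below shows one way a proxy or secure web gateway rule might flag unsanctioned AI traffic; the allowlisted hostnames and the policy shape are illustrative assumptions, not a ready-made control.

```python
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-egress-policy")

# Hypothetical allowlist maintained by the security team: AI endpoints that
# have passed vendor review and are configured not to train on user inputs.
APPROVED_AI_HOSTS = {
    "internal-llm.example.com",     # self-hosted model behind the firewall
    "api.approved-vendor.example",  # contractually reviewed SaaS endpoint
}

# Well-known public AI hosts; a request to one of these from inside the
# network is a signal of possible shadow AI.
KNOWN_AI_HOSTS = {"chat.openai.com", "claude.ai", "copilot.microsoft.com"}

def check_egress(url: str, user: str) -> bool:
    """Allow approved AI traffic; log and block unsanctioned AI endpoints."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return True
    if host in KNOWN_AI_HOSTS:
        log.warning("Possible shadow AI: %s attempted to reach %s", user, host)
        return False
    return True  # non-AI traffic is outside the scope of this policy
```

Logging blocked attempts also doubles as a discovery mechanism: repeated requests to unapproved AI endpoints show the security team exactly where shadow AI is taking root.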

Above all, leaders must understand that using AI responsibly is a business problem, not just a technical challenge. After all, generative AI democratizes the use of advanced technology in the workplace to the extent that any knowledge worker can get value from it. But it also means that, as workers hurry to make their lives easier, unsanctioned use of AI at work risks spiraling out of control. No matter where you stand in the great debate around AI, if you’re a business leader, it’s essential to extend your governance policies to cover the use of all internal and external AI tools.
