As the adoption of generative AI (GenAI) soars, so too does the risk of insider threats. This puts even more pressure on businesses to rethink security and confidentiality policies.

In just a few years, artificial intelligence (AI) has radically changed the world of work. Some 61% of knowledge workers now use GenAI tools, particularly OpenAI’s ChatGPT, in their daily routines. At the same time, business leaders, often driven in part by a fear of missing out, are investing billions in tools powered by GenAI. It’s not just chatbots they’re investing in, either, but image synthesizers, voice-cloning software and even deepfake video technology for creating virtual avatars.

We’re still some way off from GenAI becoming indistinguishable from humans. Even if, or perhaps when, that happens, the ethical and cyber risks that come with it will only continue to grow. After all, when it becomes impossible to tell whether someone or something is real, the risk of people being unwittingly manipulated by machines surges.

GenAI and the risk of data leaks

Much of the conversation about security in the era of GenAI concerns its implications for social engineering and other external threats. But infosec professionals must not overlook how the technology can greatly expand the insider threat attack surface, too.

Given the rush to adopt GenAI tools, many companies have already found themselves in trouble. Just last year, Samsung reportedly banned the use of GenAI tools in the workplace after employees were suspected of sharing sensitive data in conversations with OpenAI’s ChatGPT.

By default, OpenAI records and archives all conversations, potentially for use in training future generations of the large language model (LLM). Because of this, sensitive information, such as corporate secrets, could resurface later in response to a user prompt. Back in December, researchers testing ChatGPT’s susceptibility to leaking data uncovered a simple technique for extracting the LLM’s training data, proving the concept. OpenAI may have since patched that particular vulnerability, but it’s unlikely to be the last.
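Short of banning these tools outright, one practical mitigation is to strip obvious secrets from prompts before they ever leave the network. The snippet below is a minimal sketch of that idea using a regex-based filter; the patterns, function name and placeholder tokens are illustrative, not a substitute for a proper data loss prevention (DLP) policy:

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated DLP
# engine and rules tuned to the organization's own secrets and identifiers.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholder tokens
    before the prompt is forwarded to an external GenAI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this email from jane.doe@example.com sent with key sk-test1234567890abcdef"
    print(redact_prompt(raw))
```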

With the unsanctioned use of GenAI in business growing fast, IT must step in to seek the right balance between innovation and cyber risk. Security teams might already be familiar with the term Shadow IT, but the new threat on the block is Shadow AI or the use of AI outside the organization’s governance. To prevent that from happening, IT teams need to revisit their policies and take every possible step to reinforce the responsible use of these tools.
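Detecting shadow AI usually starts with visibility into outbound traffic. Below is a minimal sketch that scans web proxy logs for connections to well-known GenAI endpoints; the log format, file path and domain list are assumptions that would need to match your own proxy and acceptable-use policy:

```python
from collections import Counter

# Assumed denylist of public GenAI endpoints; extend to match your policy.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to known GenAI domains.
    Assumes a space-delimited proxy log: timestamp user domain status."""
    hits = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 3 and fields[2] in GENAI_DOMAINS:
                hits[fields[1]] += 1
    return hits

if __name__ == "__main__":
    for user, count in find_shadow_ai("proxy.log").most_common(10):
        print(f"{user}: {count} GenAI requests")
```

A report like this doesn’t block anything on its own, but it gives security teams the evidence they need to focus policy, training and technical controls where shadow AI is actually happening.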

Proprietary AI systems carry unique risks

An obvious way to address these threats might be to build a proprietary AI solution tailored to the specific business use case. Businesses may build a model from scratch or, more likely, start with an open-source foundation model. Neither option is without risk. However, while the risks that come with open-source models tend to be higher, those concerning proprietary AI systems are a little more nuanced, and every bit as serious.

As AI-powered functions gain traction in business software applications, they also become a more appetizing target for malicious actors — including internal ones. Data poisoning, where attackers tamper with the data used to train AI models, is one such example. The insider threat is real, too, especially if the data in question is widely accessible throughout the organization, as is often the case with customer service chats, product descriptions or brand guidelines. If you’re using such data to train a proprietary AI model, then you need to make sure its integrity hasn’t been compromised, either intentionally or unintentionally.
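A basic defense against both deliberate poisoning and accidental drift is to snapshot the training data and verify it before every training run. The sketch below hashes each file under a data directory and compares the result against a previously recorded manifest; the directory and manifest paths are placeholders:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Compute a SHA-256 digest for every file in the training data set."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(Path(data_dir).rglob("*"))
        if path.is_file()
    }

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return the files whose digests no longer match the recorded manifest."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = build_manifest(data_dir)
    return [f for f, digest in recorded.items() if current.get(f) != digest]

if __name__ == "__main__":
    # First run: record a trusted snapshot of the data set.
    Path("training_manifest.json").write_text(json.dumps(build_manifest("training_data")))
    # Before each training run: fail closed if anything has changed.
    changed = verify_manifest("training_data", "training_manifest.json")
    if changed:
        raise SystemExit(f"Training data integrity check failed: {changed}")
```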

Malicious insiders with access to proprietary AI models may also attempt to reverse engineer them. Such insiders may find it easier to cover their tracks, too: proprietary systems often rely on custom logging and monitoring solutions that might not be as hardened as their mainstream counterparts, making audit trails easier for someone with inside knowledge to bypass.
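One way to make custom audit trails harder to quietly rewrite is to chain entries cryptographically, so that editing any record invalidates every hash that follows it. A minimal sketch, assuming a simple JSON-lines log; the field names and storage format are illustrative:

```python
import hashlib
import json
import time

def _entry_hash(record: dict) -> str:
    # Deterministic digest of an entry (excluding its own "hash" field).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(log_path: str, event: dict) -> None:
    """Append an audit event whose hash covers the previous entry,
    making retroactive edits detectable."""
    prev_hash = "0" * 64
    try:
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                prev_hash = json.loads(line)["hash"]
    except FileNotFoundError:
        pass
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = _entry_hash(record)
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

def verify_chain(log_path: str) -> bool:
    """Recompute every digest and confirm the recorded chain is intact."""
    prev_hash = "0" * 64
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            record = json.loads(line)
            claimed = record.pop("hash")
            if record["prev"] != prev_hash or _entry_hash(record) != claimed:
                return False
            prev_hash = claimed
    return True
```

Shipping these entries to an append-only store outside the AI team’s control closes the loop: the people operating the model shouldn’t be the only ones able to attest to what it logged.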

Secure your AI software supply chains

The exploitation of model vulnerabilities presents a serious risk. Whereas open-source models may be patched quickly through community involvement, the same can’t be said of the hidden flaws that a proprietary model might have. To mitigate these risks, it’s vital that IT leaders secure their AI software supply chains. Transparency and oversight are the only ways to ensure that innovation in AI doesn’t add unacceptable risk to your business.
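At a minimum, that means knowing exactly which model artifacts enter your environment. The sketch below pins a downloaded model file to an expected SHA-256 digest and refuses to proceed on a mismatch; the file path and digest are placeholders for values you would keep in your own inventory of approved artifacts:

```python
import hashlib

# Placeholder digest; in practice this comes from a signed, internally
# maintained inventory of approved model artifacts.
PINNED_SHA256 = "0" * 64

def verify_model_artifact(path: str, expected_sha256: str) -> None:
    """Hash the model file in chunks and raise if it doesn't match the pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        for chunk in iter(lambda: artifact.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise RuntimeError(f"Model artifact {path} failed integrity check")

if __name__ == "__main__":
    verify_model_artifact("models/foundation-model.bin", PINNED_SHA256)
    # Only load and serve the model once the check has passed.
```

The same discipline applies to datasets, fine-tuning adapters and the open-source libraries wrapped around them: pin versions, verify digests and keep a record of who approved each component.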
