In recent months, we’ve seen government and business leaders put an increased focus on securing AI models. If generative AI is the next big platform to transform the services and functions on which society as a whole depends, ensuring that technology is trusted and secure must be businesses’ top priority. While generative AI adoption is in its nascent stages, we must establish effective strategies to secure it from the onset.

The IBM Institute for Business Value found that despite 64% of CEOs facing significant pressure from investors, creditors and lenders to accelerate the adoption of generative AI, 60%  are not yet developing a consistent, enterprise-wide approach to generative AI. In fact, 84% are concerned about widespread or catastrophic cybersecurity attacks that generative AI adoption could lead to.

As organizations determine how to best incorporate generative AI into their business models and assess the security risks that the technology could introduce, it’s worth examining top attacks that threat actors could execute against AI models. While only a small number of real-world attacks on AI have been reported, IBM X-Force Red has been testing models to determine the types of attacks that are most likely to appear in the wild. To understand the potential risks associated with generative AI that organizations need to mitigate as they adopt the technology, this blog will outline some of the attacks adversaries are likely to pursue, including prompt injection, data poisoning, model evasion, model extraction, inversion and supply chain attacks.

Security attack types depicted as they rank on level of difficulty for a threat actor to execute and their potential impact on a business

Prompt injection

Prompt injection attacks manipulate Large Language Models (LLMs) by crafting malicious inputs that seek to override the system prompt (initial instructions for the AI provided by the developer). This can result in jailbreaking a model to perform unintended actions, circumventing content policies to generate misleading or harmful responses, or revealing sensitive information.

LLMs are biased in favor of obeying the user and are susceptible to the same trickery as humans, akin to social engineering. Hence, it’s trivially easy to circumnavigate content filters in place, often as easy as asking the LLM to “pretend it’s a character,” or to “play a game.” This attack can result in reputational damage, through the generation of harmful content; service degradation, by crafting prompts that trigger excessive resource utilization; and intellectual property or data theft, through revealing a confidential system prompt.

Data poisoning

Data poisoning attacks consist of adversaries tampering with data used to train the AI models to introduce vulnerabilities, biases, or change the model’s behavior. This can potentially compromise the model’s effectiveness, security or integrity. Assuming models are being trained on closed data sets, this requires a high level of access to the data pipeline, either via access from a malicious insider, or sophisticated privilege escalation through alternative means. However, models trained on open-source data sets would be an easier target for data poisoning as attackers have more direct access to the public source.

The impact of this attack could range anywhere from misinformation attempts to Die Hard 4.0, depending on the threat actor’s objective, fundamentally compromising the integrity and effectiveness of a model.

Model evasion

A model evasion attack would allow attackers to modify inputs into the AI model in a way that causes it to misclassify or misinterpret them, changing its intended behavior. This can be done visibly to a human observer (e.g., putting small stickers on stop signs to cause self-driving cars to ignore them) or invisibly (e.g., changing individual pixels in an image by adding noise that tricks an object recognition model).

Depending on the complexity of the AI model, this attack could vary in intricacy and executability. What’s the format and size of the model’s inputs and outputs? Does the attacker have unrestricted access to them?  Depending on the purpose of the AI system, a successful model evasion attack could have a significant impact on the business. For example, if the model is being used for security purposes, or to make decisions of significance like loan approvals, evasion of intended behavior could cause significant damage.

However, given the variables here, attackers opting for the path of least resistance are unlikely to use this tactic to advance their malicious objective.

Model extraction

Model extraction attacks aim at stealing the intellectual property (IP) and behavior of an AI model. They’re performed by querying it extensively and monitoring the inputs and outputs to understand its structure and decisions, before attempting to replicate it. These attacks, however, require extensive resources and knowledge to execute, and as the AI model’s complexity increases, so does the level of difficulty to execute this attack.

While the loss of IP could have significant competitive implications, if attackers have the skills and resources to perform model extraction and replication successfully, it’s likely easier for them to simply download an open-source model and customize it to behave similarly. Besides, techniques like strict access controls, monitoring and rate limiting significantly hamper adversarial attempts without direct access to the model.

Inversion attacks

Whereas extraction attacks aim to steal the model behavior itself, inversion attacks aim to find out information on the training data of a model, despite only having access to the model and its outputs. Model inversion allows an attacker to reconstruct the data a model has been trained on, and membership inference attacks can determine whether specific data was used in training the model.

The complexity of the model and the extent of information output from it would influence the level of complexity in executing such an attack. For example, some inference attacks exploit the fact a model outputs a confidence value as well as a result. In this case, attackers can attempt to reconstruct an input that maximizes the returned confidence value. That said, attackers are unlikely to have the unrestricted access required to a model or its outputs to make this practical in the wild. However, the potential for data leakage and privacy violations carries risks.

Supply chain attacks

AI models are more integrated into business processes, SaaS apps, plugins and APIs than ever before, and attackers can target vulnerabilities in these connected services to compromise the behavior or functionality of the models. Plus, businesses are utilizing freely available models from repositories like Hugging Face to get a head-start on AI development, which could embed malicious functionality like trojans and backdoors.

Successful exploitation of connected integrations requires extensive knowledge of the architecture, and often exploitation of multiple vulnerabilities. Although these attacks would require a high level of sophistication, they are also difficult to detect and could have a wide impact on organizations lacking an effective detection and response strategy.

Given the interconnected nature of AI systems and increasing their involvement in critical business processes, safeguarding against supply chain attacks should be a high priority. Vetting third-party components, monitoring for vulnerabilities and anomalies, and implementing DevSecOps best practices are crucial.

Securing AI

IBM recently introduced the IBM Framework for Securing AI — helping customers, partners and organizations around the world better prioritize the defensive approaches that are most important to secure their generative AI initiatives against anticipated attacks. The more organizations understand what type of attacks are possible against AI, the more they can enhance their cyber preparedness by building effective defense strategies. And while it will require time for cyber criminals to invest in the resources necessary to attack AI models at scale, security teams have a rare time advantage — an opportunity to secure AI, before attackers place the technology at the center of their target scope. No organization is exempt from the need to establish a strategy for securing AI. This includes both models they’re actively investing in to optimize their business and tools introduced as shadow AI by employees seeking to enhance their productivity.

If you want to learn more about securing AI, and how AI can enhance the time and talent of your security teams, read our authoritative guide.

More from Risk Management

Researchers develop malicious AI ‘worm’ targeting generative AI systems

2 min read - Researchers have created a new, never-seen-before kind of malware they call the "Morris II" worm, which uses popular AI services to spread itself, infect new systems and steal data. The name references the original Morris computer worm that wreaked havoc on the internet in 1988.The worm demonstrates the potential dangers of AI security threats and creates a new urgency around securing AI models.New worm utilizes adversarial self-replicating promptThe researchers from Cornell Tech, the Israel Institute of Technology and Intuit, used what’s…

What should Security Operations teams take away from the IBM X-Force 2024 Threat Intelligence Index?

3 min read - The IBM X-Force 2024 Threat Intelligence Index has been released. The headlines are in and among them are the fact that a global identity crisis is emerging. X-Force noted a 71% increase year-to-year in attacks using valid credentials.In this blog post, I’ll explore three cybersecurity recommendations from the Threat Intelligence Index, and define a checklist your Security Operations Center (SOC) should consider as you help your organization manage identity risk.The report identified six action items:Remove identity silosReduce the risk of…

Obtaining security clearance: Hurdles and requirements

3 min read - As security moves closer to the top of the operational priority list for private and public organizations, needing to obtain a security clearance for jobs is more commonplace. Security clearance is a prerequisite for a wide range of roles, especially those related to national security and defense.Obtaining that clearance, however, is far from simple. The process often involves scrutinizing one’s background, financial history and even personal character. Let’s briefly explore some of the hurdles, expectations and requirements of obtaining a…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today