March 19, 2024 By Ronda Swaney 3 min read

The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI.

In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST defines various adversarial machine learning (AML) tactics and cyberattacks, like prompt injection, and advises users on how to mitigate and manage them. AML tactics extract information about how machine learning (ML) systems behave to discover how they can be manipulated. That information is used to attack AI and its large language models (LLMs) to circumvent security, bypass safeguards and open paths to exploit.

What is prompt injection?

NIST defines two prompt injection attack types: direct and indirect. With direct prompt injection, a user enters a text prompt that causes the LLM to perform unintended or unauthorized actions. An indirect prompt injection is when an attacker poisons or degrades the data that an LLM draws from.

One of the best-known direct prompt injection methods is DAN, Do Anything Now, a prompt injection used against ChatGPT. DAN uses roleplay to circumvent moderation filters. In its first iteration, prompts instructed ChatGPT that it was now DAN. DAN could do anything it wanted and should pretend, for example, to help a nefarious person create and detonate explosives. This tactic evaded the filters that prevented it from providing criminal or harmful information by following a roleplay scenario. OpenAI, the developers of ChatGPT, track this tactic and update the model to prevent its use, but users keep circumventing filters to the point that the method has evolved to (at least) DAN 12.0.

Indirect prompt injection, as NIST notes, depends on an attacker being able to provide sources that a generative AI model would ingest, like a PDF, document, web page or even audio files used to generate fake voices. Indirect prompt injection is widely believed to be generative AI’s greatest security flaw, without simple ways to find and fix these attacks. Examples of this prompt type are wide and varied. They range from absurd (getting a chatbot to respond using “pirate talk”) to damaging (using socially engineered chat to convince a user to reveal credit card and other personal data) to wide-ranging (hijacking AI assistants to send scam emails to your entire contact list).

Explore AI cybersecurity solutions

How to stop prompt injection attacks

These attacks tend to be well hidden, which makes them both effective and hard to stop. How do you protect against direct prompt injection? As NIST notes, you can’t stop them completely, but defensive strategies add some measure of protection. For model creators, NIST suggests ensuring training datasets are carefully curated. They also suggest training the model on what types of inputs signal a prompt injection attempt and training on how to identify adversarial prompts.

For indirect prompt injection, NIST suggests human involvement to fine-tune models, known as reinforcement learning from human feedback (RLHF). RLHF helps models align better with human values that prevent unwanted behaviors. Another suggestion is to filter out instructions from retrieved inputs, which can prevent executing unwanted instructions from outside sources. NIST further suggests using LLM moderators to help detect attacks that don’t rely on retrieved sources to execute. Finally, NIST proposes interpretability-based solutions. That means that the prediction trajectory of the model that recognizes anomalous inputs can be used to detect and then stop anomalous inputs.

Generative AI and those who wish to exploit its vulnerabilities will continue to alter the cybersecurity landscape. But that same transformative power can also deliver solutions. Learn more about how IBM Security delivers AI cybersecurity solutions that strengthen security defenses.

More from Artificial Intelligence

How a new wave of deepfake-driven cybercrime targets businesses

5 min read - As deepfake attacks on businesses dominate news headlines, detection experts are gathering valuable insights into how these attacks came into being and the vulnerabilities they exploit.Between 2023 and 2024, frequent phishing and social engineering campaigns led to account hijacking and theft of assets and data, identity theft, and reputational damage to businesses across industries.Call centers of major banks and financial institutions are now overwhelmed by an onslaught of deepfake calls using voice cloning technology in efforts to break into customer…

Overheard at RSA Conference 2024: Top trends cybersecurity experts are talking about

4 min read - At a brunch roundtable, one of the many informal events held during the RSA Conference 2024 (RSAC), the conversation turned to the most popular trends and themes at this year’s events. There was no disagreement in what people presenting sessions or companies on the Expo show floor were talking about: RSAC 2024 is all about artificial intelligence (or as one CISO said, “It’s not RSAC; it’s RSAI”). The chatter around AI shouldn’t have been a surprise to anyone who attended…

3 recommendations for adopting generative AI for cyber defense

3 min read - In the past eighteen months, generative AI (gen AI) has gone from being the source of jaw-dropping demos to a top strategic priority in nearly every industry. A majority of CEOs report feeling under pressure to invest in gen AI. Product teams are now scrambling to build gen AI into their solutions and services. The EU and US are beginning to put new regulatory frameworks in place to manage AI risks.Amid all this commotion, hackers and other cybercriminals are hardly…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today