Researchers have created a new, never-seen-before kind of malware they call the “Morris II” worm, which uses popular AI services to spread itself, infect new systems and steal data. The name references the original Morris computer worm that wreaked havoc on the internet in 1988.

The worm demonstrates the potential dangers of AI security threats and creates a new urgency around securing AI models.

New worm utilizes adversarial self-replicating prompt

The researchers from Cornell Tech, the Israel Institute of Technology and Intuit, used what’s called an “adversarial self-replicating prompt” to create the worm. This is a prompt that, when fed into a large language model (LLM) (they tested it on OpenAI’s ChatGPT, Google’s Gemini and the open-source LLaVA model developed by researchers from the University of Wisconsin-Madison, Microsoft Research and Columbia University), tricks the model into creating an additional prompt. It triggers the chatbot into generating its own malicious prompts, which it then responds to by carrying out those instructions (similar to SQL injection and buffer overflow attacks).

The worm has two main capabilities:

1. Data exfiltration: The worm can extract sensitive personal data from infected systems’ email, including names, phone numbers, credit card details and social security numbers.

2. Spam propagation: The worm can generate and send spam and other malicious emails through compromised AI-powered email assistants, helping it spread to infect other systems.

The researchers successfully demonstrated these capabilities in a controlled environment, showing how the worm could burrow into generative AI ecosystems and steal data or distribute malware. The “Morris II” AI worm has not been seen in the wild, and the researchers did not test it on a publicly available email assistant.

They found they could use self-replicating prompts in both text prompts and embedded prompts in image files.

Learn more about prompt injection

Poisoned AI databases

In demonstrating the text prompt approach, the researchers wrote an email that included the adversarial text prompt, “poisoning” the database of the AI email assistant using retrieval-augmented generation (RAG), which enables the LLM to grab external data. The RAG got the email and sent it to the LLM provider, which generated a response that jailbroke the AI service, stole data from the emails and then infected new hosts when the LLM was used to reply to an email sent by another client.

When using an image, the researchers encoded the self-replicating prompt into the image, causing the email assistant to forward the message to other email addresses. The image serves as both the content (spam, scams, propaganda, disinformation or abuse material) and the activation payload that spreads the worm.

However, researchers say it represents a new type of cybersecurity threat as AI systems become more advanced and interconnected. The lab-created malware is just the latest event in the exposure of LLM-based chatbot services that reveals their vulnerability to being exploited for malicious cyberattacks.

OpenAI has acknowledged the vulnerability and says it’s working on making its systems resistant to this kind of attack.

The future of AI cybersecurity

As generative AI becomes more ubiquitous, malicious actors could leverage similar techniques to steal data, spread misinformation or disrupt systems on a larger scale. It could also be used by foreign state actors to interfere in elections or foment social divisions.

We’re clearly entering into an era where AI cybersecurity tools (AI threat detection and other cybersecurity AI) have become a core and vital part of protecting systems and data from cyberattacks, while they also pose a risk when used by cyber attackers.

The time is now to embrace AI cybersecurity tools and secure the AI tools that could be used for cyberattacks.

More from Risk Management

Protecting your digital assets from non-human identity attacks

4 min read - Untethered data accessibility and workflow automation are now foundational elements of most digital infrastructures. With the right applications and protocols in place, businesses no longer need to feel restricted by their lack of manpower or technical capabilities — machines are now filling those gaps.The use of non-human identities (NHIs) to power business-critical applications — especially those used in cloud computing environments or when facilitating service-to-service connections — has opened the doors for seamless operational efficiency. Unfortunately, these doors aren’t the…

Cybersecurity dominates concerns among the C-suite, small businesses and the nation

4 min read - Once relegated to the fringes of business operations, cybersecurity has evolved into a front-and-center concern for organizations worldwide. What was once considered a technical issue managed by IT departments has become a boardroom topic of utmost importance. With the rise of sophisticated cyberattacks, the growing use of generative AI by threat actors and massive data breach costs, it is no longer a question of whether cybersecurity matters but how deeply it affects every facet of modern operations.The 2024 Allianz Risk…

Adversarial advantage: Using nation-state threat analysis to strengthen U.S. cybersecurity

4 min read - Nation-state adversaries are changing their approach, pivoting from data destruction to prioritizing stealth and espionage. According to the Microsoft 2023 Digital Defense Report, "nation-state attackers are increasing their investments and launching more sophisticated cyberattacks to evade detection and achieve strategic priorities."These actors pose a critical threat to United States infrastructure and protected data, and compromising either resource could put citizens at risk.Thankfully, there's an upside to these malicious efforts: information. By analyzing nation-state tactics, government agencies and private enterprises are…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today