Researchers have created a new, never-seen-before kind of malware they call the “Morris II” worm, which uses popular AI services to spread itself, infect new systems and steal data. The name references the original Morris computer worm that wreaked havoc on the internet in 1988.

The worm demonstrates the potential dangers of AI security threats and creates a new urgency around securing AI models.

New worm utilizes adversarial self-replicating prompt

The researchers from Cornell Tech, the Israel Institute of Technology and Intuit, used what’s called an “adversarial self-replicating prompt” to create the worm. This is a prompt that, when fed into a large language model (LLM) (they tested it on OpenAI’s ChatGPT, Google’s Gemini and the open-source LLaVA model developed by researchers from the University of Wisconsin-Madison, Microsoft Research and Columbia University), tricks the model into creating an additional prompt. It triggers the chatbot into generating its own malicious prompts, which it then responds to by carrying out those instructions (similar to SQL injection and buffer overflow attacks).

The worm has two main capabilities:

1. Data exfiltration: The worm can extract sensitive personal data from infected systems’ email, including names, phone numbers, credit card details and social security numbers.

2. Spam propagation: The worm can generate and send spam and other malicious emails through compromised AI-powered email assistants, helping it spread to infect other systems.

The researchers successfully demonstrated these capabilities in a controlled environment, showing how the worm could burrow into generative AI ecosystems and steal data or distribute malware. The “Morris II” AI worm has not been seen in the wild, and the researchers did not test it on a publicly available email assistant.

They found they could use self-replicating prompts in both text prompts and embedded prompts in image files.

Learn more about prompt injection

Poisoned AI databases

In demonstrating the text prompt approach, the researchers wrote an email that included the adversarial text prompt, “poisoning” the database of the AI email assistant using retrieval-augmented generation (RAG), which enables the LLM to grab external data. The RAG got the email and sent it to the LLM provider, which generated a response that jailbroke the AI service, stole data from the emails and then infected new hosts when the LLM was used to reply to an email sent by another client.

When using an image, the researchers encoded the self-replicating prompt into the image, causing the email assistant to forward the message to other email addresses. The image serves as both the content (spam, scams, propaganda, disinformation or abuse material) and the activation payload that spreads the worm.

However, researchers say it represents a new type of cybersecurity threat as AI systems become more advanced and interconnected. The lab-created malware is just the latest event in the exposure of LLM-based chatbot services that reveals their vulnerability to being exploited for malicious cyberattacks.

OpenAI has acknowledged the vulnerability and says it’s working on making its systems resistant to this kind of attack.

The future of AI cybersecurity

As generative AI becomes more ubiquitous, malicious actors could leverage similar techniques to steal data, spread misinformation or disrupt systems on a larger scale. It could also be used by foreign state actors to interfere in elections or foment social divisions.

We’re clearly entering into an era where AI cybersecurity tools (AI threat detection and other cybersecurity AI) have become a core and vital part of protecting systems and data from cyberattacks, while they also pose a risk when used by cyber attackers.

The time is now to embrace AI cybersecurity tools and secure the AI tools that could be used for cyberattacks.

More from Risk Management

How TikTok is reframing cybersecurity efforts

4 min read - You might think of TikTok as the place to go to find out new recipes and laugh at silly videos. And as a cybersecurity professional, TikTok’s potential data security issues are also likely to come to mind. However, in recent years, TikTok has worked to promote cybersecurity through its channels and programs. To highlight its efforts, TikTok celebrated Cybersecurity Month by promoting its cybersecurity focus and sharing cybersecurity TikTok creators.Global Bug Bounty program with HackerOneDuring Cybersecurity Month, the social media…

Roundup: The top ransomware stories of 2024

2 min read - The year 2024 saw a marked increase in the competence, aggression and unpredictability of ransomware attackers. Nearly all the key numbers are up — more ransomware gangs, bigger targets and higher payouts. Malicious ransomware groups also focus on critical infrastructure and supply chains, raising the stakes for victims and increasing the motivation to cooperate.Here are the biggest ransomware stories of 2024.Ransomware payments reach record highRansomware payments surged to record highs in 2024. In the first half of the year, victims…

83% of organizations reported insider attacks in 2024

4 min read - According to Cybersecurity Insiders' recent 2024 Insider Threat Report, 83% of organizations reported at least one insider attack in the last year. Even more surprising than this statistic is that organizations that experienced 11-20 insider attacks saw an increase of five times the amount of attacks they did in 2023 — moving from just 4% to 21% in the last 12 months.With insider threats on the rise, it’s critical for businesses to recognize the real dangers that originate from inside…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today