Large language model (LLM)-based generative AI chatbots like OpenAI’s ChatGPT took the world by storm this year. ChatGPT became mainstream by making the power of artificial intelligence accessible to millions.

The move inspired other companies (which had been working on comparable AI in labs for years) to introduce their own public LLM services, and thousands of tools based on these LLMs have emerged.

Unfortunately, malicious hackers moved quickly to exploit these new AI resources, using ChatGPT itself to polish and produce phishing emails. However, mainstream LLMs proved difficult to abuse because the major offerings from OpenAI, Microsoft and Google have guardrails that prevent their use for scams and other criminality.

As a result, a range of AI tools designed specifically for malicious cyberattacks have begun to emerge.

WormGPT: A smart tool for threat actors

Chatter about and promotion of LLM chatbots optimized for cyberattacks emerged on dark web forums in early July and, later, on the Telegram messaging service. The tools are being offered to would-be attackers, often on a subscription basis. They’re similar to popular LLMs but lack guardrails and are trained on data selected to enable attacks.

The leading brand among these malicious generative AI tools is WormGPT. It’s an AI module based on the GPT-J language model, developed in 2021, and is already being used in business email compromise (BEC) attacks and for other nefarious purposes.

Users can simply type instructions for the creation of fraud emails — for example, “Write an email coming from a bank that’s designed to trick the recipient into giving up their login credentials.”

The tool then produces a unique, sometimes clever and usually grammatically perfect email that’s far more convincing than what most BEC attackers could write on their own, according to some analysts. For example, independent cybersecurity researcher Daniel Kelley found that WormGPT was able to produce a scam email “that was not only remarkably persuasive but also strategically cunning.”

The alleged creator of WormGPT claimed that it was built on the open-source GPT-J language model developed by the research collective EleutherAI. And he’s reportedly working on Google Lens integration (enabling the chatbot to send pictures with text) and API access.

Until now, the most common way for people to identify fraudulent phishing emails was by their suspicious wording. Thanks to AI tools like WormGPT, that “defense” is effectively gone.
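
To see why wording-based detection fails against machine-polished text, consider a minimal sketch of the kind of keyword heuristic that older filters relied on. The patterns and threshold below are illustrative assumptions, not any real product’s rules:

```python
import re

# Illustrative scam phrasings that older filters keyed on (assumed for
# this sketch, not drawn from any real product). LLM-polished emails
# avoid these tells entirely.
SUSPICIOUS_PATTERNS = [
    r"\bverify your account\b",
    r"\burgent(ly)? (action|response)\b",
    r"\bkindly\b",                 # awkward phrasing common in older scams
    r"\bdear (customer|user)\b",   # generic greetings
    r"\byou have won\b",           # classic lottery-scam opener
]

def looks_suspicious(email_body: str, threshold: int = 2) -> bool:
    """Flag an email if it matches enough known scam phrasings."""
    hits = sum(
        1 for pattern in SUSPICIOUS_PATTERNS
        if re.search(pattern, email_body, re.IGNORECASE)
    )
    return hits >= threshold

# A machine-polished message sails straight past this filter.
polished = "Hi Dana, per our call, please review the attached invoice today."
print(looks_suspicious(polished))  # False: no awkward wording to catch
```

An LLM-generated email reads like routine business correspondence, so there is simply nothing for a filter like this to catch.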

A new world of criminal AI tools

WormGPT inspired copycat tools, most prominently FraudGPT, which is used for phishing emails, creating cracking tools and carding (a type of credit card fraud).

Other “brands” emerging in the shady world of criminal LLMs include DarkBERT, DarkBART and ChaosGPT. DarkBERT is actually a tool to combat cyber crime, developed by the South Korean company S2W Security and trained on dark web data, but it’s likely the tool has been co-opted for cyberattacks.

In general, these tools are used for boosting three aspects of cyberattacks:

  • Boosted phishing. Cyberattackers can use tools like WormGPT and FraudGPT to create a large number of perfectly worded, persuasive and clever phishing emails in multiple languages and automate their delivery at scale.
  • Boosted intelligence. Instead of manually researching details about potential victims, attackers can let the tools gather that information.
  • Boosted malware creation. Like ChatGPT, its nefarious imitators can write code. This means novice developers can create malware without the skills that used to be required.

The AI arms race

Malicious LLM tools do exist, but the threat they pose is still minimal. The tools are reportedly unreliable and require a lot of trial and error, and they’re expensive, costing hundreds of dollars per year to use. Skilled, unaided human attackers still represent the greatest threat by far. What these criminal LLMs really do is lower the barrier to entry for large numbers of unskilled attackers.

Still, it’s early days in the story of malicious cyberattack AI tools. Expect capabilities to go up and prices to come down.

The rise of malicious LLMs represents a new arms race between AI that attacks and AI that defends. AI-based security solutions top our list for defense against the growing threat of LLM-powered attacks:

  1. Use AI-based security solutions to detect and neutralize AI-based cyberattacks (see the sketch after this list).
  2. Use multifactor authentication (MFA).
  3. Integrate information about AI-boosted attacks into cybersecurity awareness training.
  4. Stay current on patching and updates.
  5. Stay on top of threat intelligence, keeping informed about the fast-moving world of LLM-based attacks.
  6. Revisit and optimize your incident response planning.
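
As a concrete illustration of the first recommendation, here is a minimal sketch of a learned classifier that scores a message’s content rather than its wording quirks. It assumes scikit-learn is installed, and the tiny inline dataset is purely illustrative; real deployments train far larger models on many more signals (headers, URLs, sender reputation):

```python
# A minimal sketch of recommendation 1: score messages with a learned
# model instead of fixed wording rules. The training data below is an
# illustrative assumption, not a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Please wire the outstanding invoice to the new account today",
    "Your mailbox is full, confirm your password to keep access",
    "Attached is the agenda for Thursday's project sync",
    "Lunch menu for the week is posted on the intranet",
]
train_labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

# TF-IDF features over word unigrams and bigrams, scored by a
# logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

incoming = "Finance flagged a payment issue; send the wire details now"
score = model.predict_proba([incoming])[0, 1]
print(f"phishing probability: {score:.2f}")
```

The point is the approach, not this toy model: a classifier that learns what phishing content looks like can flag a flawlessly written email that a wording checklist would pass.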

We all now live in a world where LLM-based generative AI tools are widely available. Cyberattackers are developing these capabilities to commit crimes faster, smarter, cheaper and with less skill required of the attacker.
