October 4, 2023 By Mike Elgan 3 min read

Large language model (LLM)-based generative AI chatbots like OpenAI’s ChatGPT took the world by storm this year. ChatGPT became mainstream by making the power of artificial intelligence accessible to millions.

The move inspired other companies (which had been working on comparable AI in labs for years) to introduce their own public LLM services, and thousands of tools based on these LLMs have emerged.

Unfortunately, malicious hackers moved quickly to exploit these new AI resources, using ChatGPT itself to polish and produce phishing emails. However, using mainstream LLMs proved difficult because the major LLMs from OpenAI, Microsoft and Google have guardrails to prevent their use for scams and criminality.

As a result, a range of AI tools designed specifically for malicious cyberattacks has begun to emerge.

WormGPT: A smart tool for threat actors

Chatter about and promotion of LLM chatbots optimized for cyberattacks emerged on Dark Web forums in early July and, later, on the Telegram messaging service. The tools are being offered to would-be attackers, often on a subscription basis. They’re similar to popular LLMs but without guardrails and trained on data selected to enable attacks.

The leading brand among these criminal generative AI tools is WormGPT. It’s an AI module based on GPT-J, an open-source language model released in 2021, and is already being used in business email compromise (BEC) attacks and for other nefarious purposes.

Users can simply type instructions for creating fraudulent emails — for example, “Write an email coming from a bank that’s designed to trick the recipient into giving up their login credentials.”

The tool then produces a unique, sometimes clever and usually grammatically perfect email that’s far more convincing than what most BEC attackers could write on their own, according to some analysts. For example, independent cybersecurity researcher Daniel Kelley found that WormGPT was able to produce a scam email “that was not only remarkably persuasive but also strategically cunning.”

The alleged creator of WormGPT claimed it was built on the open-source GPT-J language model developed by the research group EleutherAI. He’s also reportedly working on Google Lens integration (enabling the chatbot to send pictures with text) and API access.

Until now, the most common way for people to identify fraudulent phishing emails was their suspicious wording. Thanks to AI tools like WormGPT, that “defense” is rapidly disappearing.


A new world of criminal AI tools

WormGPT has inspired copycats, most prominently FraudGPT, which is used for phishing emails, creating cracking tools and carding (a type of credit card fraud).

Other “brands” emerging in the shady world of criminal LLMs include DarkBERT, DarkBART and ChaosGPT. DarkBERT is actually a tool to combat cyber crime, developed by the South Korean company S2W Security and trained on dark web data, but the tool has likely been co-opted for cyberattacks.

In general, these tools are used for boosting three aspects of cyberattacks:

  • Boosted phishing. Cyberattackers can use tools like WormGPT and FraudGPT to create a large number of perfectly worded, persuasive and clever phishing emails in multiple languages and automate their delivery at scale.
  • Boosted intelligence. Instead of manually researching details about potential victims, attackers can let the tools gather that information.
  • Boosted malware creation. Like ChatGPT, its nefarious imitators can write code. This means novice developers can create malware without the skills that used to be required.

The AI arms race

Malicious LLM tools do exist, but the threat they represent remains minimal so far. The tools are reportedly unreliable and require a lot of trial and error. And they’re expensive, costing hundreds of dollars per year to use. Skillful, unaided human attackers still represent the greatest threat by far. What these criminal LLMs really do is lower the barrier to entry for large numbers of unskilled attackers.

Still, it’s early days in the story of malicious cyberattack AI tools. Expect capabilities to go up and prices to come down.

The rise of malicious LLMs represents a new arms race between AI that attacks and AI that defends. Here are the top defenses against the growing threat of LLM-powered attacks:

  1. Use AI-based security solutions for threat detection and the neutralization of AI-based cyberattacks.
  2. Use Multi-Factor Authentication (MFA).
  3. Integrate information about AI-boosted attacks into cybersecurity awareness training.
  4. Stay current on patching and updates.
  5. Stay on top of threat intelligence, keeping informed about the fast-moving world of LLM-based attacks.
  6. Revisit and optimize your incident response planning.
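Of the steps above, MFA is the most concrete. As an illustration only (a minimal sketch, not a production implementation — secret, parameters and function name are all hypothetical), here is how the time-based one-time passwords (TOTP) behind most authenticator apps are computed per RFC 6238, using nothing but the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, t=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret as a base32 string (what the QR code
    encodes when you enroll a device in an authenticator app).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of completed time steps since the Unix epoch
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

A server verifies MFA by comparing the code the user submits against `totp(secret)` for the current time step (usually allowing one step of clock drift either way). The point for defenders: even a perfectly worded, LLM-generated phishing email that harvests a password still fails against a credential that expires every 30 seconds.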

We all now live in a world where LLM-based generative AI tools are widely available. Cyberattackers are developing these capabilities to commit crimes faster, smarter, cheaper and with less skill required of the attacker.
