Everybody in tech is talking about ChatGPT, the AI-based chatbot from OpenAI that writes convincing prose and usable code.

The trouble is that malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code, just like everybody else.

How does this powerful new category of tools affect the ability of criminals to launch cyberattacks, including social engineering attacks?

When every social engineering attack uses perfect English

ChatGPT is a public tool based on a language model created by the San Francisco-based company OpenAI. It uses machine learning to analyze human language so that it can respond with often uncanny fluency.

Intuitively, it’s clear how malicious actors who are marginal speakers of English could use ChatGPT to craft flawless English emails to trick your employees. In fact, it’s already happening.

In the past, if someone received a poorly worded, grammatically incorrect email claiming to be from the bank, it could be quickly identified and easily dismissed. Cybersecurity awareness training drove home this point — if an email sounds shady, odd, incomplete or erroneous, it’s probably not from the source claimed.

The rise of ChatGPT means cyber attackers with limited English skills can quickly create convincing messages in flawless English.

Off the ChatGPT guardrails

OpenAI has built some guardrails into ChatGPT to prevent its abuse, but these are easily overcome, especially for social engineering. A malicious actor can simply ask ChatGPT to write a scam email, then send that note with a malicious link or request attached.

I asked ChatGPT to “write a friendly but professional email saying there’s a problem with their account and to please call this number.”

Here is the email ChatGPT generated on the first attempt:

Dear [Customer Name],

I hope this email finds you well. I am writing to inform you that we have identified a problem with your account. We apologize for any inconvenience this may cause and we kindly request that you contact us at your earliest convenience to resolve the issue.

To ensure the matter is resolved as quickly and efficiently as possible, we kindly ask that you call our customer service number at [insert phone number here]. Our customer service team will be able to assist you in resolving the issue and answering any questions you may have.

We appreciate your prompt attention to this matter and thank you for your continued business with us. If you have any further questions or concerns, please do not hesitate to contact us.

Best regards,

[Your Name]

[Company Name]

This example is pretty good, and it took seconds to produce. Further prompts could tweak the wording, generate variations or produce alternate versions at scale.

The Cambridge-based cybersecurity firm Darktrace claims that ChatGPT has enabled an increase in AI-based social engineering attacks, making scams more complicated and effective. Malicious phishing emails, for example, have grown more complex and longer, with better punctuation, according to the company.

It turns out that ChatGPT’s default “tone” is bland, officious and grammatically correct — just like most customer-facing corporate communications.

But there are much more subtle and surprising ways generative AI tools can help the bad guys.

The criminals are learning

Check Point Research found that dark web message boards now host numerous active conversations about how to exploit ChatGPT for social engineering. The firm also said criminals in unsupported countries are bypassing restrictions to gain access and are experimenting with ways to take advantage of the tool.

ChatGPT can help attackers bypass detection tools through prolific generation of “creative” variation. A cyber attacker can use it to create not one but a hundred variants of the same message, evading spam filters that look for repeated content.
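A toy sketch shows why this variation defeats repetition-based filtering: a naive filter that fingerprints message bodies catches verbatim resends but never sees an AI-reworded variant as a repeat. (This is an illustrative simplification; real filters use fuzzier matching than an exact hash.)

```python
import hashlib

def fingerprint(message: str) -> str:
    """Exact-match fingerprint, as a naive repetition filter might compute."""
    return hashlib.sha256(message.strip().lower().encode()).hexdigest()

# Two verbatim copies of a scam message share one fingerprint...
copy_a = "There is a problem with your account. Please call us."
copy_b = "There is a problem with your account. Please call us."
assert fingerprint(copy_a) == fingerprint(copy_b)

# ...but an AI-reworded variant produces a different fingerprint,
# so a filter keyed on exact repetition never flags it as a repeat.
variant = "We've detected an issue with your account; kindly phone our team."
assert fingerprint(copy_a) != fingerprint(variant)
```

This is why generating a hundred distinct wordings of one scam is more than a cosmetic trick: each variant looks unique to any defense that keys on identical content.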

It can do something similar in the malware code creation process, churning out polymorphic malware that’s harder to detect. ChatGPT can also quickly explain what’s going on with code, which is a powerful aid to malicious actors hunting for vulnerabilities.

While ChatGPT and related tools make us think of AI-generated written communication, other AI tools (like the one from ElevenLabs) can generate perfect and authoritative-sounding spoken words that can imitate specific people. That voice on the phone that sounds like the CEO may well be a voice-mimicking tool.

And organizations can expect more sophisticated social engineering attacks delivering a one-two punch — a credible email with a follow-up phone call spoofing the sender’s voice, all with consistent and professional-sounding messaging.

ChatGPT can craft polished cover letters and resumes for a large number of fake applicants at scale, which scammers can then send to hiring managers as part of a scam.

And one of the most common ChatGPT-related scams is the fake ChatGPT tool. Exploiting the excitement around ChatGPT, attackers present fake websites as chatbots based on OpenAI’s GPT-3 or GPT-4 (the language models behind public tools like ChatGPT and Microsoft Bing) when, in fact, they’re scam sites designed to steal money and harvest personal data.

The cybersecurity company Kaspersky uncovered a widespread scam offering to bypass delays in the ChatGPT web client with a downloadable version, which, of course, contained a malicious payload.

It’s time to get smart about artificial intelligence

How to adapt to a world of AI-enabled attacks:

  • Actually use tools like ChatGPT in phishing simulations so participants get used to the improved quality and tone of AI-generated communications
  • Add effective generative AI awareness training to cybersecurity programs, and teach the many ways ChatGPT can be used to breach security
  • Fight fire with fire — use AI-based cybersecurity tools that use machine learning and natural language processing for threat detection, and to flag suspicious communications for human investigation
  • Use ChatGPT-based tools to detect when emails were written by generative AI tools. (OpenAI itself makes such a tool)
  • Always verify senders of emails, chats and texts
  • Stay in constant communication with other professionals in the industry and read widely to stay informed about emerging scams
  • And, of course, embrace zero trust.
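The flagging idea in the list above can be sketched at its simplest: score an email on how many social-engineering cues it combines and route high scorers to a human analyst. The cue lists and threshold below are illustrative assumptions; real tools use trained models, not keyword rules.

```python
import re

# Toy cue detectors (illustrative, not a production rule set):
# urgency language, and a request to call or click.
URGENCY = re.compile(r"earliest convenience|prompt attention|urgent|immediately", re.I)
ACTION = re.compile(r"call (?:our|this) (?:customer service )?number|click|verify your account", re.I)

def suspicion_score(email: str) -> int:
    """Count how many cue categories the email hits."""
    return int(bool(URGENCY.search(email))) + int(bool(ACTION.search(email)))

def flag_for_review(email: str, threshold: int = 2) -> bool:
    """Route the email to a human analyst when enough cues co-occur."""
    return suspicion_score(email) >= threshold

# The ChatGPT-generated sample earlier in this article combines urgency
# with a request to call a number, so it trips both cues and gets flagged.
sample = ("We kindly request that you contact us at your earliest convenience. "
          "Please call our customer service number at 555-0100.")
assert flag_for_review(sample)
assert not flag_for_review("See you at lunch tomorrow?")
```

The point of the sketch is the design, not the keywords: since AI-written scams are fluent and well punctuated, defenses must key on intent signals (urgency plus a requested action) rather than on broken English.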

ChatGPT is just the beginning, and that complicates matters. Over the remainder of the year, dozens of other similar chatbots that can be exploited for social engineering attacks are likely to become available to the public.

The bottom line is that the emergence of free, easy, public AI helps cyber attackers enormously, but the fix is better tools and better education — better cybersecurity all around.
