Everybody in tech is talking about ChatGPT, the AI-based chatbot from OpenAI that writes convincing prose and usable code.

The trouble is that malicious cyber attackers can use generative AI tools like ChatGPT to craft convincing prose and usable code just like everybody else.

How does this powerful new category of tools affect the ability of criminals to launch cyberattacks, including social engineering attacks?

When every social engineering attack uses perfect English

ChatGPT is a public tool built on a large language model from the San Francisco-based company OpenAI. It uses machine learning to analyze human language and respond with often uncanny fluency.

Intuitively, it’s clear how malicious actors who are marginal speakers of English could use ChatGPT to craft flawless English emails to trick your employees. In fact, it’s already happening.

In the past, if someone received a poorly worded, grammatically incorrect email claiming to be from the bank, it could be quickly identified and easily dismissed. Cybersecurity awareness training drove home this point — if an email sounds shady, odd, incomplete or erroneous, it’s probably not from the source claimed.

The rise of ChatGPT means cyber attackers with limited English skills can quickly create convincing messages in flawless English.

Off the ChatGPT guardrails

OpenAI has built some guardrails into ChatGPT to prevent its abuse, but they are easily overcome, especially for social engineering. A malicious actor can simply ask ChatGPT to write a scam email, then send that note with a malicious link or request attached.

I asked ChatGPT to “write a friendly but professional email saying there’s a problem with their account and to please call this number.”

Here is the email ChatGPT generated on the first attempt:

Dear [Customer Name],

I hope this email finds you well. I am writing to inform you that we have identified a problem with your account. We apologize for any inconvenience this may cause and we kindly request that you contact us at your earliest convenience to resolve the issue.

To ensure the matter is resolved as quickly and efficiently as possible, we kindly ask that you call our customer service number at [insert phone number here]. Our customer service team will be able to assist you in resolving the issue and answering any questions you may have.

We appreciate your prompt attention to this matter and thank you for your continued business with us. If you have any further questions or concerns, please do not hesitate to contact us.

Best regards,

[Your Name]

[Company Name]

This example is convincing and took seconds to produce. Further prompting could refine the tone or generate countless differently worded variations at scale.
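To make "variations at scale" concrete, here is a minimal sketch of how an authorized phishing-simulation program (the defensive use this article later recommends) might script that generation against the OpenAI chat API. The model name, prompt wording and helper functions are my own illustrative assumptions, not anything from the article or a specific product.

```python
# Sketch: scripted generation of differently worded emails for an
# *authorized* security-awareness phishing simulation.
# "gpt-4o-mini" and the prompt text are illustrative assumptions.

def build_prompt(scenario: str, variant: int) -> str:
    """Build a prompt asking for one differently worded version of a
    simulation email for the given scenario."""
    return (
        f"Write variation #{variant} of a friendly but professional email "
        f"for a security-awareness phishing simulation. Scenario: {scenario}. "
        "Use different wording than previous variations."
    )

def generate_variants(scenario: str, n: int = 3) -> list[str]:
    """Call the OpenAI chat API once per variant.

    Requires the `openai` package and an OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI  # optional dependency, imported lazily
    client = OpenAI()
    variants = []
    for i in range(1, n + 1):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute your own
            messages=[{"role": "user", "content": build_prompt(scenario, i)}],
        )
        variants.append(resp.choices[0].message.content)
    return variants
```

Each call yields a freshly worded message, which is exactly why training participants benefit from seeing many of them rather than one canonical template.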

The Cambridge-based cybersecurity firm Darktrace reports that ChatGPT has enabled an increase in AI-based social engineering attacks. AI is enabling more complicated and effective scams: malicious phishing emails, for example, have grown longer, more complex and better punctuated, according to the company.

It turns out that ChatGPT's default tone is bland, officious and impeccable in grammar and punctuation, just like most customer-facing corporate communications.

But there are much more subtle and surprising ways generative AI tools can help the bad guys.

The criminals are learning

Check Point Research found that dark web message boards now host numerous active conversations about exploiting ChatGPT for social engineering. The firm also reported that criminals in unsupported countries are bypassing restrictions to gain access and are experimenting with ways to take advantage of the tool.

ChatGPT can also help attackers bypass detection tools. It enables the prolific generation of "creative" variation: a cyber attacker can use it to create a hundred different versions of the same message, each worded differently, evading spam filters that look for repeated messages.

It can do something similar in malware creation, churning out polymorphic code that is harder to detect. ChatGPT can also quickly explain what a piece of code does, a powerful aid for malicious actors hunting for vulnerabilities.

While ChatGPT and related tools make us think of AI-generated written communication, other AI tools (like the one from ElevenLabs) can generate perfect and authoritative-sounding spoken words that can imitate specific people. That voice on the phone that sounds like the CEO may well be a voice-mimicking tool.

And organizations can expect more sophisticated social engineering attacks delivering a one-two punch — a credible email with a follow-up phone call spoofing the sender’s voice, all with consistent and professional-sounding messaging.

ChatGPT can also craft polished cover letters and resumes at scale, which scammers can then send to hiring managers as part of fake-applicant schemes.

And one of the most common ChatGPT-related scams involves fake ChatGPT tools. Exploiting the excitement around ChatGPT, attackers present scam websites as chatbots based on OpenAI’s GPT-3 or GPT-4 (the language models behind public tools like ChatGPT and Microsoft Bing) when in fact they are designed to steal money and harvest personal data.

The cybersecurity company Kaspersky uncovered a widespread scam offering to bypass wait times in the ChatGPT web client with a downloadable version, which, of course, contained a malicious payload.

It’s time to get smart about artificial intelligence

How to adapt to a world of AI-enabled attacks:

  • Use tools like ChatGPT in phishing simulations so participants get used to the higher quality and polished tone of AI-generated communications
  • Add effective generative AI awareness training to cybersecurity programs, and teach the many ways ChatGPT can be used to breach security
  • Fight fire with fire — use AI-based cybersecurity tools that use machine learning and natural language processing for threat detection, and to flag suspicious communications for human investigation
  • Use ChatGPT-based tools to detect when emails were written by generative AI tools. (OpenAI itself makes such a tool)
  • Always verify senders of emails, chats and texts
  • Stay in constant communication with other professionals in the industry and read widely to stay informed about emerging scams
  • And, of course, embrace zero trust.
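As a rough illustration of the "fight fire with fire" idea above, the toy scorer below mimics, in a few lines, the kind of lexical signal an NLP-based filter weighs before flagging a message for human review. The keyword lists, weights and threshold are purely illustrative assumptions of mine; production tools rely on trained models, not hand-picked word lists.

```python
# Toy sketch of lexical scoring for suspicious email traits.
# The word lists and threshold below are illustrative assumptions only.

URGENCY = {"urgent", "immediately", "verify", "suspended", "earliest"}
ACTION = {"call", "click", "login", "confirm", "contact"}

def suspicion_score(email: str) -> float:
    """Score an email by counting urgency and call-to-action cues."""
    words = set(email.lower().replace(",", " ").replace(".", " ").split())
    urgency_hits = len(words & URGENCY)
    action_hits = len(words & ACTION)
    # Leftover template placeholders are a strong tell in mass-generated scams.
    has_placeholder = "[insert phone number" in email.lower()
    return urgency_hits + action_hits + (2 if has_placeholder else 0)

def flag_for_review(email: str, threshold: float = 2) -> bool:
    """Route the message to a human investigator when the score is high."""
    return suspicion_score(email) >= threshold
```

Note that the sample scam email generated earlier in this article ("kindly request that you contact us at your earliest convenience… call our customer service number") trips exactly these cues, which is why flagging for human review, rather than auto-blocking, is the sensible action at this level of evidence.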

ChatGPT is just the beginning, and that complicates matters. Over the remainder of the year, dozens of similar chatbots that can be exploited for social engineering attacks are likely to become available to the public.

The bottom line is that the emergence of free, easy, public AI helps cyber attackers enormously, but the fix is better tools and better education — better cybersecurity all around.
