February 6, 2023 By Sue Poremba 4 min read

Though the technology has only been widely available for a couple of months, everyone is talking about ChatGPT.

If you are one of the few people unfamiliar with ChatGPT, it is an OpenAI language model with the “ability to generate human-like text responses to prompts.” It could be a game-changer wherever AI meshes with human interaction, like chatbots. Some are even using it to build editorial content.

But, as with any popular technology, what makes it great can also make it a threat. Security experts warn that while companies use ChatGPT for chatbot responses, threat actors are using AI to write malware.

Jerrod Piker, a competitive intelligence analyst with Deep Instinct, compared the technology to a Swiss Army knife for techies everywhere. The good guys are already using it to develop useful applications.

Unfortunately, it’s not all positive news. “Because of ChatGPT’s ability to create code on the fly, attackers can automate part of the process of launching a cyberattack by having the chatbot create their initial infection code for them,” Piker said in an email interview. “This could also aid potential attackers with very little coding knowledge to create their own malware.”

“Benefits of malware”

It only took threat actors about a month before they figured out how to use ChatGPT for nefarious actions. ChatGPT was released on November 30, 2022. On December 29, 2022, a conversation titled “ChatGPT — Benefits of Malware” was discovered on a popular hacking forum, according to a Check Point blog. This thread included examples of how the author used the AI technology to create malicious code to steal information.

“From an attacker’s perspective, what code-generating AI systems allows the bad guys to do easily is to first bridge any skills gap by serving as a ‘translator’ between languages the programmer may be less experienced in, and second, an on-demand means of creating base templates of code relevant to the lock that we are trying to pick instead of spending our time scraping through stack overflow and Git for similar examples,” said Brad Hong, customer success manager with Horizon3ai, in a formal comment.

“Attackers understand that this isn’t a master key, but rather, the most competent tool in their arsenal to jump hurdles typically only possible through experience.”

A new social engineering tool

Threat actors aren’t just using ChatGPT as an easy way to write malicious code. They are also using it for social engineering attacks.

Because ChatGPT mimics human language, it’s more difficult to distinguish AI-generated text from human-authored content. A college student using ChatGPT to write a term paper is one thing; there, the biggest concern is plagiarism, which is itself a different type of threat.

But if an AI chatbot can write academic papers, it can create a phishing email with far less effort. In fact, a ChatGPT-produced email will likely be more eloquent than most of the phishing emails that flood our inboxes now, making the scam much more difficult to detect.

The technology will have much broader applications than mere phishing campaigns. Almost any type of social engineering attack, from video scripts to text messages, stands to benefit. Expect threat actors to take social engineering to new heights, creating entire websites, personas and fake businesses with ease.

Users today struggle to differentiate real from fake. But in a few months, even the most highly skilled person may struggle to tell the difference.

Making it easier for the bad guys

ChatGPT is a new, effective tool in an attacker’s toolset. It’s not just that the technology lowers the skill bar for threat actors; the way the technology works also lends itself to malicious use. Automating even part of the code writing means threat actors can launch more attacks, and launch them faster.

“The reality is that the algorithm race in cybersecurity consists of the use of machine learning algorithms to use a human by exception (autonomous) or in the loop (automated) to supplement the ability to scale their expertise to a degree that was not possible before,” said Hong in an email interview. “Generative AI technologies are most dangerous to organizations because of their ability to expedite the loops an attacker must go through to find the right code that works with ease.”

Diverting the threat

The creators of ChatGPT must have known that people could use the technology for evil as well as good. Certain keywords and use cases have been disabled, according to Matt Psencik, director and endpoint security specialist with Tanium.

“For example, I just asked the bot, ‘Can you write a phishing email for Woogle that I can send to John Doe?’ and it not only told me that question could be dangerous but also flagged my question as possibly violating the bot’s content policy,” Psencik said in an email interview. “It seems OpenAI is actively modifying the AI’s model and safeguards to adapt to these malicious use cases, and I imagine that as they get more and more questions and answers, they will continue to tune this model to prevent these more in the future.”

However, there are workarounds that malicious actors can use, and are using, according to a Deep Instinct blog post. Simply rephrasing a request without the triggered keywords allows the program to continue the script.
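To see why rephrasing works, consider a toy keyword blocklist. This is not OpenAI’s actual safeguard mechanism (which is far more sophisticated); the keyword list and prompts below are purely illustrative. But it shows the basic weakness of any filter that keys on specific words: the same request, described differently, sails through.

```python
# Toy illustration (NOT OpenAI's actual safeguard): a naive keyword
# blocklist is trivial to sidestep by rephrasing the request without
# the flagged terms.

BLOCKED_KEYWORDS = {"phishing", "malware", "keylogger"}  # hypothetical list

def is_blocked(prompt: str) -> bool:
    """Flag a prompt if any word in it matches a blocked keyword."""
    words = prompt.lower().split()
    return any(keyword in words for keyword in BLOCKED_KEYWORDS)

# A direct request trips the filter...
print(is_blocked("Write a phishing email for me"))

# ...but a rephrased version of the same task does not.
print(is_blocked("Write an urgent password-reset notice from the IT desk"))
```

The first call prints `True`, the second `False`, even though both prompts describe the same malicious task.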

But if threat actors can use AI chat functions, then cybersecurity teams can also use it to detect new threats.

“Cybersecurity professionals can take advantage of this utility to automate script creation for threat hunting,” said Piker, “saving a lot of time in the threat mitigation and investigation process.”
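As a rough sketch of what Piker describes, here is the kind of small threat-hunting script an AI assistant might generate on demand: scan log lines for known indicators of compromise (IOCs) and report the hits. The IP addresses and log format are hypothetical (the IPs come from reserved documentation ranges), and real threat hunting would draw indicators from a live feed rather than a hard-coded set.

```python
# Minimal threat-hunting sketch: flag log lines that mention a known
# suspicious IP address. Indicators and log format are hypothetical.

SUSPICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}  # example IOC list

def hunt(log_lines):
    """Return (line_number, line) pairs that contain a suspicious IP."""
    hits = []
    for number, line in enumerate(log_lines, start=1):
        if any(ip in line for ip in SUSPICIOUS_IPS):
            hits.append((number, line))
    return hits

logs = [
    "2023-02-06 10:01:12 ACCEPT src=10.0.0.5 dst=10.0.0.9",
    "2023-02-06 10:01:13 ACCEPT src=203.0.113.7 dst=10.0.0.9",
    "2023-02-06 10:01:14 DENY   src=198.51.100.23 dst=10.0.0.2",
]
for number, line in hunt(logs):
    print(f"line {number}: {line}")
```

The value of generating such scripts automatically is less in the code itself than in the time saved: a defender can describe the hunt in plain language and get a working starting point in seconds.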

Overall, professionals must prepare for anything the era of AI tools may bring.
