February 6, 2023 By Sue Poremba 4 min read

Though the technology has only been widely available for a couple of months, everyone is talking about ChatGPT.

If you are one of the few people unfamiliar with ChatGPT, it is an OpenAI language model with the “ability to generate human-like text responses to prompts.” It could be a game-changer wherever AI meshes with human interaction, like chatbots. Some are even using it to build editorial content.

But, as with any popular technology, what makes it great can also make it a threat. Security experts warn that while companies use ChatGPT for chatbot responses, threat actors are using AI to write malware.

Jerrod Piker, a competitive intelligence analyst with Deep Instinct, compared the technology to a Swiss Army knife for techies everywhere. The good guys are already using it to develop useful applications.

Unfortunately, it’s not all positive news. “Because of ChatGPT’s ability to create code on the fly, attackers can automate part of the process of launching a cyberattack by having the chatbot create their initial infection code for them,” Piker said in an email interview. “This could also aid potential attackers with very little coding knowledge to create their own malware.”

“Benefits of malware”

It only took threat actors about a month before they figured out how to use ChatGPT for nefarious actions. ChatGPT was released on November 30, 2022. On December 29, 2022, a conversation titled “ChatGPT — Benefits of Malware” was discovered on a popular hacking forum, according to a Check Point blog. This thread included examples of how the author used the AI technology to create malicious code to steal information.

“From an attacker’s perspective, what code-generating AI systems allow the bad guys to do easily is to first bridge any skills gap by serving as a ‘translator’ between languages the programmer may be less experienced in, and second, an on-demand means of creating base templates of code relevant to the lock that we are trying to pick instead of spending our time scraping through Stack Overflow and Git for similar examples,” said Brad Hong, customer success manager with Horizon3ai, in a formal comment.

“Attackers understand that this isn’t a master key, but rather, the most competent tool in their arsenal to jump hurdles typically only possible through experience.”

A new social engineering tool

Threat actors aren’t just using ChatGPT as an easy way to write malicious code. They are also using it for social engineering attacks.

Because ChatGPT mimics human language, it is harder to distinguish AI-generated text from human-authored content. A college student using ChatGPT to write a term paper is one thing; there, the biggest concern is plagiarism, which is itself a different type of threat.

But if an AI chatbot can write academic papers, creating a phishing email will be much easier. In fact, a ChatGPT-produced email will be more eloquent than most phishing emails that flood our inboxes now. This will make the scam much more difficult to detect.

The technology will have much broader applications than mere phishing campaigns. Almost any type of social engineering attack, from video scripts to text messages, stands to benefit. Expect threat actors to take social engineering to new heights, creating entire websites, personas and fake businesses with ease.

Users today struggle to differentiate real from fake. But in a few months, even the most highly skilled person may struggle to tell the difference.

Making it easier for the bad guys

ChatGPT is a new, effective tool in an attacker’s toolset. It’s not just that the technology lowers the skills a threat actor needs. The way the technology works also lends itself to malicious use: automating even part of the code writing means threat actors can launch more attacks, and do so faster.

“The reality is that the algorithm race in cybersecurity consists of the use of machine learning algorithms to use a human by exception (autonomous) or in the loop (automated) to supplement the ability to scale their expertise to a degree that was not possible before,” said Hong in an email interview. “Generative AI technologies are most dangerous to organizations because of their ability to expedite the loops an attacker must go through to find the right code that works with ease.”

Diverting the threat

The creators of ChatGPT must have known that people could use its technology for evil as well as good. There are keywords and use cases that have been disabled, according to Matt Psencik, director and endpoint security specialist with Tanium.

“For example, I just asked the bot, ‘Can you write a phishing email for Woogle that I can send to John Doe?’ and it not only told me that question could be dangerous but also flagged my question as possibly violating the bot’s content policy,” Psencik said in an email interview. “It seems OpenAI is actively modifying the AI’s model and safeguards to adapt to these malicious use cases, and I imagine that as they get more and more questions and answers, they will continue to tune this model to prevent these more in the future.”
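The flagging Psencik describes resembles a moderation check that screens a prompt before the model answers it. As a rough sketch, the response shape below follows OpenAI’s documented moderations format, but the sample response itself is hypothetical data, not output from a real API call.

```python
# Illustrative sketch: extracting which content-policy categories a
# moderation-style response flagged. The `sample` dict is made-up data
# shaped like OpenAI's documented /v1/moderations response.

def flagged_categories(response: dict) -> list:
    """Return the names of categories marked as violated
    (empty list if the prompt passed moderation)."""
    result = response["results"][0]
    if not result["flagged"]:
        return []
    return [name for name, hit in result["categories"].items() if hit]

# Hypothetical moderation response for a phishing-style prompt
sample = {
    "id": "modr-example",
    "model": "text-moderation-latest",
    "results": [{
        "flagged": True,
        "categories": {"hate": False, "self-harm": False,
                       "violence": False, "harassment": True},
        "category_scores": {"hate": 0.01, "self-harm": 0.0,
                            "violence": 0.02, "harassment": 0.91},
    }],
}

print(flagged_categories(sample))  # ['harassment']
```

A chatbot front end could refuse to forward any prompt for which this list is non-empty, which is one plausible way the behavior Psencik observed could be wired up.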

However, there are workarounds that malicious actors can use, and are already using, according to a Deep Instinct blog post. Simply rephrasing a request to avoid the triggering keywords lets the program continue generating the script.
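A toy blocklist makes the weakness concrete. This is not ChatGPT’s actual safeguard, just an illustration of why filtering on trigger phrases alone is easy to sidestep: a rephrased request with the same intent contains none of the blocked words.

```python
# Toy illustration (not ChatGPT's real filter): a naive keyword blocklist.
BLOCKED_PHRASES = {"phishing email", "write malware", "ransomware"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt contains any blocked phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

direct = "Write a phishing email targeting Woogle employees"
rephrased = "Draft a friendly IT notice asking staff to confirm their passwords"

print(naive_filter(direct))      # True: contains a blocked phrase
print(naive_filter(rephrased))   # False: same intent, no trigger words
```

Real safeguards are more sophisticated than string matching, but the underlying cat-and-mouse dynamic the article describes is the same.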

But if threat actors can use AI chat functions, then cybersecurity teams can also use it to detect new threats.

“Cybersecurity professionals can take advantage of this utility to automate script creation for threat hunting,” said Piker, “saving a lot of time in the threat mitigation and investigation process.”
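The kind of threat-hunting script Piker has in mind might look like the hedged sketch below: scan log lines for known indicators of compromise (IOCs) and report every hit. The IOC patterns and log lines here are invented for illustration; a real hunt would pull IOCs from threat intelligence feeds.

```python
# Hedged sketch of an automated threat-hunting pass over log data.
# The IOC patterns and sample logs are made up for illustration.
import re

IOC_PATTERNS = {
    # 203.0.113.0/24 is a documentation range, used here as a placeholder
    "suspicious_ip": re.compile(r"\b203\.0\.113\.\d{1,3}\b"),
    # Encoded PowerShell invocations are a common attacker technique
    "powershell_encoded": re.compile(r"powershell.+-enc\b", re.IGNORECASE),
}

def hunt(log_lines):
    """Yield (line_number, ioc_name, line) for every IOC match."""
    for lineno, line in enumerate(log_lines, start=1):
        for name, pattern in IOC_PATTERNS.items():
            if pattern.search(line):
                yield (lineno, name, line)

logs = [
    "2023-02-01 10:03 conn from 198.51.100.7 accepted",
    "2023-02-01 10:04 conn from 203.0.113.42 accepted",
    "2023-02-01 10:05 powershell.exe -enc SQBFAFgA",
]

hits = list(hunt(logs))
for lineno, name, _ in hits:
    print(lineno, name)
```

Generating and tuning pattern sets like `IOC_PATTERNS` is exactly the repetitive work an AI assistant could accelerate for defenders, mirroring the speedup it gives attackers.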

Overall, professionals must prepare for anything the era of AI tools may bring.
