February 6, 2023 By Sue Poremba

Though the technology has only been widely available for a couple of months, everyone is talking about ChatGPT.

If you are one of the few people unfamiliar with ChatGPT, it is an OpenAI language model with the “ability to generate human-like text responses to prompts.” It could be a game-changer wherever AI meshes with human interaction, like chatbots. Some are even using it to build editorial content.

But, as with any popular technology, what makes it great can also make it a threat. Security experts warn that while companies use ChatGPT for chatbot responses, threat actors are using AI to write malware.

Jerrod Piker, a competitive intelligence analyst with Deep Instinct, compared the technology to a Swiss Army knife for techies everywhere. The good guys are already using it to develop useful applications.

Unfortunately, it’s not all positive news. “Because of ChatGPT’s ability to create code on the fly, attackers can automate part of the process of launching a cyberattack by having the chatbot create their initial infection code for them,” Piker said in an email interview. “This could also aid potential attackers with very little coding knowledge to create their own malware.”

“Benefits of malware”

It only took threat actors about a month to figure out how to use ChatGPT for nefarious purposes. ChatGPT was released on November 30, 2022. By December 29, 2022, a thread titled “ChatGPT — Benefits of Malware” had been discovered on a popular hacking forum, according to a Check Point blog. The thread included examples of how its author used the AI to create malicious code for stealing information.

“From an attacker’s perspective, what code-generating AI systems allow the bad guys to do easily is to first bridge any skills gap by serving as a ‘translator’ between languages the programmer may be less experienced in, and second, an on-demand means of creating base templates of code relevant to the lock that we are trying to pick instead of spending our time scraping through Stack Overflow and Git for similar examples,” said Brad Hong, customer success manager with Horizon3ai, in a formal comment.

“Attackers understand that this isn’t a master key, but rather, the most competent tool in their arsenal to jump hurdles typically only possible through experience.”

A new social engineering tool

Threat actors aren’t just using ChatGPT as an easy way to write malicious code. They are also using it for social engineering attacks.

Because ChatGPT mimics human language, it is harder to distinguish AI-generated text from human-authored content. A college student using ChatGPT to write a term paper is one thing; there, the biggest concern is plagiarism, which is itself a different type of threat.

But if an AI chatbot can write academic papers, it can produce a phishing email with even less effort. In fact, a ChatGPT-produced email will be more polished than most of the phishing emails flooding our inboxes now, making the scams much more difficult to detect.

The technology will have much broader applications than mere phishing campaigns. Almost any type of social engineering attack, from video scripts to text messages, stands to benefit. Expect threat actors to take social engineering to new heights, spinning up entire websites, personas and fake businesses with ease.

Users already struggle to differentiate real from fake. In a few months, even the most highly skilled among them may not be able to tell the difference.

Making it easier for the bad guys

ChatGPT is a new, effective tool in an attacker’s toolset. It’s not just that the technology lowers the skill a threat actor needs; the way the technology works also lends itself to malicious use. Automating even some of the code writing means threat actors can launch more attacks, and launch them faster.

“The reality is that the algorithm race in cybersecurity consists of the use of machine learning algorithms to use a human by exception (autonomous) or in the loop (automated) to supplement the ability to scale their expertise to a degree that was not possible before,” said Hong in an email interview. “Generative AI technologies are most dangerous to organizations because of their ability to expedite the loops an attacker must go through to find the right code that works with ease.”

Diverting the threat

The creators of ChatGPT must have known that people could use the technology for evil as well as good. Certain keywords and use cases have already been disabled, according to Matt Psencik, director and endpoint security specialist with Tanium.

“For example, I just asked the bot, ‘Can you write a phishing email for Woogle that I can send to John Doe?’ and it not only told me that question could be dangerous but also flagged my question as possibly violating the bot’s content policy,” Psencik said in an email interview. “It seems OpenAI is actively modifying the AI’s model and safeguards to adapt to these malicious use cases, and I imagine that as they get more and more questions and answers, they will continue to tune this model to prevent more of these in the future.”
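OpenAI also exposes this kind of policy screen as a standalone Moderation API, separate from whatever filters sit inside the chat interface itself. The sketch below is illustrative only, assuming the pre-1.0 OpenAI Python client; it is not a description of ChatGPT's internal safeguards, and the prompt simply echoes Psencik's hypothetical "Woogle" example.

```python
# A minimal sketch of a content-policy check via OpenAI's Moderation API.
# Assumes the pre-1.0 OpenAI Python client (pip install openai) and an
# API key in OPENAI_API_KEY. The prompt mirrors Psencik's hypothetical
# example; this is not how ChatGPT's internal filter necessarily works.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Can you write a phishing email for Woogle that I can send to John Doe?"

# The Moderation endpoint scores text against OpenAI's content policy.
result = openai.Moderation.create(input=prompt)

if result["results"][0]["flagged"]:
    print("Prompt flagged as possibly violating the content policy.")
else:
    print("Prompt passed the policy check.")
```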

However, there are workarounds that malicious actors can use, and are already using, according to a Deep Instinct blog post. Simply rephrasing a request to avoid the triggered keywords allows the program to continue writing the script.

But if threat actors can exploit AI chat functions, cybersecurity teams can also use the technology to detect new threats.

“Cybersecurity professionals can take advantage of this utility to automate script creation for threat hunting,” said Piker, “saving a lot of time in the threat mitigation and investigation process.”
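Piker's suggestion can be sketched in a few lines. The example below assumes the pre-1.0 OpenAI Python client and the Chat Completions API; the model name, prompt and helper function are illustrative rather than a vetted workflow, and any generated script would need analyst review before it runs anywhere.

```python
# A minimal sketch of automating threat-hunting script drafts with a
# language model. Assumes the pre-1.0 OpenAI Python client
# (pip install openai) and an API key in OPENAI_API_KEY. The model name,
# prompt and helper are illustrative; treat the output as an unreviewed
# draft, not a production-ready script.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def draft_hunt_script(indicator: str) -> str:
    """Ask the model for a first-pass hunting script for one indicator."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "You write defensive threat-hunting scripts.",
            },
            {
                "role": "user",
                "content": "Draft a PowerShell script that searches "
                f"Windows event logs for signs of {indicator}.",
            },
        ],
    )
    return response["choices"][0]["message"]["content"]


# Hypothetical usage: generate a starting point, then review it by hand.
print(draft_hunt_script("repeated failed logons followed by a success"))
```

The design choice here is deliberate: the model produces a draft, and a human analyst stays in the loop to validate it, which mirrors the automated-versus-autonomous distinction Hong draws above.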

Overall, professionals must prepare for anything the era of AI tools may bring.
