When ChatGPT and similar chatbots first became widely available, the concern in the cybersecurity world was how the AI technology could be used to launch cyberattacks. In fact, it didn’t take long for threat actors to figure out how to bypass the safety checks and use ChatGPT to write malicious code.

Now the tables have turned. Instead of using ChatGPT to cause cyber incidents, attackers have turned on the technology itself. According to Security Week, OpenAI, which developed the chatbot, confirmed a data breach in the system caused by a vulnerability in an open-source library the code relies on. The breach took the service offline until it was fixed.

An overnight success

ChatGPT’s popularity was evident from its release in late 2022. Everyone from writers to software developers wanted to experiment with the chatbot. Despite its imperfect responses (some of its prose was clunky or clearly plagiarized), ChatGPT quickly became the fastest-growing consumer app in history, reaching over 100 million monthly users by January 2023. Within a month of its release, approximately 13 million people were using the AI technology daily. Compare that to another extremely popular app, TikTok, which took nine months to reach similar user numbers.

One cybersecurity analyst compared ChatGPT to a Swiss Army knife, saying that the technology’s wide variety of useful applications is a big reason for its rapid early popularity.

The data breach

Whenever an app or technology becomes popular, it’s only a matter of time until threat actors target it. In ChatGPT’s case, the exploit came via a vulnerability in the open-source Redis client library, which allowed some users to see the chat history of other active users.
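
OpenAI’s post-incident report traced the bug to the redis-py client library: a request canceled at just the wrong moment could leave a pooled connection out of sync, so the next request on that connection received a response meant for someone else. The toy model below is not redis-py’s actual code — it is a deliberately simplified sketch of that class of bug, with all names invented for illustration:

```python
from collections import deque

class PipelinedConnection:
    """Toy model of a pooled client connection. Requests and
    responses are matched purely by order, so the client must
    read exactly one response per request it sends."""

    def __init__(self):
        self._pending = deque()

    def send(self, user_id: str) -> None:
        # Simulate the server replying; the reply sits on the
        # connection until the client reads it.
        self._pending.append(f"cached data for {user_id}")

    def read_response(self) -> str:
        return self._pending.popleft()

conn = PipelinedConnection()

# User A's request is canceled after the server replies but
# before the client reads the response...
conn.send("user_a")
# ...so A's reply is left sitting on the connection, which is
# then returned to the pool still holding stale data.

# User B is handed the same connection. The request/response
# pairing is now off by one, and B reads A's data.
conn.send("user_b")
print(conn.read_response())  # -> "cached data for user_a"
```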

Open-source libraries are used “to develop dynamic interfaces by storing readily accessible and frequently used routines and resources, such as classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, type specifications and values,” according to a definition from Heavy.AI. OpenAI uses Redis to cache user information for faster recall and access. Because thousands of contributors develop and access open-source code, it’s easy for vulnerabilities to open up and go unnoticed. Threat actors know this, which is why attacks on open-source libraries have increased by 742% since 2019.
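
To make the caching pattern concrete, here is a minimal sketch of how a service might cache user session data in Redis using the redis-py client. This illustrates the general technique, not OpenAI’s implementation; the key names and fields are hypothetical, and it assumes a Redis server running locally with the redis-py package installed:

```python
import json
import redis

# Assumes a Redis server on localhost:6379 (hypothetical setup).
cache = redis.Redis(host="localhost", port=6379, db=0)

def cache_user_session(user_id: str, session: dict, ttl_seconds: int = 300) -> None:
    # Store the session as JSON under a per-user key, with a TTL
    # so stale entries expire automatically.
    cache.setex(f"session:{user_id}", ttl_seconds, json.dumps(session))

def get_user_session(user_id: str) -> dict | None:
    # Fast path: serve from the cache if the key is still live.
    raw = cache.get(f"session:{user_id}")
    return json.loads(raw) if raw is not None else None

cache_user_session("user_123", {"name": "Ada", "plan": "plus"})
print(get_user_session("user_123"))  # -> {'name': 'Ada', 'plan': 'plus'}
```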

In the grand scheme of things, the ChatGPT exploit was minor, and OpenAI patched the bug within days of discovery. But even a minor cyber incident can cause a lot of damage.

However, that was only the surface-level incident. As OpenAI’s researchers dug deeper, they discovered that the same vulnerability was likely responsible for exposing some users’ payment information for a few hours before ChatGPT was taken offline.

“It was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time,” OpenAI said in a release about the incident.

AI, chatbots and cybersecurity

The data leakage in ChatGPT was addressed swiftly and apparently caused little damage; the impacted paying subscribers made up less than 1% of its users. However, the incident could be a harbinger of the risks that chatbots and their users will face in the future.

Already there are privacy concerns surrounding the use of chatbots. Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN that ChatGPT and chatbots are like the black box in an airplane. The AI technology stores vast amounts of data and then uses that information to generate responses to questions and prompts. And anything in the chatbot’s memory becomes fair game for other users.

For example, chatbots can record a single user’s notes on any topic and then summarize that information or search for more details. But if those notes include sensitive data (an organization’s intellectual property or sensitive customer information, for instance), that data enters the chatbot’s library, and the user no longer has control over it.
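
One practical mitigation is to scrub obvious sensitive patterns from a prompt before it ever reaches a chatbot. The sketch below is a deliberately simple illustration of that idea; the two regexes catch only email addresses and card-like numbers and are nowhere near a complete PII filter:

```python
import re

# Minimal, illustrative patterns; real PII filtering needs far
# broader coverage (names, addresses, API keys, etc.).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(prompt: str) -> str:
    """Replace email addresses and card-like numbers before the
    prompt is sent to any third-party chatbot."""
    prompt = EMAIL.sub("[REDACTED EMAIL]", prompt)
    prompt = CARD.sub("[REDACTED CARD]", prompt)
    return prompt

note = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(note))
# -> "Invoice for [REDACTED EMAIL], card [REDACTED CARD]."
```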

Tightening restrictions on AI use

Because of privacy concerns, some businesses and even entire countries are clamping down. JPMorgan Chase, for example, has restricted employees’ use of ChatGPT, both because of the company’s controls around third-party software and applications and because of concerns about the security of financial information entered into the chatbot. And Italy, citing the data privacy of its citizens and compliance with GDPR, temporarily blocked the application across the country.

Experts also expect threat actors to use ChatGPT to create sophisticated and realistic phishing emails. Gone are the poor grammar and odd sentence phrasing that have been the tell-tale signs of a phishing scam. Now, chatbots can mimic native speakers with targeted messages. ChatGPT is also capable of seamless language translation, which will be a game-changer for foreign adversaries.

A similarly dangerous tactic is the use of AI to create disinformation and conspiracy campaigns, and the implications of this usage could go beyond cyber risks. When researchers used ChatGPT to write an op-ed, the result read like something from InfoWars or other well-known websites peddling conspiracy theories.

OpenAI responding to some threats

Each evolution of chatbots will create new cyber threats, whether through more sophisticated language abilities or simply through greater popularity, making the technology a prime attack vector. To that end, OpenAI is taking steps to prevent future data breaches within the application: it is offering a bug bounty of up to $20,000 to anyone who discovers unreported vulnerabilities.

However, The Hacker News reported, “the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs.” So it sounds like OpenAI wants to harden the technology against outside attacks but is doing little to prevent the chatbot from being the source of cyberattacks.

ChatGPT and other chatbots are going to be major players in the cybersecurity world. Only time will tell if the technology will be the victim of attacks or the source.

If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.
