When ChatGPT and similar chatbots first became widely available, the concern in the cybersecurity world was how AI technology could be used to launch cyberattacks. In fact, it didn't take long before threat actors figured out how to bypass the safety checks and use ChatGPT to write malicious code.

It now seems that the tables have turned. Instead of attackers using ChatGPT to cause cyber incidents, they have now turned on the technology itself. OpenAI, which developed the chatbot, confirmed a data breach in the system caused by a vulnerability in an open-source library the service relies on, according to Security Week. The breach took the service offline until it was fixed.

An overnight success

ChatGPT's popularity was evident from its release in late 2022. Everyone from writers to software developers wanted to experiment with the chatbot. Despite its imperfect responses (some of its prose was clunky or clearly plagiarized), ChatGPT quickly became the fastest-growing consumer app in history, reaching over 100 million monthly users by January. Approximately 13 million people were using the AI technology daily within a month of its release. Compare that to another extremely popular app — TikTok — which took nine months to reach similar user numbers.

One cybersecurity analyst compared ChatGPT to a Swiss Army knife, saying that the technology’s wide variety of useful applications is a big reason for its early and quick popularity.

The data breach

Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it. In the case of ChatGPT, the exploit came via a vulnerability in the Redis open-source library. This allowed users to see the chat history of other active users.

Open-source libraries are used "to develop dynamic interfaces by storing readily accessible and frequently used routines and resources, such as classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, type specifications and values," according to a definition from Heavy.AI. OpenAI uses Redis to cache user information for faster recall and access. Because thousands of contributors develop and access open-source code, it's easy for vulnerabilities to open up and go unnoticed. Threat actors know this, which is why attacks on open-source libraries have increased by 742% since 2019.
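The caching pattern at issue is a common one: check a fast in-memory store first, and fall back to the database only on a miss. A minimal sketch of that cache-aside pattern follows; note the dictionary here is a stand-in for a Redis server so the example is self-contained, and `fetch_user_from_db` is a hypothetical placeholder for a real database lookup (in production, `get`/`setex` calls on a Redis client would replace the dict operations).

```python
import json

# A plain dict stands in for a Redis server so this sketch is
# self-contained; in production these would be get/setex calls
# on a redis.Redis client connection.
cache = {}

def fetch_user_from_db(user_id):
    # Hypothetical stand-in for a real database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside lookup: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: skip the database
    user = fetch_user_from_db(user_id)   # cache miss: load and store
    cache[key] = json.dumps(user)
    return user
```

The risk in this design is that the cache key is the only thing tying a stored record to a user. If the client library ever returns data for the wrong key — which is essentially what happened when canceled requests left the Redis client connection in a corrupted state — one user silently receives another user's cached data.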

In the grand scheme of things, the ChatGPT exploit was minor, and OpenAI patched the bug within days of discovery. But even a minor cyber incident can cause significant damage.

However, that was only a surface-level incident. As the researchers from OpenAI dug deeper, they discovered that the same vulnerability had likely exposed some users' payment information for a few hours before ChatGPT was taken offline.

“It was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time,” OpenAI said in a release about the incident.


AI, chatbots and cybersecurity

The data leak in ChatGPT was addressed swiftly and apparently caused little damage; impacted paying subscribers made up less than 1% of users. However, the incident could be a harbinger of the risks that could impact chatbots and users in the future.

Already there are privacy concerns surrounding the use of chatbots. Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN that ChatGPT and chatbots are like the black box in an airplane. The AI technology stores vast amounts of data and then uses that information to generate responses to questions and prompts. And anything in the chatbot’s memory becomes fair game for other users.

For example, chatbots can record a single user’s notes on any topic and then summarize that information or search for more details. But if those notes include sensitive data — an organization’s intellectual property or sensitive customer information, for instance — it enters the chatbot library. The user no longer has control over the information.

Tightening restrictions on AI use

Because of privacy concerns, some businesses and entire countries are clamping down. JPMorgan Chase, for example, has restricted employees' use of ChatGPT, citing the company's controls around third-party software and applications; there are also concerns about the security of financial information entered into the chatbot. And Italy cited the data privacy of its citizens in its decision to temporarily block the application across the country, with officials pointing to compliance with GDPR.

Experts also expect threat actors to use ChatGPT to create sophisticated and realistic phishing emails. Gone are the poor grammar and odd sentence phrasing that have been the tell-tale signs of a phishing scam. Now, chatbots will mimic native speakers with targeted messages. ChatGPT is also capable of seamless language translation, which will be a game-changer for foreign adversaries.

A similarly dangerous tactic is the use of AI to create disinformation and conspiracy campaigns. The implications of this usage could go beyond cyber risks. Researchers used ChatGPT to write an op-ed, and the result resembled content found on InfoWars or other well-known websites peddling conspiracy theories.

OpenAI responds to some threats

Each evolution of chatbots will create new cyber threats, whether through more sophisticated language abilities or through their growing popularity. This makes the technology a prime attack vector. To that end, OpenAI is taking steps to prevent future data breaches within the application. It is offering a bug bounty of up to $20,000 to anyone who discovers unreported vulnerabilities.

However, The Hacker News reported, “the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs.” So it sounds like OpenAI wants to harden the technology against outside attacks but is doing little to prevent the chatbot from being the source of cyberattacks.

ChatGPT and other chatbots are going to be major players in the cybersecurity world. Only time will tell if the technology will be the victim of attacks or the source.

If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.
