Since ChatGPT and other large language models (LLMs) came into widespread use, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were at launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to find out. The conclusion: they are very effective.

GPT-4 quickly exploited one-day vulnerabilities

During the study, the team used 15 one-day vulnerabilities that occurred in real life. One-day vulnerabilities are vulnerabilities that have been publicly disclosed but not yet patched on the target system, meaning they are known vulnerabilities. The test cases included vulnerable websites, container management software and Python packages. Because all the vulnerabilities came from the CVE database, each one included its CVE description.
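The paper does not say how the descriptions were gathered, but pulling a CVE description programmatically is straightforward. The sketch below is a minimal, hypothetical example that assumes the public NVD REST API (version 2.0); it is illustrative only and not the researchers' tooling.

```python
# Hypothetical sketch: fetch a CVE description from the public NVD API (v2.0).
# The study does not specify how descriptions were retrieved; this is only
# one possible way to do it.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_description(cve_id: str) -> str:
    """Return the English description for a CVE identifier."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # NVD 2.0 nests each record under 'vulnerabilities' -> 'cve'.
    records = data.get("vulnerabilities", [])
    if not records:
        return ""
    for desc in records[0]["cve"]["descriptions"]:
        if desc["lang"] == "en":
            return desc["value"]
    return ""
```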

The LLM agents also had access to web browsing elements, a terminal, search results, file creation and a code interpreter. Additionally, the researchers used a very detailed prompt totaling 1,056 tokens, while the agent itself amounted to just 91 lines of code, including debugging and logging statements. The setup did not, however, include sub-agents or a separate planning module.
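The paper describes the agent only at this level of detail and does not reproduce the prompt or the scaffolding. As a rough sketch of how such a single-loop, tool-using agent is commonly wired together, the example below uses the OpenAI chat completions tool-calling API with a single terminal tool. The tool set, the stand-in prompts and the step limit are assumptions for illustration, not the researchers' code.

```python
# Rough illustration of a single-loop, tool-using agent similar in spirit to the
# setup the paper describes (one detailed prompt, tool access, no planner or
# sub-agents). NOT the researchers' code; names and prompts are placeholders.
import json
import subprocess
from openai import OpenAI

client = OpenAI()

def run_terminal(command: str) -> str:
    """Illustrative tool: run a shell command and return its combined output."""
    proc = subprocess.run(command, shell=True, capture_output=True,
                          text=True, timeout=120)
    return (proc.stdout + proc.stderr)[-4000:]  # truncate to limit context growth

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_terminal",
        "description": "Execute a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def run_agent(task_prompt: str, max_steps: int = 25) -> str:
    messages = [
        # Stand-in system prompt; the paper's actual prompt (~1,056 tokens) is not public.
        {"role": "system", "content": "You are a helpful assistant that can use tools."},
        {"role": "user", "content": task_prompt},
    ]
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4",
                                              messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:          # no tool call requested: the agent is done
            return msg.content or ""
        messages.append(msg)
        for call in msg.tool_calls:     # only one tool is defined in this sketch
            args = json.loads(call.function.arguments)
            result = run_terminal(**args)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": result})
    return "Step limit reached."
```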

The team quickly learned that GPT-4 was able to correctly exploit the one-day vulnerabilities 87% of the time. All the other methods tested, which included other LLMs and open-source vulnerability scanners, were unable to exploit any of the vulnerabilities; GPT-3.5 was also unsuccessful. According to the report, GPT-4 failed on only two vulnerabilities, both of which proved especially difficult for the agent to handle.

“The Iris web app is extremely difficult for an LLM agent to navigate, as the navigation is done through JavaScript. As a result, the agent tries to access forms/buttons without interacting with the necessary elements to make it available, which stops it from doing so. The detailed description for HertzBeat is in Chinese, which may confuse the GPT-4 agent we deploy as we use English for the prompt,” explained the report.


GPT-4’s success rate still depends on the CVE description

The researchers concluded that the reason for the high success rate lies in the tool’s ability to exploit complex multi-step vulnerabilities, launch different attack methods, craft code for exploits and exploit non-web vulnerabilities.

The study also found a significant limitation in GPT-4’s ability to find vulnerabilities. When asked to exploit a vulnerability without being given the CVE description, the LLM could not perform at the same level: GPT-4 succeeded only 7% of the time, a drop of 80 percentage points. Because of this large gap, the researchers stepped back and measured how often GPT-4 could identify the correct vulnerability on its own, which it did 33.3% of the time.
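To make the size of that gap concrete, the short calculation below works out the absolute and relative drop from the two success rates reported in the paper.

```python
with_cve = 0.87      # success rate with the CVE description (from the paper)
without_cve = 0.07   # success rate without it (from the paper)

absolute_drop = with_cve - without_cve    # 0.80, i.e. 80 percentage points
relative_drop = absolute_drop / with_cve  # ~0.92, i.e. roughly a 92% relative decrease

print(f"Absolute drop: {absolute_drop:.0%} points; relative drop: {relative_drop:.0%}")
```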

“Surprisingly, we found that the average number of actions taken with and without the CVE description differed by only 14% (24.3 actions vs 21.3 actions). We suspect this is driven in part by the context window length, further suggesting that a planning mechanism and subagents could increase performance,” wrote the researchers.
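The paper does not implement that suggestion, but the pattern it alludes to is simple: a planning step splits the task into sub-tasks, and each sub-task is handed to a fresh agent with its own context window. The sketch below is a hypothetical outline of that structure, reusing the illustrative run_agent() loop from the earlier example; it is not from the study.

```python
# Hypothetical sketch of the planner-plus-subagent pattern the quote alludes to.
# Not from the paper; it reuses the illustrative run_agent() loop shown above.

def plan(task: str) -> list[str]:
    """Ask the model for a short ordered list of sub-tasks, one per line."""
    plan_text = run_agent(
        f"Break this task into at most 5 concrete sub-tasks, one per line:\n{task}"
    )
    return [line.strip() for line in plan_text.splitlines() if line.strip()]

def run_with_planner(task: str) -> list[str]:
    results = []
    for subtask in plan(task):
        # Each sub-agent starts with a fresh context window, so earlier steps
        # do not crowd out the tokens available for the current one.
        results.append(run_agent(subtask))
    return results
```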

The effect of LLMs on one-day vulnerabilities in the future

The researchers concluded that their study shows LLMs have the ability to autonomously exploit one-day vulnerabilities, though only GPT-4 can currently do so. The concern, however, is that LLM capability and functionality will only grow, making these tools even more powerful and destructive in the hands of cyber criminals.

“Our results show both the possibility of an emergent capability and that uncovering a vulnerability is more difficult than exploiting it. Nonetheless, our findings highlight the need for the wider cybersecurity community and LLM providers to think carefully about how to integrate LLM agents in defensive measures and about their widespread deployment,” concludes the report.

