September 10, 2024 By Jennifer Gregory 2 min read

After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that GPT-4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional's results for the same tasks would compare.

To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute of Technology’s Global Cybersecurity Institute. He had actually pondered the same questions I did after reading the research.

What are your thoughts on the research study?

Yang: I think that the 87% may be an overstatement, and it would be very helpful if the authors shared more details about their experiments and code so the community could examine them. I look at large language models (LLMs) as a co-pilot for hacking because you have to give them some human instruction, provide some options and ask for user feedback. In my opinion, an LLM is more of an educational training tool than something you ask to hack automatically. I also wondered whether the study meant fully autonomous, with no human intervention at all.

Compared to even six months ago, LLMs are pretty powerful in providing guidance on how a human can exploit a vulnerability, such as recommending tools, giving commands and even laying out a step-by-step process. They are reasonably accurate, but not necessarily 100% of the time. In this study, one-day could be a pretty big bucket, ranging from a vulnerability that's very similar to past vulnerabilities to totally new malware whose source code isn't similar to anything hackers have seen before. In the latter case, there isn't much an LLM can do against the vulnerability, because breaking into something new requires human understanding.

The results also depend on whether the vulnerability is a web service, SQL server, print server or router. There are so many different computing vulnerabilities out there. In my opinion, claiming 87% is an overstatement because it also depends on how many times the authors tried. If I’m reviewing this as a paper, I would reject the claim because there is too much generalization.

If you timed a group of cybersecurity professionals against an LLM agent head-to-head on a target with unknown but existing vulnerabilities, such as a newly released Hack The Box or TryHackMe challenge, who would complete the hack the fastest?

Yang: The experts — the people who are actually world-class hackers, ethical hackers, white-hat hackers — would beat the LLMs. They have a lot of tools under their belts. They have seen this before. And they are pretty quick. The problem is that an LLM is a machine, meaning that even the most state-of-the-art models will not give you the commands unless you break the guardrails. With an LLM, the results really depend on the prompts that were used. Because the researchers didn't share the code, we don't know what was actually used.

Any other thoughts on the research?

Yang: I would like the community to understand that responsible dissemination is very important — reporting something not just to get people to cite you or talk about your work, but to be responsible: sharing the experiment, sharing the code, but also sharing what could be done about it.
