September 10, 2024 By Jennifer Gregory 2 min read

After reading about the recent cybersecurity research by Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang, I had questions. While initially impressed that GPT-4 can exploit the vast majority of one-day vulnerabilities, I started thinking about what the results really mean in the grand scheme of cybersecurity. Most importantly, I wondered how a human cybersecurity professional's results on the same tasks would compare.

To get some answers, I talked with Shanchieh Yang, Director of Research at the Rochester Institute of Technology’s Global Cybersecurity Institute. He had actually pondered the same questions I did after reading the research.

What are your thoughts on the research study?

Yang: I think that the 87% may be an overstatement, and it would be very helpful if the authors shared more details about their experiments and code so the community could examine them. I look at large language models (LLMs) as a co-pilot for hacking because you have to give them some human instruction, provide some options and ask for user feedback. In my opinion, an LLM is more of an educational training tool than something you ask to hack automatically. I also wondered whether the study was truly autonomous, meaning with no human intervention at all.
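To make the co-pilot idea concrete, below is a minimal sketch of the human-in-the-loop pattern Yang describes, assuming the openai Python package and an OPENAI_API_KEY in the environment; the model name, prompts and function names are illustrative assumptions, not details from the study. The model only suggests one step at a time, and a person decides what to actually do in an authorized lab and reports the result back.

```python
# Minimal sketch of an LLM "co-pilot" loop for security training.
# Assumptions (not from the study): the openai package, an OPENAI_API_KEY
# environment variable, and the model name "gpt-4o". The model never
# executes anything; a human carries out each step in an authorized lab.
from openai import OpenAI

client = OpenAI()

def suggest_next_step(goal: str, history: list[str]) -> str:
    """Ask the model for one suggested next step and a short rationale."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for illustration
        messages=[
            {"role": "system",
             "content": ("You are a tutor for an authorized capture-the-flag "
                         "exercise. Suggest exactly one next step and briefly "
                         "explain why.")},
            {"role": "user",
             "content": f"Goal: {goal}\nNotes so far:\n" + "\n".join(history)},
        ],
    )
    return response.choices[0].message.content

def copilot_session(goal: str) -> None:
    history: list[str] = []
    while True:
        print("\nSuggestion:\n" + suggest_next_step(goal, history))
        # Human-in-the-loop gate: the person performs the step themselves
        # and feeds the observed result (or a correction) back to the model.
        feedback = input("What happened? (blank to stop): ").strip()
        if not feedback:
            break
        history.append(feedback)

if __name__ == "__main__":
    copilot_session("Practice enumeration on a sanctioned training VM")
```

The design choice is the point: the human, not the model, stays in control of every action, which matches Yang's framing of LLMs as educational training tools rather than autonomous hackers.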

Compared to even six months ago, LLMs are pretty powerful in providing guidance on how a human can exploit a vulnerability, such as recommending tools, giving commands and even laying out a step-by-step process. They are reasonably accurate, but not necessarily 100% of the time. In this study, one-day is a pretty big bucket: it could be a vulnerability that's very similar to past vulnerabilities, or something totally new whose code doesn't resemble anything hackers have seen before. In that second case, there isn't much an LLM can do against the vulnerability, because breaking into something new requires human understanding.

The results also depend on whether the vulnerability is in a web service, a SQL server, a print server or a router. There are so many different computing vulnerabilities out there. In my opinion, claiming 87% is an overstatement because it also depends on how many times the authors tried. If I were reviewing this as a paper, I would reject the claim because there is too much generalization.

If you timed a group of cybersecurity professionals and an LLM agent head-to-head against a target with unknown but existing vulnerabilities, such as a newly released Hack The Box or TryHackMe challenge, who would complete the hack the fastest?

Yang: The experts, the people who are actually world-class hackers, ethical hackers, white-hat hackers, would beat the LLMs. They have a lot of tools under their belts. They have seen this before. And they are pretty quick. The problem is that an LLM is a machine, meaning that even the most state-of-the-art models will not give you the commands unless you break the guardrails. With an LLM, the results really depend on the prompts that were used. Because the researchers didn't share the code, we don't know what was actually used.

Any other thoughts on the research?

Yang: I would like the community to understand that responsible dissemination is very important. Report something not just to get people to cite you or talk about your work, but to be responsible: share the experiments, share the code, but also share what could be done about the problem.
