The threat landscape is expanding and regulatory requirements are multiplying. For the enterprise, the challenge of simply keeping up keeps mounting.

In addition, there’s the cybersecurity skills gap. According to the (ISC)² 2022 Cybersecurity Workforce Study, the global cybersecurity workforce gap has grown by 26.2%, meaning 3.4 million more workers are needed to help protect data and prevent threats.

Leveraging AI-based tools is unquestionably necessary for modern organizations. But how far can tools like ChatGPT take us with regard to boosting cybersecurity and addressing the skills gap?

ChatGPT is dominating the tech news cycle. Some can’t get enough of it; others are sick of hearing about it. But what about AI in cybersecurity? Is it any different?

While ChatGPT certainly has numerous use cases, there are some notable shortcomings that enterprises must understand before they dive in head-first.

Transformers: More than the toys and movies

First, a bit of background on large language models, which have undergone a remarkable transformation over the last few years.

Early models relied on basic statistical methods to generate text based on the probability of word sequences. As machine learning improved, more advanced models like recurrent neural networks (RNNs) and long short-term memory (LSTM) networks emerged — offering better contextual understanding and text generation.
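To make that first stage concrete, here is a minimal sketch of a statistical language model: a bigram model that counts which word follows which and picks the most probable continuation. The tiny corpus and function names are illustrative, not from any real system.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-pair frequencies and convert them to conditional probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
        for prev, nxt in counts.items()
    }

def most_likely_next(model, word):
    """Return the highest-probability continuation for a word, if any."""
    candidates = model.get(word)
    return max(candidates, key=candidates.get) if candidates else None

# Toy corpus (purely illustrative)
corpus = [
    "the attacker scanned the network",
    "the attacker exploited the flaw",
    "the analyst scanned the logs",
]
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "attacker" follows "the" most often
```

Models like this capture only adjacent-word statistics — no long-range context — which is exactly the limitation RNNs, LSTMs and later transformers were built to overcome.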

But the turning point for natural language processing (NLP) came in 2017 with the introduction of the transformer architecture. That’s where OpenAI’s popular GPT comes in: the T in GPT stands for Transformer, and GPT is short for Generative Pre-trained Transformer. These models are trained on massive amounts of data, which is what enables the output we get when we use GPT: highly coherent and contextually relevant text.
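The core mechanism inside a transformer is scaled dot-product attention: each token’s query is compared against every key, and the resulting weights blend the values. The sketch below is a bare-bones, pure-Python illustration of that formula (real implementations are batched, multi-headed and GPU-bound); the vectors are made up for demonstration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how strongly it matches each key."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# One toy query attending over two key/value pairs
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # the output leans toward the first value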

Models like ChatGPT work because they benefit from large training datasets, more robust architectures and improved training techniques.

Larger datasets, better models, better results?

Madhu Shashanka, co-founder and scientist for Concentric AI and former managing director for Charles Schwab’s Data Science and Machine Learning team, pointed out that the general rule of thumb is that the larger these models, the better they’re going to be. “But ‘better for what?’ is the question we need to ask,” he said.

For the organization that wants to work with and train ChatGPT as an effective cybersecurity tool, Shashanka suggests you think twice. “That’s not going to work, and nobody’s going to stand behind it,” he said. “It’s up to you to do whatever you want with it. People are finding all kinds of things, and your mileage will vary, and you’ll have to train on your own data. People are doing all kinds of crazy stuff. The point is, it’s not a product, and it just becomes another project.”

Shashanka is not saying that ChatGPT use should be discouraged. In fact, he is optimistic about how it can automate processes and procedures, especially for the SOC team. However, as for cybersecurity in a broader sense, there are limitations. “It depends on what you mean by security,” he said. “There are several layers to security. Cybersecurity is much more than visibility. So at the visibility level, you need to understand the data layer for data visibility and then visibility around access. You need to apply classification labels. And on top of that, you need remediation so you can fix permissions and adjust your risk posture management.”
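The layers Shashanka describes — data visibility, classification labels, then remediation of permissions — can be sketched as a simple pipeline. Everything below is hypothetical: the marker keywords, record shape and access labels are illustrative stand-ins, not a real product’s API.

```python
# Assumed keywords that would mark a record as sensitive (illustrative only)
SENSITIVE_MARKERS = ("ssn", "salary", "password")

def classify(record):
    """Classification layer: label a record based on its visible contents."""
    text = " ".join(str(v).lower() for v in record["fields"].values())
    return "sensitive" if any(m in text for m in SENSITIVE_MARKERS) else "general"

def remediate(records):
    """Remediation layer: tighten access on sensitive records shared too broadly."""
    findings = []
    for rec in records:
        if classify(rec) == "sensitive" and rec["access"] == "everyone":
            rec["access"] = "restricted"  # simplistic fix-permissions step
            findings.append(rec["name"])
    return findings

# Visibility layer: an inventory of discovered data (toy example)
inventory = [
    {"name": "payroll.xlsx", "fields": {"col": "salary data"}, "access": "everyone"},
    {"name": "menu.docx", "fields": {"col": "lunch options"}, "access": "everyone"},
]
print(remediate(inventory))  # only the payroll file gets flagged
```

In practice, as Shashanka notes, this loop has to run continuously, since the risk and permission environment changes dynamically.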

That risk posture must be kept up to date, he added, since the risk and permission environment is changing dynamically.

AI in cybersecurity: Can it really work?

AI and cybersecurity should make a natural fit, but the unfortunate reality is that most AI initiatives in cybersecurity fail. According to Shashanka, most AI-focused cybersecurity companies fail as well.

What is the root cause of these failures? Is it as simple as a fundamental lack of understanding about the power of AI and how to leverage it?

“That’s part of it,” said Shashanka. “I think in large companies, the reason these efforts fail is a disconnect between the business need and what value AI actually brings to the table.”

In Shashanka’s experience, the two sides typically don’t interact well. People with deep knowledge of the business needs cannot speak the same language as those with AI expertise.

“So that gulf is why most of these projects fall apart,” he said. “More than understanding AI, I think understanding the business needs is harder for the AI people than the other way around.”

These fundamental issues aren’t the only challenges companies should consider before leveraging ChatGPT. Addressing biases and ethical concerns, ensuring data privacy and security (remember OpenAI’s data leak?) and balancing automation with human expertise are all critical considerations.

What’s most interesting for experts like Shashanka is that when it comes to leveraging ChatGPT for cybersecurity, it’s become a great way for non-experts to interact with large language models. “It’s revolutionary because the people who truly understand the business needs, to some extent, can bypass the AI geeks and just go right to ChatGPT.”

Discussions about leveraging AI in cybersecurity are happening, frequently at a very high level. But these boardroom discussions are nothing new, Shashanka said.

“It’s the classic case of build it or buy it. If these things are not in your company’s core business expertise, you probably shouldn’t be doing it. It’s as simple as that.”

The final word

There’s no debate about whether ChatGPT will play a role in cybersecurity. That genie is not going back in the bottle. But circling back to the beginning, if there’s one sector that’s understaffed and under-resourced, cybersecurity is probably at the top of that list.

“It’s not like cybersecurity teams have all the time and bandwidth to play around with GPT and start building their own models,” Shashanka said.

Bottom line: Know what ChatGPT can do and what its limitations are, and leverage it within those guardrails. Everything else should be left to the experts who can stand behind their product.
