The threat landscape is expanding, and regulatory requirements are multiplying. For the enterprise, the challenge of simply keeping up is only mounting.

In addition, there’s the cybersecurity skills gap. According to the (ISC)² 2022 Cybersecurity Workforce Study, the global cybersecurity workforce gap has grown by 26.2%, meaning 3.4 million more workers are needed to help protect data and prevent threats.

Leveraging AI-based tools is unquestionably necessary for modern organizations. But how far can tools like ChatGPT take us with regard to boosting cybersecurity and addressing the skills gap?

ChatGPT is dominating the tech news cycle. Some can’t get enough, but others are sick of hearing about it. But what about AI in cybersecurity? Is it any different?

While ChatGPT certainly has numerous use cases, there are some notable shortcomings that enterprises must understand before diving in head-first.

Transformers: More Than the Toys and Movies

First, a bit of background on large language models, which have undergone a remarkable transformation over the last few years.

Early models relied on basic statistical methods to generate text based on the probability of word sequences. As machine learning improved, more advanced models like recurrent neural networks (RNNs) and long short-term memory (LSTM) networks emerged — offering better contextual understanding and text-generation functions.
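The statistical approach those early models took can be sketched as a bigram model: estimate the probability of each next word from counts of adjacent word pairs in a corpus. This is a toy illustration of the idea, not a reconstruction of any particular production model:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies and turn them into conditional probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # P(next | prev) = count(prev, next) / count(prev, *)
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

corpus = [
    "the model generates text",
    "the model predicts the next word",
]
probs = train_bigram(corpus)
# Most likely word to follow "the" in this tiny corpus:
best = max(probs["the"], key=probs["the"].get)  # "model"
```

Models like this capture only immediate word-to-word statistics, which is exactly the limitation that RNNs and LSTMs, and later transformers, were built to overcome.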

But the turning point of natural language processing (NLP) was the introduction of transformer architectures in 2017. That’s where OpenAI’s popular GPT comes in. The T in GPT stands for Transformer, and GPT means Generative Pre-trained Transformer. These models are trained on massive amounts of data that enable what we get when we use GPT: highly coherent and contextually relevant text.
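The core operation the 2017 transformer architecture introduced is scaled dot-product attention, which lets every token weigh every other token when building its context. A minimal NumPy sketch of the formula softmax(QKᵀ/√d)·V, with made-up toy dimensions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dimensional queries
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
# out holds one contextualized vector per token: shape (4, 8)
```

Production models stack many of these attention layers (with multiple heads, learned projections and feed-forward blocks), but the mixing step above is the piece that made the "contextually relevant text" possible.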

Models like ChatGPT work because they benefit from large training datasets, more robust architectures and improved training techniques.

Larger Datasets, Better Models, Better Results?

Madhu Shashanka, co-founder and scientist for Concentric AI and former managing director for Charles Schwab’s Data Science and Machine Learning team, pointed out that the general rule of thumb is that the larger these models, the better they’re going to be. “But ‘better for what?’ is the question we need to ask,” he said.

For the organization that wants to work with and train ChatGPT as an effective cybersecurity tool, Shashanka suggests you think twice. “That’s not going to work, and nobody’s going to stand behind it,” he said. “It’s up to you to do whatever you want with it. People are finding all kinds of things, and your mileage will vary, and you’ll have to train on your own data. People are doing all kinds of crazy stuff. The point is, it’s not a product, and it just becomes another project.”

Shashanka is not saying that ChatGPT use should be discouraged. In fact, he is optimistic about how it can automate processes and procedures, especially for the SOC team. However, as for cybersecurity in a broader sense, there are limitations. “It depends on what you mean by security,” he said. “There are several layers to security. Cybersecurity is much more than visibility. So at the visibility level, you need to understand the data layer for data visibility and then visibility around access. You need to apply classification labels. And on top of that, you need remediation so you can fix permissions and adjust your risk posture management.”

That risk posture must be kept up to date, he added, since the risk and permission environment is changing dynamically.
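The layers Shashanka describes — data visibility, classification labels, then permission remediation — can be sketched as a toy pipeline. Every name, keyword rule and permission model here is hypothetical, purely to show how the layers stack:

```python
# Hypothetical sketch: classify content, then flag permissions that violate policy.
RULES = {"ssn": "restricted", "salary": "confidential"}  # toy keyword -> label map

def classify(content: str) -> str:
    """Assign a classification label based on simple keyword rules."""
    for keyword, label in RULES.items():
        if keyword in content.lower():
            return label
    return "public"

def remediate(files: dict, permissions: dict) -> list:
    """Return the files whose access is too broad for their label."""
    findings = []
    for path, content in files.items():
        label = classify(content)
        if label != "public" and "everyone" in permissions.get(path, set()):
            findings.append(path)
    return findings

files = {"hr/pay.txt": "Salary bands for 2023", "docs/faq.txt": "How to reset a password"}
perms = {"hr/pay.txt": {"everyone"}, "docs/faq.txt": {"everyone"}}
flagged = remediate(files, perms)
# flagged == ["hr/pay.txt"]: confidential data exposed to everyone
```

Real data-security platforms do each of these steps with far more sophistication (and the classification itself is where ML earns its keep), but the sketch shows why visibility alone is not remediation.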

AI in Cybersecurity: Can It Really Work?

AI and cybersecurity should make a natural fit, but the unfortunate reality is that most AI initiatives in cybersecurity fail. According to Shashanka, most AI-focused cybersecurity companies fail as well.

What is the root cause of these failures? Is it as simple as a fundamental lack of understanding about the power of AI and how to leverage it?

“That’s part of it,” said Shashanka. “I think in large companies, the reason these efforts fail is a disconnect between the business need and what value AI actually brings to the table.”

In Shashanka’s experience, the two sides typically don’t interact well. People with deep knowledge of the business needs cannot speak the same language as those with AI expertise.

“So that gulf is why most of these projects fall apart,” he said. “More than understanding AI, I think understanding the business needs is harder for the AI people than the other way around.”

These fundamental issues aren’t the only challenges companies should consider before leveraging ChatGPT. Addressing biases and ethical concerns, ensuring data privacy and security (remember OpenAI’s data leak?) and balancing automation with human expertise are critical concerns.

What’s most interesting for experts like Shashanka is that when it comes to leveraging ChatGPT for cybersecurity, it’s become a great way for non-experts to interact with large language models. “It’s revolutionary because the people who truly understand the business needs, to some extent, can bypass the AI geeks and just go right to ChatGPT.”

Discussions about leveraging AI in cybersecurity are happening, frequently at a very high level. But these boardroom discussions are nothing new, Shashanka said.

“It’s the classic case of build it or buy it. If these things are not in your company’s core business expertise, you probably shouldn’t be doing it. It’s as simple as that.”

The Final Word

There’s no debate about whether ChatGPT will play a role in cybersecurity. That genie is not going back in the bottle. But circling back to the beginning, we know that if there’s one sector that’s understaffed and under-resourced, cybersecurity is probably at the top of that list.

“It’s not like cybersecurity teams have all the time and bandwidth to play around with GPT and start building their own models,” Shashanka said.

Bottom line: Know what ChatGPT can do and what its limitations are, and leverage it within those guardrails. Everything else should be left to the experts who can stand behind their product.
