When it comes to cybersecurity, some of us have more trust in machines than in humans.

In fact, more than 25 percent of the 10,000 respondents to a global survey from Palo Alto Networks and YouGov said they’d prefer cybersecurity AI over people to manage their security. While that statistic may be mind-boggling, it’s understandable for several reasons. Some respondents may not fully grasp the role that both man and machine play in protecting their data. And the threat landscape is so complex these days that it’s no wonder some people are reaching for any solution that doesn’t involve the human element.

Ever since I entered the security industry right after Y2K, there’s been a disconnect between the message technology professionals convey and the one enterprise employees actually hear. So what can be done to bridge the gap? Does the onus fall on AI to clear up the confusion? Can AI don the proverbial Superman cape and swoop in to neutralize the threats facing the enterprise?

Getting to the root of why enterprise employees are so confused about cybersecurity is probably a good start. To do so, I sought the help of Dr. Jessica Barker, a renowned leader in understanding the human nature of cybersecurity and the head of the YouGov study. Barker strives to help companies understand what they do well and where they struggle in communicating with their workforce.

“What I’m really interested in is how we can better communicate messages around cybersecurity to be more engaging and more impactful,” Barker said. “We need to get people motivated to listen to what we’re saying, to engage in some of the behavior change that we recommend.”

Confidence or Complacency? That Is the Question

One statistic from the study that stood out most for Dr. Barker was that 67 percent of respondents were convinced their actions were consistent with good online security practices.

“I didn’t expect there to be such a high level of confidence around people thinking they’re doing all they can,” said Barker. “There are many different potential dimensions of that: Either people do feel really confident, or people feel they’re doing all they can to be more prepared but there’s more they’d like to do — and for whatever reason, they’re not able.”

After analyzing the data, Barker isn’t convinced that people have enough information to go on, and she believes their answers are based on limited knowledge.

“On one hand, people feel confident, but on the other hand, they would like to be more informed about cybersecurity,” she added. “That was quite fascinating.”

According to Barker — and I’d have to agree here — we as an industry tend to overwhelm people with technical details. We focus too much on technology and assume that people outside the industry will be as interested in the technical details as we are. Instead, Barker suggests we shape our messaging with the audience in mind and communicate from several points of view, whether psychological, economic, educational or even marketing.

Another contributing factor to the disconnect is the ever-changing threat landscape. For instance, today’s accepted good practice may be completely outdated in a few years.

“Take passwords, for example,” she said. “For a long time, we were telling people that they have to regularly change their password, and that that was the best practice. And now, of course, because we have taken a more human approach to passwords, we actually understand that telling people to regularly change their password is not good advice. So that can be confusing for people.”

If the experts can’t even agree, how can we expect users to get on board? When structures and systems are set up in ways that make them difficult to use, we create an environment where users feel that cybersecurity demands too much time and effort.

But can AI turn the tide and alter perception? What role should AI play in managing security?

A Primer on Incorporating Cybersecurity AI

Having spoken with many AI experts, I’ve learned that AI is only as good as the information you feed it. Data that lacks diversity of thought or carries too many biases will produce problematic results.

For the enterprise planning to leverage some form of AI to manage security, Barker advises that the first step is to ensure diversity on both the development and testing teams.

“Of course, you need to make sure that the security testing element has built security in from the start of the system,” she explained. “As long as that is done thoughtfully and ethically, it will help navigate the unknown.”

For Barker, it’s critical that the enterprise understands what AI can be helpful for so it can be embraced ethically and inclusively. The way to go about that is to bring in as many opinions, views and kinds of expertise as possible — including underrepresented groups and people from different backgrounds, even those outside of technology. With input from different professions and departments, Barker predicts smoother roads ahead on a cybersecurity highway that accommodates vehicles for both man and machine.

When you think about it, AI is already incorporated into our lives. Think of all the smart assistants and the technology built into the productivity apps and services we use in the enterprise. There’s a lot of AI behind them, and perhaps the survey respondents answered positively about AI without realizing why.

The truth is that most people aren’t aware of the ways AI is already being used. When it works well, it’s seamless and behind the scenes.

Takeaways for the Enterprise: Communication Rules

In her extensive experience raising awareness and performing security outreach, Barker has found that people ask a lot of questions, want a lot of help and want even more advice.

“They want to understand security more, and I don’t get the impression that people are actually confident with it. So the results were quite overwhelming,” she said. Moving forward, Barker is altering the way she communicates with people and strongly encourages the enterprise to do the same.

“If we tend to go into awareness training or whatever it might be, we absolutely can’t talk to people as if we’re the experts and they’re not,” she explained. “Because if people feel that they’re doing all they can, then an approach like that is just going to be undermining and patronizing.”

Anyone responsible for awareness training or promoting security in their organization should understand that the people they’re speaking to already feel they’re doing all they can. Shaping the message to respect that feeling is critical.

Where do you begin? Barker explained that when she began the study, she knew she’d need to determine what people “get” when it comes to the technology, where their level of understanding is, what they’re comfortable with, and what will or won’t be helpful to them. That’s probably a solid starting point.

Whether you’re developing security awareness training, vulnerability management programs or any other kind of messaging, if you’re not communicating properly, you’re probably in trouble. No amount of AI is going to help.

Sure, AI can provide a huge boost to the cybersecurity industry, and we all benefit by putting our trust in both man and machine. But first, we need to solve the communication issue, which extends beyond the enterprise to the security industry itself. The closer we all are to being on the same page, the more our risk should diminish, and the better cybersecurity AI will become.
