When it comes to cybersecurity, some of us have more trust in machines than humans.

In fact, more than 25 percent of the 10,000 respondents to a global survey from Palo Alto Networks and YouGov said they’d prefer cybersecurity AI over people to manage security. While that statistic may be mind-boggling, it’s understandable for several reasons. Some may not fully grasp the role that both man and machine play in protecting their data. In addition, the threat landscape is so complex these days, it’s no wonder some people are grasping at any solution that doesn’t involve the human element.

Ever since I entered the security industry right after Y2K, there’s been a disconnect between the message technology professionals convey and what enterprise employees actually take away. So what can be done to bridge the gap? Does the onus fall on AI to clear up the confusion? Can AI don the proverbial Superman cape and defeat every threat facing the enterprise?

Getting to the root of why enterprise employees are so confused about cybersecurity is probably a good start. To do so, I sought the help of Dr. Jessica Barker, a renowned leader in understanding the human nature of cybersecurity and the head of the YouGov study. Barker strives to help companies understand what they do well and where they may be struggling in terms of communicating with their workforce.

“What I’m really interested in is how we can better communicate messages around cybersecurity to be more engaging and more impactful,” Barker said. “We need to get people motivated to listen to what we’re saying, to engage in some of the behavior change that we recommend.”

Confidence or Complacency? That Is the Question

One statistic from the study that stood out most for Dr. Barker was that 67 percent of respondents were convinced their actions were consistent with good online security practices.

“I didn’t expect there to be such a high level of confidence around people thinking they’re doing all they can,” said Barker. “There are many different potential dimensions of that: Either people do feel really confident, or people feel they’re doing all they can to be more prepared but there’s more they’d like to do — and for whatever reason, they’re not able.”

After analyzing the data, Barker isn’t certain people have enough information to go on; their confident answers, she believes, rest on a limited picture.

“On one hand, people feel confident, but on the other hand, they would like to be more informed about cybersecurity,” she added. “That was quite fascinating.”

According to Barker — and I’d have to agree here — we as an industry tend to overwhelm people with technical details. We focus too much on the technology and assume that people outside the industry are as interested in those details as we are. Instead, Barker suggests we shape our messaging with the audience in mind and communicate from several points of view, whether psychological, economic, educational or even marketing.

Another contributing factor to the disconnect is the ever-changing threat landscape. For instance, today’s accepted good practice may be completely outdated in a few years.

“Take passwords, for example,” she said. “For a long time, we were telling people that they have to regularly change their password, and that that was the best practice. And now, of course, because we have taken a more human approach to passwords, we actually understand that telling people to regularly change their password is not good advice. So that can be confusing for people.”
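To make that shift concrete, here’s a minimal sketch of the direction current guidance (NIST SP 800-63B, for instance) points: check length and screen against known-compromised passwords rather than forcing periodic resets. The banned list below is a tiny stand-in for a real breached-password feed, and the function name is mine, not taken from any particular standard or product.

```python
# Minimal sketch of a password check aligned with current guidance
# (e.g., NIST SP 800-63B): favor length and breached-password screening
# over forced rotation and arbitrary complexity rules.
# BANNED_PASSWORDS is a tiny stand-in for a real compromised-credentials list.

BANNED_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def is_acceptable(password: str) -> tuple[bool, str]:
    """Return (ok, reason). Deliberately does NOT enforce expiry."""
    if len(password) < 8:
        return False, "too short (minimum 8 characters)"
    if password.lower() in BANNED_PASSWORDS:
        return False, "appears in a list of known compromised passwords"
    return True, "ok"

if __name__ == "__main__":
    for candidate in ("password", "Tr0ub4d", "correct horse battery staple"):
        ok, reason = is_acceptable(candidate)
        print(f"{candidate!r}: {'accepted' if ok else 'rejected'} ({reason})")
```

The point isn’t the code itself; it’s that the advice users receive has to change as the guidance does, which is exactly where the confusion Barker describes creeps in.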

If the experts can’t even agree, how can we expect users to get on board? When structures and systems are set up to be difficult, we’re creating an environment for users where they feel that cybersecurity takes up too much time and extra effort.

But can AI turn the tide and alter perception? What role should AI play in managing security?

A Primer on Incorporating Cybersecurity AI

Having spoken with many AI experts, I’ve learned that AI is only as good as the information you’re feeding it. Feed it data that lacks diversity of thought or is riddled with bias, and the output will be problematic.
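As a rough illustration of that point, even a simple distribution check on a labeled training set can surface the kind of skew worth worrying about before a model ever learns from it. The column names, the toy data and the 10 percent threshold below are assumptions made for the sketch; a real bias review goes far beyond class balance.

```python
# Illustrative sketch: flag skew in a labeled training set before it
# reaches a model. The columns ("label", "source") and the 10% threshold
# are hypothetical; real bias analysis goes much deeper than class balance.
from collections import Counter

def report_skew(rows: list[dict], column: str, threshold: float = 0.10) -> None:
    counts = Counter(row[column] for row in rows)
    total = sum(counts.values())
    for value, count in counts.most_common():
        share = count / total
        flag = "  <-- underrepresented" if share < threshold else ""
        print(f"{column}={value!r}: {count} ({share:.0%}){flag}")

# Toy data: security alerts labeled malicious/benign from two log sources.
training_rows = (
    [{"label": "malicious", "source": "firewall"}] * 92
    + [{"label": "benign", "source": "endpoint"}] * 6
    + [{"label": "benign", "source": "firewall"}] * 2
)
report_skew(training_rows, "label")   # benign examples are scarce
report_skew(training_rows, "source")  # so is endpoint telemetry
```

A model trained on data like this would mostly learn what firewall-flavored maliciousness looks like — exactly the sort of blind spot that diverse teams tend to catch earlier.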

For the enterprise planning on leveraging some form of AI to manage security, Barker advises that the first step is to ensure diversity from both the development and testing teams.

“Of course, you need to make sure that the security testing element has built security in from the start of the system,” she explained. “As long as that is done thoughtfully and ethically, it will help navigate the unknown.”

For Barker, it’s critical the enterprise understands what AI can be helpful for so it can be embraced ethically and inclusively. The way to go about that is to include as many opinions, views and kinds of expertise as possible — which should include underrepresented groups and people with different backgrounds, even those outside of technology. With input from different professions and departments, Barker predicts smoother roads ahead for the cybersecurity highway that accommodates vehicles for both man and machine.

When you think about it, AI is already incorporated into our lives. Think of all the smart assistants and technology built into the productivity apps and services we use in the enterprise. There’s a lot of AI behind it, and perhaps the respondents of the survey answered positively about AI without realizing why.

The truth is that most people aren’t aware of the ways AI is already being used. When it works well, it’s seamless and behind the scenes.

Takeaways for the Enterprise: Communication Rules

In her extensive experience raising awareness and performing outreach about security, Barker has found that people ask a lot of questions, want a lot of help and want even more advice.

“They want to understand security more, and I don’t get the impression that people are actually confident with it. So the results were quite overwhelming,” she said. Moving forward, Barker is altering the way she communicates with people and strongly encourages the same tactic for the enterprise.

“If we tend to go into awareness training or whatever it might be, we absolutely can’t talk to people as if we’re the experts and they’re not,” she explained. “Because if people feel that they’re doing all they can, then an approach like that is just going to be undermining and patronizing.”

Anyone responsible for awareness training or promoting security in their organization should understand that the people they’re speaking to already feel they’re doing all they can. Shaping the message to respect that feeling is critical.

Where do you begin? Barker explained that when she began the study, she knew she’d need to determine what people “get” when it comes to the technology, where their level of understanding is, what they’re comfortable with, and what will or won’t be helpful to them. That’s probably a solid starting point.

When developing security awareness training, vulnerability management programs or any sort of messaging, if you’re not communicating properly, you’re probably in trouble. No amount of AI is going to help.

Sure, AI can provide a huge boost to the cybersecurity industry. We all benefit by putting our trust in both man and machine. But first, we need to solve the communication problem, which extends beyond the enterprise to the security industry itself. The closer we all are to being on the same page, the lower our risk level should be — and the better cybersecurity AI will get.
