For cybersecurity experts, artificial intelligence (AI) can both respond to and predict threats. But as AI spreads into every corner of security, attackers are using it, too, to launch more refined attacks. Each side is seemingly playing catch-up, with no clear winner in sight.

How can defenders stay ahead? To gain context about AI that goes beyond prediction, detection and response, our industry will need to ‘humanize’ the process. We’ve explored some of the technical aspects of AI, like how it can both prevent and launch distributed denial-of-service (DDoS) attacks, for instance. But to get the most out of it in the long run, we’ll need to take a social sciences approach as well.

What AI Security Can’t Do

First, let’s establish what AI and machine learning are. AI, as its name suggests, is the broad concept of machines carrying out ‘smart’ tasks. Machine learning (ML) is a subset of AI: it feeds data to computers so they can process that data and learn from it on their own. In either case, the algorithms are trained on data to determine which patterns count as expected and which count as abnormal.
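To make that concrete, here is a minimal sketch of what pattern-based detection can look like in practice. It assumes scikit-learn, and the features (transfer size and login hour) are hypothetical examples, not any specific product’s approach:

```python
# A minimal anomaly-detection sketch, assuming scikit-learn.
# The features (megabytes sent, login hour) are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of "normal" activity: [bytes_sent_mb, login_hour]
normal_activity = np.column_stack([
    rng.normal(50, 10, 500),  # typical transfer sizes
    rng.normal(13, 2, 500),   # logins clustered around business hours
])

# Learn what "expected" looks like from the baseline data
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A 900 MB transfer at 3 a.m. departs sharply from that baseline
suspicious = np.array([[900.0, 3.0]])
print(model.predict(suspicious))  # [-1] means "abnormal"
```

The point isn’t the specific model; it’s that the notion of ‘abnormal’ is entirely a product of the data the system was trained on.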

The best AI requires data scientists, sound statistics and as much human input as possible. As you train it, AI learns to produce results that may not be visible to the human running it. It can even make judgments based on data you didn’t train it on. This ‘black box’ nature means there’s also a push to build AI that can reveal how it makes decisions.
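One simple way to start prying open that black box is to measure which inputs actually drive a model’s predictions. Below is a minimal sketch using permutation importance on a synthetic dataset; it assumes scikit-learn and stands in for the much richer explainability tooling that push is producing:

```python
# A minimal explainability sketch, assuming scikit-learn.
# Permutation importance asks: how much worse does the model score
# when we shuffle one feature? Big drops mean the model relies on it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: only 2 of the 5 features are actually informative
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Techniques like this don’t explain everything a model does, but they give the human in the loop something concrete to question.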

No matter how well AI trains itself, human oversight and input are key to its success. That’s the takeaway from Julie Carpenter, research fellow in the ethics and emerging sciences group at California Polytechnic State University.

“Every decision you make in AI should have a human in the loop at this point,” she says. “We don’t have any sort of genius AI that understands human context, or human ways of life or sentience. Some sort of oversight is necessary.”

AI Can’t Outthink Us

Carpenter explains that AI’s original goal was to replicate human-like thinking, an aim that still holds true today for most AI products. AI cybersecurity — and AI in general — is there to serve humans in one way or another, she said. But it still doesn’t understand human context, culture or meaning.

The belief that AI will someday outsmart and outthink us is mistaken, Carpenter said. She also shared strong doubts about the current state of emotion-reading AI. This kind of ‘affective’ AI is already being used in advertising to try to read consumers’ attitudes toward products and marketing campaigns.

“I don’t think it’s necessarily a good direction for AI to go,” she warned. “How can we teach AI to do something we (ourselves) cannot do — which is perfectly read each other’s emotions?”

How AI Bias Hurts Cybersecurity

Is artificial intelligence a threat? Maybe not in the science fiction sense of machines taking over the world. But it does open up new avenues of attack. And because AI is trained by humans, it can absorb human bias — or fail to account for it. Instead of approaching AI security only from an external standpoint (i.e., preventing breaches), we must also consider the impact it might have internally.

Suppose you decide to start using AI to prevent breaches in your company. You may not need to worry so much about how to block clever threat actors. Instead, worry more about how to keep your own users, customers and employees safe. By using AI security in some form, are you putting them at risk? In today’s threat landscape, with people working from home and personal devices on corporate networks, enterprise networks are handling much more personal traffic than ever before.
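As a hypothetical illustration of that internal risk, consider a detector whose training baseline underrepresents one group of legitimate users, say, night-shift workers who log in at odd hours. The sketch below (again assuming scikit-learn, with made-up traffic profiles) shows how that group can end up flagged at a far higher rate than everyone else:

```python
# Hypothetical sketch: a baseline dominated by office-hours traffic
# flags legitimate night-shift logins as anomalies far more often.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Login hours: night-shift workers are only 5% of the training data
office_hours = rng.normal(13, 2, (950, 1))
night_shift = rng.normal(2, 1, (50, 1))

model = IsolationForest(contamination=0.05, random_state=0)
model.fit(np.vstack([office_hours, night_shift]))

for name, group in [("office", office_hours), ("night shift", night_shift)]:
    flagged = (model.predict(group) == -1).mean()
    print(f"{name}: {flagged:.0%} flagged as anomalous")
```

Nothing here is malicious; the skew comes entirely from whose behavior the training data happened to capture.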

How to Overcome Bias

Carpenter advises companies to look for broader impacts that go beyond the intended use of an AI product.

In our industry, protecting personal information is critical. But what happens when AI security glosses over something that may, at first glance, seem harmless but is, in fact, sensitive to certain groups?

Carpenter offers an example. Let’s say a company suffers a data breach in which the only information that leaked was employees’ genders. For many people, that might not be a concern.

“But having someone’s gender hacked and put out there could be a really big deal for a lot of people,” she said. “It could be life changing … devastating … traumatizing … because gender is such a complicated social and cultural issue.”

Depending on the kind of service you provide and the kind of data linked to it, the outcomes of a leak can vary widely.

The Limits on ‘Reading People’

Another potential pitfall for the use of AI in cybersecurity lies in advanced biometrics — especially specifics like facial expressions. Even looking ahead into the 2040s, Carpenter is skeptical that AI will understand visual cues. The subtleties, nuances and cultural differences are simply too complex.

“It’s going to disregard context, situations and suggestiveness,” she says. “You could have a frown on your face and the AI technology thinks that you’re frustrated or angry. But you pull back the picture, and the person is standing while they’re reading a book, and they’re actually just concentrating. It doesn’t really matter what other biometrics you triangulate it with. It’s a guessing game.”

Remember Ethical Frameworks

One piece of ‘low-hanging fruit,’ Carpenter advises, is to approach AI from the user’s perspective: look at the General Data Protection Regulation (GDPR) and any protocols that spell out users’ rights, then build an ethical framework on those rights.

“If you look at things like the rights for the citizen section of the GDPR, it explicitly defines what my rights are as a user and as a data person,” she says. “If my data is incorrect, how do I fix it, how can I get organizations to stop disseminating false data about me? These are the ethical questions that are out there, and things that are user-centered that can be a starting point for discussions in organizations.”

With any type of strategic planning, having the right people in place is a crucial element for success. With AI security, it’s no different.

Checklist for Working With AI

Carpenter insists organizations should hold an initial discussion about AI security and answer several key questions:

  • What are the goals of using AI, even beyond the business goals?
  • How does the organization think of AI as a concept?
  • What should the AI do, and what shouldn’t it do?
  • What is it we’re artificially replicating with AI?
  • Whose intelligence are we artificially replicating?
  • How will this intelligence be used?
  • What do we want the intelligence to do that goes above and beyond its primary functionality?

“There needs to be explicit discussions, smaller discussions and micro discussions between and within the teams and working groups,” she says. “We also need to make decisions about what to include and not to include, what to code and not to code, how to promote the product or not promote their product, who do we give it to and who we are designing it for.”

What’s Next for AI Security?

Carpenter recalls a recent talk with a very large tech company in which she asked how its AI security handled a huge data breach. Beyond the immediate response, she was curious what the company had learned about the group that carried out the attack.

“We’re not detectives,” the executive told her, “and all we can do is put a cork back in the leak and move on to predicting how they might attack us again.”

This type of reactive, short-term thinking is often the best we can do to keep up with the cycle of prediction, detection and response. Carpenter hopes that in the long term, cybersecurity can lean more on people in the social sciences. They could help AI uncover forensic and cultural patterns: how attacks happen, who is behind them and what motivates them. Programmed and put in place correctly, AI security could someday forecast how future attacks might emerge.

Use Some AI … But Not Too Much

“AI should provide more refined insights, not so much in terms of quantity but in terms of quality,” Carpenter says. “Because you’re looking at this diverse set of rules, and you’re not stuck in an echo chamber with the same ideas and the same concepts. Frankly, if I was working in cybersecurity, and I was working in an organization with everybody throwing around the term AI (too much), I’d be a little concerned.”

Cybersecurity experts, she suggests, must learn to think like social scientists: take a step back, get everyone in the enterprise on the same page and increase communication so that everyone’s plans benefit.

“People from social sciences are specifically trained to help you give AI more understanding,” she says.

Better AI Security By Thinking Like a Human

In fact, it’s difficult not to come away with the perception that winning in cybersecurity is about taking human psychology and social sciences into account in other areas, too. Almost anyone who has instilled a culture of awareness in their enterprise will tell you that they’re much more confident about their security posture.

Learning about, adopting and getting the most out of AI security is no different. The more we understand about the human element and the more we add that understanding into AI input, the better off we’ll be as an industry.
