For cybersecurity experts, artificial intelligence (AI) can both respond to and predict threats. But because AI is now everywhere in security, attackers are using it as well, launching more refined attacks. Each side is seemingly playing catch-up, with no clear winner in sight.

How can defenders stay ahead? To gain context about AI that goes beyond prediction, detection and response, our industry will need to ‘humanize’ the process. We’ve explored some of the technical aspects of AI, like how it can both prevent and launch distributed denial-of-service (DDoS) attacks, for instance. But to get the most out of it in the long run, we’ll need to take a social sciences approach instead.

What AI Security Can’t Do

First, let’s establish what AI and machine learning are. AI, as its name suggests, refers to the broader concept of machines carrying out ‘smart’ tasks. Machine learning (ML) is a subset of AI. It provides data to computers so they can process that data and learn for themselves. Whether it’s AI or machine learning, algorithms are trained on data to determine which patterns are expected and which are abnormal.
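As a toy illustration of learning “what is expected” from the data itself, the sketch below flags values that stray far from the mean of a sample. It is a deliberately simple stand-in for real ML-based anomaly detection; the login counts and the threshold are invented for the example.

```python
import statistics

def find_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy stand-in for the pattern-learning described above: the
    'expected' behavior is derived from the data, not hand-coded rules.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing can be 'abnormal'
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Mostly typical daily login counts, with one hypothetical outlier
logins = [12, 14, 11, 13, 12, 15, 13, 12, 14, 200]
print(find_anomalies(logins, threshold=2.5))  # → [200]
```

Real systems replace the z-score with learned models, but the shape of the task is the same: establish a baseline from data, then surface deviations.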

The best AI requires data scientists, statistics and as much human input as possible. As you train it, AI learns to produce results through reasoning that may not be visible to the human running it. It can even make judgments based on data you didn’t train it on. This ‘black box’ nature is why there’s also a push to build explainable AI that can reveal how it makes its decisions.
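To make the ‘black box’ idea concrete, here is a minimal, hypothetical sketch of one common model-agnostic explanation approach: perturb each input feature and watch how much the score moves. The `model_score` weights and feature names are invented for illustration, standing in for a trained model rather than any real product.

```python
def model_score(features):
    """Toy 'black box' risk score (hypothetical weights standing in
    for a trained model whose internals an analyst cannot see)."""
    w = {"failed_logins": 0.6, "bytes_out": 0.3, "hour": 0.1}
    return sum(w[k] * v for k, v in features.items())

def perturbation_importance(features):
    """Rank features by how much zeroing each one moves the score --
    a crude, model-agnostic peek inside a black box."""
    base = model_score(features)
    impact = {}
    for k in features:
        perturbed = dict(features, **{k: 0})  # knock out one feature
        impact[k] = abs(base - model_score(perturbed))
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

event = {"failed_logins": 9, "bytes_out": 2, "hour": 3}
print(perturbation_importance(event))
```

Here the ranking tells the analyst which signal drove the score most, which is the kind of visibility the explainable-AI push aims for.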

No matter how well AI trains itself, human oversight and input are key to its success. That’s the takeaway from Julie Carpenter, research fellow in the ethics and emerging sciences group at California Polytechnic State University.

“Every decision you make in AI should have a human in the loop at this point,” she says. “We don’t have any sort of genius AI that understands human context, or human ways of life or sentience. Some sort of oversight is necessary.”

AI Can’t Outthink Us

Carpenter explains that AI’s original goal was to replicate human-like thinking, an aim that still holds today for most AI products. AI cybersecurity — and AI in general — exists to serve humans in one way or another, she says. But it still doesn’t understand human context, culture or meaning.

The belief that AI will, sometime in the future, outsmart and outthink us is incorrect, Carpenter said. She also shared her strong doubts about the current state of AI reading emotion. ‘Affective’ AI like this is being used in advertising to try to read consumers’ attitudes toward products and marketing campaigns.

“I don’t think it’s necessarily a good direction for AI to go,” she warned. “How can we teach AI to do something we (ourselves) cannot do — which is perfectly read each other’s emotions?”

How AI Bias Hurts Cybersecurity

Is artificial intelligence a threat? Maybe not in the science fiction sense of machines taking over the world. But it does open up new avenues of attack. And because AI is trained by humans, it can include human bias — or fail to account for human bias. Instead of approaching AI security from an external standpoint (i.e. preventing breaches) we must also consider the impact it might have internally.

Suppose you decide to start using AI to prevent breaches in your company. You may not need to worry so much about how to block clever threat actors as about how to keep your own users, customers or employees safe. By using AI security in some form, are you putting them at risk? In today’s threat landscape, where personal devices are on corporate networks and people work from home, enterprise networks are handling much more personal traffic than ever before.

How to Overcome Bias

Carpenter advises that companies look for the broader impacts that go beyond just the intended use of the AI product.

In our industry, protecting personal information is critical. But what happens when AI security glosses over something that may, at first glance, seem harmless but is, in fact, sensitive to certain groups?

Carpenter offers an example. Let’s say a company suffers a data breach in which the only information that leaked was employees’ genders. For many people, that might not be a concern.

“But having someone’s gender hacked and put out there could be a really big deal for a lot of people,” she said. “It could be life changing … devastating … traumatizing … because gender is such a complicated social and cultural issue.”

Depending on the kind of service you run and the kind of data linked to it, the fallout from a seemingly minor leak can vary widely.

The Limits on ‘Reading People’

Another potential pitfall for the use of AI in cybersecurity involves advanced biometrics — especially specifics like facial expressions. Even looking ahead into the 2040s, Carpenter is skeptical that AI will understand visual cues. The subtleties, nuances and cultural differences are simply too complex.

“It’s going to disregard context, situations and suggestiveness,” she says. “You could have a frown on your face and the AI technology thinks that you’re frustrated or angry. But you pull back the picture, and the person is standing while they’re reading a book, and they’re actually just concentrating. It doesn’t really matter what other biometrics you triangulate it with. It’s a guessing game.”

Remember Ethical Frameworks

One piece of low-hanging fruit, Carpenter advises, is to look at regulations like the General Data Protection Regulation (GDPR) — and any protocols that spell out users’ rights — and build an ethical framework on those rights.

“If you look at things like the rights for the citizen section of the GDPR, it explicitly defines what my rights are as a user and as a data person,” she says. “If my data is incorrect, how do I fix it, how can I get organizations to stop disseminating false data about me? These are the ethical questions that are out there, and things that are user-centered that can be a starting point for discussions in organizations.”

With any type of strategic planning, having the right people in place is a crucial element for success. With AI security, it’s no different.

Checklist for Working With AI

Carpenter insists that organizations hold an initial discussion about AI security and answer several key questions:

  • What are the goals of using AI, even beyond the business goals?
  • How does the organization think of AI as a concept?
  • What should the AI do, and what shouldn’t it do?
  • What is it we’re artificially replicating with AI?
  • Whose intelligence are we artificially replicating?
  • How will this intelligence be used?
  • What do we want the intelligence to do that goes above and beyond its primary functionality?

“There needs to be explicit discussions, smaller discussions and micro discussions between and within the teams and working groups,” she says. “We also need to make decisions about what to include and not to include, what to code and not to code, how to promote the product or not promote their product, who do we give it to and who we are designing it for.”

What’s Next for AI Security?

Carpenter recalls a recent talk with a very large tech company in which she asked how its AI security handled a huge data breach. Beyond how the AI was used, she was curious what the company learned about the group that carried out the attack.

“We’re not detectives,” the executive told her, “and all we can do is put a cork back in the leak and move on to predicting how they might attack us again.”

This type of reactive, short-term thinking is often the best we can do to keep up with the cycle of prediction, detection and response. Carpenter hopes that in the long term, cybersecurity can leverage people in the social sciences more. They could help AI uncover forensic and cultural patterns: how attacks happen, who is behind them and what their motivations are. Programmed and put in place correctly, AI security could someday forecast how future attacks might emerge.

Use Some AI … But Not Too Much

“AI should provide more refined insights, not so much in terms of quantity but in terms of quality,” Carpenter says. “Because you’re looking at this diverse set of rules, and you’re not stuck in an echo chamber with the same ideas and the same concepts. Frankly, if I was working in cybersecurity, and I was working in an organization with everybody throwing around the term AI (too much), I’d be a little concerned.”

Cybersecurity experts, she suggests, must learn to think like social scientists — taking a step back so everyone in the enterprise is on the same page and communication improves across teams.

“People from social sciences are specifically trained to help you give AI more understanding,” she says.

Better AI Security By Thinking Like a Human

In fact, it’s difficult not to come away with the perception that winning in cybersecurity is about taking human psychology and social sciences into account in other areas, too. Almost anyone who has instilled a culture of awareness in their enterprise will tell you that they’re much more confident about their security posture.

Learning about, adopting and getting the most out of AI security is no different. The more we understand about the human element and the more we add that understanding into AI input, the better off we’ll be as an industry.
