We often hear about the positive aspects of artificial intelligence (AI): the way it can use data to predict what customers need and deliver a custom result. When the darker side of AI is discussed, the conversation often centers on data privacy.

Other conversations in this area veer into science fiction where the AI works of its own volition: “Open the pod bay doors, HAL.”

But a concerning trend is emerging in the real world: an increase in AI-enabled cyberattacks.

A ‘Cold War’ of AI Escalation

Cybersecurity experts are becoming more concerned about AI attacks, both now and in the near future. The Emergence Of Offensive AI, a report from Forrester Consulting, found that 88% of decision-makers in the security industry believe offensive AI is coming. Half of the respondents expect an increase in attacks. In addition, two-thirds of those surveyed expect AI to lead new attacks.

Deloitte’s white paper Smart cyber: How AI can help manage cyber risk describes smart cyber as a spectrum: it starts with robotic process automation, moves to cognitive automation and then evolves to AI. Cyberattacks used to sit at the lower end of that spectrum, simply mimicking human actions. Now that cyber criminals have moved to fully using AI, their attacks mimic human intelligence.

Deloitte defines this as “machine intelligence that learns unsupervised, but also communicates and interacts seamlessly with humans … as cohorts.”

Defenders and Attackers Are Both Getting ‘Smarter’

The underlying concept of AI security, using data to become smarter and more accurate, is exactly what makes the trend so dangerous. Because the attacks become smarter with each success and each failure, they are harder to predict and stop. Once the threats outpace defenders’ expertise and tools, the attacks quickly become much harder to control. That is why the industry must react quickly to the rise in AI attacks, before it falls too far behind to catch up.

Increased speed and reliability give businesses many benefits, such as the ability to process large amounts of data in near real-time. Cyber criminals are now benefiting from this speed as well, most notably through expanded 5G coverage. Cyberattacks can learn from themselves much more quickly, and swarm attacks can be used to gain access fast. The faster speeds also mean that threat actors can work more quickly, which often means they are not detected by technology or humans until it’s too late to stop them.

The Challenge With AI Security Attacks

The problem with protecting against AI attacks is the pace of change. Defensive tech is lagging behind, which means that in the very near future attackers may truly have the upper hand. Given the nature of AI security, once that happens it will be challenging, if not impossible, for defenders to regain control.

One of the biggest selling points for AI security is the way it combines speed with an understanding of context. Earlier automated cyberattacks couldn’t do that, which made them one-dimensional, or at least limited. With context added, these attacks are more powerful and able to launch at a larger scale.

As an industry, we must start by understanding how threat actors use AI for attacks: the types of attacks they launch and the common openings they exploit. We can then start figuring out how to stop them. Spoiler alert: the answer is taking a play out of the attackers’ own playbook. We’ll get to that in a minute. First, we must work out how to keep from reaching the point where the bad guys gain the advantage.

How Cyber Criminals Use AI for Attacks

Threat actors weaponize AI in two main ways: first to design the attack, and then to conduct it. The predictive nature of the tech lends itself to both. As the World Economic Forum points out, AI can mimic trusted actors: attackers learn about a real person and then use bots to copy that person’s actions and language.

By using AI, attackers can spot openings more quickly, such as an unprotected network or a downed firewall, which means even a very short window can be used for an attack. AI also lets attackers find vulnerabilities a human couldn’t detect, since a bot can use data from previous attacks to spot very slight changes.

While many businesses use AI to predict customers’ needs, threat actors use the same concept to increase the odds of an attack’s success. By using data collected from other similar users, or even from the exact user targeted, cyber criminals can design an attack likely to work for that specific person. For example, if an employee receives emails from their children’s school in their work email, the bot can launch a phishing attack designed to mimic a school email or link.

AI can also make it harder for defenders to detect the specific bot or attack. By using AI, threat actors can design attacks that create new mutations based on the type of defense mounted against them. Security experts and their tools then have to defend against constantly changing bots, which are very hard to stop: as soon as they get close to blocking one attack, a new one emerges.

How AI Security Can Stop Attacks

As attacks become smarter, the industry must increase its use of sophisticated techniques. Somewhat ironically, the most effective way to defend against AI attacks is to use AI against them. As the World Economic Forum succinctly put it, only AI can play AI at its own game. When you use AI security to protect and defend, your systems become smarter and more effective with each attack. Just as threat actors use it to predict actions and risks, defenders can use it to predict attacks.

When the South Coast Water District began using AI security, it quickly discovered two potential issues. An employee was emailing multiple versions of spreadsheets between two monitors set up for remote work, which could have been an opening for data theft. Flags also went up when a laptop that had been offline for over five years connected to the network. The district looked into both incidents right away, which shut down any potential threat.
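To make this concrete, below is a minimal sketch of the kind of behavioral anomaly detection described above, using scikit-learn’s IsolationForest on hypothetical device-activity features. The feature names, values and flagging threshold are assumptions for illustration; they are not the district’s actual system.

```python
# Minimal sketch of behavioral anomaly detection on device activity.
# Feature names and values are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_day, megabytes_emailed_out, days_since_last_seen]
baseline = np.array([
    [4, 12, 0],
    [5, 8, 1],
    [3, 15, 0],
    [6, 10, 2],
    [4, 9, 1],
])  # "normal" behavior observed over time

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# New events: an ordinary-looking one, heavy outbound email volume, and a
# laptop reconnecting after years offline.
new_events = np.array([
    [5, 11, 1],      # looks like normal activity
    [7, 450, 0],     # unusually large outbound email volume
    [1, 5, 1900],    # device dormant for roughly five years suddenly active
])

for event, score in zip(new_events, model.decision_function(new_events)):
    flag = "ANOMALY" if score < 0 else "ok"
    print(event.tolist(), f"score={score:.3f}", flag)
```

In a real deployment the baseline would come from months of telemetry and flagged events would feed an analyst queue rather than a print statement, but the core idea is the same: the model learns what normal looks like and scores departures from it.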

In the past, cybersecurity revolved around protecting the infrastructure and then reacting to threats. With AI, it moves from reactive to predictive. Companies can use AI for predictive risk intelligence in four ways: risk-related decision-making, risk sensing, threat monitoring and detection, and automation of risk processes. Using AI security for any one of these tasks reduces your risk, but the greatest protection comes from using all four together.
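As one illustration of how the last of those pieces, automation of risk processes, might be wired up, the sketch below blends a few signals into a single risk score and triggers a containment step when it crosses a threshold. The alert fields, weights, threshold and isolate_host action are all hypothetical.

```python
# Illustrative sketch of automated risk handling: blend a few signals into a
# risk score and trigger containment above a threshold. All field names,
# weights and the isolate_host() action are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    anomaly_score: float      # e.g. from a model like the one sketched above, 0..1
    asset_criticality: float  # business importance of the asset, 0..1
    threat_intel_match: bool  # indicator also seen in threat intelligence feeds

def risk(alert: Alert) -> float:
    """Blend signals into a single 0..1 risk score (weights are assumptions)."""
    score = 0.5 * alert.anomaly_score + 0.3 * alert.asset_criticality
    if alert.threat_intel_match:
        score += 0.2
    return min(score, 1.0)

def isolate_host(host: str) -> None:
    # Placeholder for an automated response, such as an EDR or firewall API call.
    print(f"Isolating {host} from the network for investigation")

alerts = [
    Alert("hr-laptop-07", anomaly_score=0.92, asset_criticality=0.4, threat_intel_match=True),
    Alert("print-server-02", anomaly_score=0.15, asset_criticality=0.2, threat_intel_match=False),
]

for alert in alerts:
    r = risk(alert)
    print(f"{alert.host}: risk={r:.2f}")
    if r >= 0.7:  # escalation threshold, also an assumption
        isolate_host(alert.host)
```

The point is not the specific weights but the pattern: sensing produces signals, the model turns them into a decision, and routine responses run automatically so analysts can focus on the cases that need judgment.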

Developing the Future of AI Security

Over the past few years, AI has become a cornerstone for many systems and platforms, from retail to marketing to finance. Soon, if not already, it will be considered a standard feature, not a bonus or a differentiator among tools. The same trend is likely to follow with AI in cybersecurity: the predictive tech will become the standard. By combining AI security with zero trust, organizations increase their likelihood of preventing many attacks and quickly defusing any that make it through. If you begin upgrading systems and processes to use AI now, you’ll have a running start and a fair shot at catching up to threat actors.
