We often hear about the positive aspects of artificial intelligence (AI): the way it can analyze data to predict what customers need and deliver a custom result. When the darker side of AI is discussed, the conversation often centers on data privacy.

Other conversations in this area veer into science fiction where the AI works of its own volition: “Open the pod bay doors, HAL.”

But a concerning trend is emerging in the real world: an increase in AI-enabled cyberattacks.

A ‘Cold War’ of AI Escalation

Cybersecurity experts are becoming more concerned about AI attacks, both now and in the near future. The Emergence Of Offensive AI, a report from Forrester Consulting, found that 88% of decision-makers in the security industry believe offensive AI is coming. Half of the respondents expect an increase in attacks, and two-thirds expect AI to lead new types of attacks.

Deloitte’s white paper Smart cyber: How AI can help manage cyber risk describes smart cyber as a spectrum: it starts with robotic process automation, moves to cognitive automation and then evolves to AI. Earlier cyberattacks sat at the lower end of that spectrum, simply mimicking human actions. Now that cyber criminals have moved to fully using AI, their attacks mimic human intelligence.

Deloitte defines this as “machine intelligence that learns unsupervised, but also communicates and interacts seamlessly with humans … as cohorts.”

Defenders and Attackers Are Both Getting ‘Smarter’

The underlying concept of AI security, using data to become smarter and more accurate, is exactly what makes this trend so dangerous. Because the attacks become smarter with each success and each failure, they are harder to predict and stop. Once the threats outpace defenders’ expertise and tools, the attacks quickly become much harder to control. The industry must react quickly to the rise in AI attacks before it falls too far behind to catch up.

Increased speed and reliability provide businesses with many benefits, such as the ability to process large amounts of data in near real-time. Cyber criminals now benefit from that speed as well, most notably thanks to expanded 5G coverage. Attacks can learn from their own results much more quickly, and swarm techniques let threat actors gain access fast. The higher speeds also mean threat actors can work faster, often without being detected by technology or humans until it is too late to stop them.

The Challenge With AI Security Attacks

The problem with protecting against AI attacks is the pace of change. Defensive tech is lagging behind, which means that in the very near future attackers may truly have the upper hand. Given the nature of AI security, once that happens it will be challenging, if not impossible, for defenders to regain control.

One of the biggest selling points for AI security is the way it can understand context and combine speed with that context. Automated cyberattacks previously couldn’t do that, which made them one-dimensional, or at least limited. By adding context, these attacks become more powerful and can launch at a larger scale.

As an industry, we must start by understanding how threat actors use AI for attacks, especially the types of attacks and the common openings they exploit. We can then start figuring out how to stop them. Spoiler alert: the answer is taking a page from the attackers’ own playbook. We’ll get to that in a minute. First, we must determine how to keep ourselves out of a position where the bad guys gain the advantage.

How Cyber Criminals Use AI for Attacks

Threat actors weaponize AI in two main ways: first to design the attack, and then to conduct it. The predictive nature of the tech lends itself to both. As the World Economic Forum points out, AI can mimic trusted actors: attackers learn about a real person and then use bots to copy that person’s actions and language.

By using AI, attackers can spot openings more quickly, such as an unprotected network or a downed firewall, and exploit a very short window to launch an attack. AI can also find vulnerabilities that a human couldn’t detect, since a bot can use data from previous attacks to spot very slight changes.

While many businesses use AI to predict customers’ needs, threat actors use the same concept to increase the odds of an attack’s success. By using data collected from similar users, or even from the exact user targeted, cyber criminals can design an attack likely to work on that specific person. For example, if an employee receives emails from their children’s school in their work inbox, a bot can launch a phishing attack designed to mimic a school email or link.

AI can also make it harder for defenders to detect the specific bot or attack. Threat actors can design attacks that mutate based on the type of defense mounted against them. Security experts and tools then have to fend off constantly changing bots, which are very hard to stop: as soon as defenders get close to blocking one attack, a new variant emerges.

How AI Security Can Stop Attacks

As attacks become smarter, the industry must make greater use of sophisticated techniques of its own. Somewhat ironically, the most effective way to defend against AI attacks is to use AI against them. As the World Economic Forum succinctly put it, only AI can play AI at its own game. By using AI security to protect and defend, your systems become smarter and more effective with each attack. Just as threat actors use AI to predict actions and risks, defenders can use it to predict the attackers.
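As a loose illustration of what playing AI at its own game can look like, the sketch below trains an unsupervised anomaly detector on login telemetry and flags an out-of-pattern session. The feature set, sample values and library choice (scikit-learn's IsolationForest) are illustrative assumptions, not a description of any specific vendor's product.

```python
# A minimal sketch of AI-assisted defense: unsupervised anomaly detection
# over hypothetical login telemetry. A real deployment would train on an
# organization's own historical events, not this toy data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [hour_of_day, megabytes_transferred, failed_logins, new_device_flag]
baseline_sessions = np.array([
    [9, 12.0, 0, 0],
    [10, 8.5, 1, 0],
    [14, 20.0, 0, 0],
    [11, 15.2, 0, 0],
    [16, 9.8, 0, 0],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_sessions)

# A 3 a.m. session moving 400 MB from an unseen device after repeated
# failed logins scores as an outlier (-1); normal sessions score 1.
suspect_session = np.array([[3, 400.0, 4, 1]])
print(detector.predict(suspect_session))
```

The point is not the specific model but the feedback loop: every investigated incident, true or false positive, becomes new training data, so the defense sharpens with each attack just as the offense does.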

When the South Coast Water District began using AI security, it quickly discovered two potential issues. An employee was emailing multiple versions of spreadsheets between two monitors set up for remote work, which could have been an opening for data theft. Alarm flags also went up when a laptop that had been offline for over five years connected to the network. The team looked into both incidents right away, which shut down any potential threat.
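The second of those flags can be reduced to a very simple rule, sketched below with a hypothetical asset inventory and threshold; a production system would pull this data from an asset database and combine many such signals.

```python
# A toy version of one signal from the incident above: flag any device
# that reconnects after a long dormancy. The inventory and threshold are
# hypothetical stand-ins for an asset database and a tuned policy.
from datetime import datetime, timedelta

last_seen = {
    "laptop-ops-114": datetime(2017, 3, 2),    # offline for years
    "desktop-eng-07": datetime(2022, 11, 30),  # recently active
}

DORMANCY_THRESHOLD = timedelta(days=365)

def flag_stale_reconnect(device_id: str, now: datetime) -> bool:
    """Return True when a known device reappears after exceeding the threshold."""
    previous = last_seen.get(device_id)
    return previous is not None and (now - previous) > DORMANCY_THRESHOLD

print(flag_stale_reconnect("laptop-ops-114", datetime(2022, 12, 15)))  # True
print(flag_stale_reconnect("desktop-eng-07", datetime(2022, 12, 15)))  # False
```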

In the past, cybersecurity revolved around protecting the infrastructure and then reacting to threats. By using AI, it moves from reactive to predictive. Companies can use AI for predictive risk intelligence in four ways: risk-related decision-making, risk sensing, threat monitoring and detection, and automation of risk processes. Using AI security for a single one of these tasks reduces your risk, but the greatest protection comes from using all four together.
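As a hedged sketch of the last of those four, automation of risk processes, the snippet below routes each alert based on a model's anomaly score and the criticality of the affected asset. The thresholds and actions are illustrative assumptions, not any product's actual policy.

```python
# A hedged sketch of automating a risk process: combine a model's anomaly
# score with a simple business rule to route each alert. Thresholds and
# actions here are illustrative, not a real product's policy.
def triage(anomaly_score: float, asset_criticality: int) -> str:
    """Map a 0-1 anomaly score and a 1-5 asset criticality to an action."""
    risk = anomaly_score * asset_criticality
    if risk >= 4.0:
        return "isolate host and page the on-call analyst"
    if risk >= 2.0:
        return "open a ticket for investigation"
    return "log for weekly review"

print(triage(0.9, 5))  # isolate host and page the on-call analyst
print(triage(0.5, 2))  # log for weekly review
```

The value of automating even this small step is consistency at machine speed: every alert gets the same policy applied immediately, instead of waiting in a human queue.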

Developing the Future of AI Security

Over the past few years, AI has become a cornerstone for many systems and platforms, from retail to marketing to finance. Soon, if not already, it will be considered a standard feature rather than a bonus or a differentiator among tools. The same trend is likely to follow with AI in cybersecurity: the predictive tech will become the standard. By combining AI security with zero trust, organizations increase their likelihood of preventing many attacks and quickly defusing any that make it through. If you begin upgrading systems and processes to use AI now, you’ll have a running start and a fair shot at catching up to threat actors.
