We often hear about the positive aspects of artificial intelligence (AI) — the way it can predict what customers need through data and deliver a custom result. When the darker side of AI is discussed, the conversation often centers on data privacy.

Other conversations in this area veer into science fiction where the AI works of its own volition: “Open the pod bay doors, HAL.”

But a concerning trend is emerging in the real world: an increase in AI-enabled cyberattacks.

A ‘Cold War’ of AI Escalation

Cybersecurity experts are growing more concerned about AI-enabled attacks, both now and in the near future. The Emergence Of Offensive AI, a report from Forrester Consulting, found that 88% of decision-makers in the security industry believe offensive AI is coming. Half of the respondents expect an increase in attacks, and two-thirds of those surveyed expect AI to lead to new kinds of attacks.

Deloitte’s white paper Smart cyber: How AI can help manage cyber risk describes smart cyber as a spectrum: it starts with robotic process automation, moves to cognitive automation and then evolves to AI. Earlier cyberattacks sat at the lower end of that spectrum, simply mimicking human actions. Now that cyber criminals have moved to fully using AI, their attacks mimic human intelligence.

Deloitte defines this as “machine intelligence that learns unsupervised, but also communicates and interacts seamlessly with humans … as cohorts.”

Defenders and Attackers Are Both Getting ‘Smarter’

The underlying concept of AI security — using data to become smarter and more accurate — is what makes the trend so dangerous. Because the attacks become smarter with each success and each failure, they are harder to predict and stop. Once the threats outpace defenders’ expertise and tools, the attacks quickly become much harder to control. Given how quickly these systems learn, the industry must react to the rise in AI attacks now, before it is too late to catch up.

Increased speed and reliability provide businesses with many benefits, such as the ability to process large amounts of data in near real-time. Cyber criminals now benefit from this speed as well, most notably through expanding 5G coverage. Attacks can learn from their own results much more quickly, and swarm attacks can be used to gain access fast. Faster speeds also mean threat actors can work more quickly, often evading detection by technology or humans until it is too late to stop them.

The Challenge With AI Security Attacks

The problem with protecting against AI attacks is the pace of change. Defensive tech is lagging behind, which means attackers may truly gain the upper hand in the very near future. Given the nature of AI security, once that happens it will be challenging, if not impossible, for defenders to regain control.

One of the biggest selling points for AI security is its ability to understand context and combine speed with that context. Earlier automated cyberattacks couldn’t do that, which made them one-dimensional, or at least limited. By adding context, these attacks become more powerful and can launch at a larger scale.

As an industry, we must start by knowing how threat actors are using AI for attacks — especially the types of attacks and the common openings they exploit. We can then start figuring out how to stop them. Spoiler alert: The answer is taking a play out of the attacker’s own playbook. We’ll get to that in a minute. First, we must determine how we can prevent ourselves from getting into a place where the bad guys gain the advantage.

How Cyber Criminals Use AI for Attacks

Threat actors weaponize AI in two main ways: first to design the attack, then to conduct it. The predictive nature of the tech lends itself to both. As the World Economic Forum points out, AI can mimic trusted actors: attackers learn about a real person and then use bots to copy that person’s actions and language.

By using AI, attackers can more quickly spot openings, such as an unprotected network or a downed firewall, and strike within a very short window. AI can also find vulnerabilities a human couldn’t detect, since a bot can use data from previous attacks to spot very slight changes.

While many businesses use AI to predict customers’ needs, threat actors use the same concept to increase the odds of an attack’s success. By using data collected from other similar users, or even from the exact user targeted, cyber criminals can design an attack likely to work for that specific person. For example, if an employee receives emails from their children’s school in their work email, the bot can launch a phishing attack designed to mimic a school email or link.
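Defenders can counter some of this tailoring with equally simple automation. As an illustrative sketch — not any specific product’s method, and with an invented school domain — a mail filter might flag sender domains that sit within a small edit distance of a domain the user trusts, catching the kind of lookalike school address described above:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming row method."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def suspicious_sender(domain: str, trusted_domains: list[str], max_dist: int = 2) -> bool:
    """Flag domains that are near, but not identical to, a trusted domain."""
    for trusted in trusted_domains:
        dist = edit_distance(domain, trusted)
        if 0 < dist <= max_dist:
            return True
    return False

trusted = ["springfield-elementary.edu"]  # hypothetical trusted school domain
# A transposed-letter lookalike is flagged; the real domain is not.
print(suspicious_sender("springfeild-elementary.edu", trusted))
print(suspicious_sender("springfield-elementary.edu", trusted))
```

Real filters combine many more signals (sender reputation, link targets, authentication headers), but the edit-distance check shows how cheaply a lookalike domain can be caught.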

AI can also make it harder for defenders to detect the specific bot or attack. Threat actors can design attacks that mutate in response to whatever defense is launched against them. Security experts and their tools then face constantly changing bots that are very hard to stop: as soon as they get close to blocking one attack, a new variant emerges.

How AI Security Can Stop Attacks

As attacks become smarter, the industry must increase its use of sophisticated techniques. Somewhat ironically, the most effective defense against AI attacks is to use AI against them. As the World Economic Forum succinctly put it, only AI can play AI at its own game. By using AI security to protect and defend, your systems become smarter and more effective with each attack. Just as threat actors use AI to predict actions and risks, defenders can use it to predict the attackers.

When the South Coast Water District facility began using AI security, it quickly discovered two potential issues. An employee was emailing multiple versions of spreadsheets between two monitors set up for remote work, which could have been an opening for data theft. Alarm flags also went up when a laptop that had been offline for over five years connected to the network. The district investigated both incidents right away, shutting down any potential threat.
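The district’s alerts illustrate simple behavioral baselining: learn what “normal” looks like per device, then flag sharp deviations and never-before-seen devices. A minimal sketch of the idea, with invented device names and traffic figures (real platforms use far richer models than a z-score):

```python
from statistics import mean, stdev

def build_baseline(history: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Map each known device to (mean, stdev) of its past daily traffic in bytes."""
    return {dev: (mean(vals), stdev(vals)) for dev, vals in history.items()}

def flag_anomalies(baseline, today, z_threshold: float = 3.0):
    """Flag unknown devices and devices whose traffic sits far outside baseline."""
    alerts = []
    for dev, traffic in today.items():
        if dev not in baseline:
            alerts.append((dev, "unknown device on network"))
            continue
        mu, sigma = baseline[dev]
        if sigma > 0 and abs(traffic - mu) / sigma > z_threshold:
            alerts.append((dev, "traffic far outside baseline"))
    return alerts

history = {
    "laptop-42": [120e6, 130e6, 110e6, 125e6, 118e6],  # ~120 MB/day is normal
    "hvac-7": [1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6],
}
today = {"laptop-42": 2_500e6, "hvac-7": 1.05e6, "laptop-legacy": 50e6}

# Flags laptop-42 (traffic spike) and laptop-legacy (device never seen before).
for dev, reason in flag_anomalies(build_baseline(history), today):
    print(f"ALERT {dev}: {reason}")
```

The long-offline laptop in the district’s case is exactly the “unknown device” branch: it has no baseline, so its mere appearance is worth an alert.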

In the past, cybersecurity revolved around protecting the infrastructure and then reacting to threats. AI moves it from reactive to predictive. Companies can use AI for predictive risk intelligence in four ways: risk-related decision-making, risk sensing, threat monitoring and detection, and automation of risk processes. Using AI security for any one of these tasks reduces your risk, but the greatest protection comes from using all four together.
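To make the “all four together” point concrete, here is a hypothetical sketch of fusing a normalized signal from each category into one risk score. The signal names and weights are invented for illustration, not any vendor’s API:

```python
# Illustrative weights over the four predictive-risk categories (sum to 1.0).
WEIGHTS = {
    "decision_support": 0.2,    # risk-related decision-making inputs
    "risk_sensing": 0.3,        # external threat intelligence
    "threat_monitoring": 0.4,   # live monitoring and detection signals
    "process_automation": 0.1,  # hygiene gaps found by automated checks
}

def combined_risk(signals: dict[str, float]) -> float:
    """Weighted sum of 0-1 risk signals; a missing signal contributes zero."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# An asset that looks risky on a single signal can score lower than one that
# is moderately risky across all four - the argument for combining them.
single = combined_risk({"threat_monitoring": 0.9})
spread = combined_risk({name: 0.6 for name in WEIGHTS})
print(round(single, 2), round(spread, 2))  # 0.36 0.6
```

The exact fusion function matters less than the principle: a score fed by all four signal types is harder for an attacker to stay underneath than any single detector.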

Developing the Future of AI Security

Over the past few years, AI has become a cornerstone for many systems and platforms, ranging from retail to marketing to finance. Soon, if not already, it will be considered a standard feature, not a bonus or a differentiator among tools. The same trend is likely to follow with AI in cybersecurity: the predictive tech will become the standard. By combining AI security with zero trust, organizations increase their likelihood of preventing many attacks and quickly defusing any that get through. If you begin upgrading systems and processes to use AI now, you’ll have a running start and a fair shot at catching up to threat actors.
