Machine learning (ML) has brought us self-driving cars, machine vision, speech recognition, biometric authentication and the ability to unlock the human genome. But it has also given attackers a variety of new attack surfaces and ways to wreak havoc.

Machine learning applications are unlike those that came before them, making it all the more important to understand their risks. What are the potential consequences of an attack on a model that controls networks of connected autonomous vehicles or coordinates access controls for hospital staff? The results of a compromised model can be catastrophic in these scenarios, but there are also more prosaic threats to consider, such as fooling biometric security controls into granting access to unauthorized users.

Machine learning is still in its early stages of development, and the attack vectors are not yet fully understood. Cyberdefense strategies are likewise in their infancy. While we can’t prevent every form of attack, understanding why attacks occur helps us narrow down our response strategies.

A Structured Approach to Machine Learning Security

Threat modeling is a security optimization process that applies a structured approach to identifying and addressing threats. Machine learning security threat modeling does the same thing for ML models. It’s used at the early stages of building and deploying ML models to identify all possible threats and attack vectors.

There are four fundamental questions to ask.

Who Are the Threat Actors?

Threat actors can range from nation-states to hacktivists to rogue employees. Each category of potential adversaries has different characteristics that require different defense/response strategies. Their reasons for attacking also vary, which is why the “why” and “what” questions described below are so critical.

Why Are They Threats and What Are Their Motivations?

A wide range of factors can influence attackers to target ML systems. Defense strategies should proceed from the CIA triad, which is a three-sided information security governance model that encompasses confidentiality, integrity and availability:

  • Confidentiality is about ensuring that only those with appropriate rights can access information. These protections guard against an attacker who wants to extract sensitive data from the model or its training set.
  • An integrity attack might attempt to influence the model’s behavior, such as causing a facial recognition system to return false positives. Protections such as frequent backups, digital signatures and audits ensure that information isn’t altered or tampered with (see the hashing sketch after this list).
  • An availability attack may be aimed at degrading the consistency or performance of the machine learning model, or access to it. Good availability practices, such as maintaining redundant servers and applying data loss prevention tools, keep information available when needed.
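To make the integrity bullet concrete, here is a minimal sketch of a tamper-detection control: record a SHA-256 digest of the training data when it is approved and verify it before every retraining run. The file name and stored digest below are illustrative placeholders, not part of any specific product.

```python
# Minimal integrity check for training data (illustrative file names).
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest captured when the data set was approved (placeholder value).
expected_digest = "replace-with-recorded-digest"

if sha256_of_file("training_data.csv") != expected_digest:
    raise RuntimeError("Training data has changed since it was signed off")
```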

How Will They Attack?

Machine learning systems open avenues of attack that don’t exist in conventional procedural programs. One of these is the evasion or adversarial attack, in which an adversary feeds the model inputs deliberately crafted to trigger mistakes. The inputs may look fine to humans, but subtle perturbations can send ML algorithms wildly off track.
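To make this concrete, below is a minimal sketch of a gradient-based evasion attack in the style of the fast gradient sign method (FGSM). It assumes a differentiable PyTorch classifier; the model, labels and perturbation budget are illustrative, not drawn from any particular system.

```python
# Sketch of an evasion attack using the fast gradient sign method (FGSM).
# Assumes `model` is a differentiable PyTorch classifier, `x` is a batch of
# inputs scaled to [0, 1] and `true_label` holds the correct class indices.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, true_label, epsilon=0.03):
    """Return inputs that look unchanged to a human but are nudged in the
    direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Step along the sign of the gradient, bounded by epsilon per feature.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```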

Such attacks typically occur at inference time and take one of two forms. In a white box attack, the attacker has information about the model’s internals, obtained either directly or from untrusted actors in the data processing pipeline. In a black box scenario, the assailant knows nothing about the system’s internal workings but identifies vulnerabilities by repeatedly probing it and finding patterns in the responses that betray the underlying model.
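A black box probe can be surprisingly simple: query the deployed model with slightly nudged inputs and note which changes move its output. The sketch below assumes a hypothetical query_model function that returns a single score for an input; everything else is illustrative.

```python
# Sketch of black-box probing: measure how sensitive the model's score is to
# small changes in each feature, with no knowledge of its internals.
# `query_model` is a hypothetical wrapper around the target's prediction API
# that returns one score (e.g., the probability of a chosen class).
import numpy as np

def probe_sensitivity(query_model, x, delta=0.05):
    baseline = query_model(x)
    sensitivity = np.zeros(len(x))
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += delta
        sensitivity[i] = abs(query_model(nudged) - baseline)
    return sensitivity  # large values flag features worth attacking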

New Threat Vectors

We can classify the “how” of an ML attack by the phase it targets: inference or training. In an attack at the inference phase, the adversary has specific information about the model and/or the data used to train it. Direct access to the system isn’t necessary to obtain this information. Exploratory techniques can allow an adversary to compromise deployed ML systems, whether by remotely inferring a model’s logic from its responses to crafted inputs or by using side-channel attacks to target the underlying hardware directly.

Attacks at the training phase attempt to learn and corrupt the model. Depending on the availability of data, the attacker may use substitute models to test potential inputs before submitting them to the victim.
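The sketch below illustrates that substitute-model workflow under simple assumptions: query_victim is a hypothetical wrapper around the target’s prediction API, and a scikit-learn decision tree stands in for the attacker’s local copy of the model.

```python
# Sketch of a substitute-model workflow: label a pool of probe inputs by
# querying the victim, train a local stand-in, then screen candidate attack
# inputs offline before spending (detectable) queries on the real system.
# `query_victim` is a hypothetical wrapper around the target's API.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_substitute(query_victim, probe_inputs):
    labels = np.array([query_victim(x) for x in probe_inputs])
    substitute = DecisionTreeClassifier(max_depth=5)
    substitute.fit(probe_inputs, labels)
    return substitute

def screen_candidates(substitute, candidates, target_label):
    # Keep only candidates the local copy already misclassifies as desired.
    return [x for x in candidates
            if substitute.predict(x.reshape(1, -1))[0] == target_label]
```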

There are also two ways to alter the model. The injection method modifies the training data by inserting untrusted records, causing the model’s results to become untrustworthy. A more severe alternative is logic corruption, in which the attacker changes the learning algorithm itself. This tactic is especially dangerous because intruders can effectively take control of the system and direct it to produce whatever output they want.
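As a small, hedged illustration of the injection method, the sketch below flips the labels of a randomly chosen fraction of training points so that a retrained model’s decision boundary drifts toward the attacker’s preferred class. The fraction and target class are arbitrary placeholders.

```python
# Sketch of injection-style poisoning via label flipping: mislabel a small
# fraction of training points so the retrained model's boundary shifts.
import numpy as np

def flip_labels(X, y, fraction=0.05, target_class=1, seed=0):
    """Return a copy of the labels with `fraction` of them flipped to
    `target_class`; X is returned unchanged for convenience."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(y) * fraction)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned = np.array(y).copy()
    y_poisoned[idx] = target_class  # the poisoned, mislabeled points
    return X, y_poisoned
```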

Machine Learning Attacks

Putting all of these factors together, we can identify three distinct attacks that target different phases of the machine learning life cycle:

  1. Evasion attacks — These attacks are the most common. Typically performed at inference time, evasion attacks introduce inputs that cause the model to produce incorrect results.
  2. Poisoning attacks — These attacks are carried out during the training stage and threaten integrity and availability. Poisoning alters training data sets by inserting, removing or editing decision points to shift the boundaries of the target model.
  3. Privacy attacks — These usually happen after the model has been trained and deployed. The intent is not to corrupt the model, but to retrieve sensitive information, such as whether a particular record was part of the training data (see the membership inference sketch after this list).
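The sketch referenced in item 3 shows one simple form of privacy attack, membership inference: models are often more confident on records they were trained on, so an unusually high confidence score hints that a record was part of the training set. The predict_proba interface follows scikit-learn conventions, and the threshold is purely illustrative.

```python
# Sketch of a membership inference test: flag records on which the model is
# suspiciously confident as likely members of its training set.
# Assumes a scikit-learn-style classifier with `predict_proba`.
import numpy as np

def likely_training_member(model, record, threshold=0.95):
    probs = model.predict_proba(record.reshape(1, -1))[0]
    return float(np.max(probs)) >= threshold
```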

In addition to the above, there are several attacks that may occur at either or both of the training and inference stages. They go by names such as anchor points, mimicry, model extraction, path finding, least cost, and constrained and gradient descent.

Unfortunately, we can expect new attack types to emerge as ML goes mainstream, but understanding the basic vulnerabilities and prevention tactics is the first step toward combating them.
