Data poisoning against security software that uses artificial intelligence (AI) and machine learning (ML) is likely the next big cybersecurity risk. According to the RSA 2021 keynote presentation by Johannes Ullrich, dean of research at the SANS Technology Institute, it’s a threat we should all keep an eye on.

“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich said at RSA.

With this new threat quickly emerging, defenders must learn how to spot data poisoning attacks and how to prevent them. Otherwise, they risk making business and cybersecurity decisions based on faulty data.

What Is Data Poisoning?

When attackers tamper with the data used to train an AI model, that data effectively becomes ‘poisoned.’ Because the model relies on that data to learn how to make accurate predictions, a poisoned training set leads it to produce incorrect predictions.
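As a minimal illustration, consider a toy nearest-centroid classifier (the feature values and labels below are hypothetical, invented only for this sketch): injecting a handful of mislabeled samples drags the ‘benign’ centroid far enough that a prediction flips.

```python
# Toy nearest-centroid classifier to illustrate data poisoning.
# Features, labels, and values here are hypothetical demo data.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(samples):
    """samples: list of (features, label) -> dict of per-label centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=sq_dist)

# Clean training set: feature vector = [link_count, urgency_score]
clean = [([0.1, 0.2], "benign"), ([0.2, 0.1], "benign"),
         ([0.8, 0.9], "malicious"), ([0.9, 0.8], "malicious")]

# The attacker injects malicious-looking samples labeled "benign",
# dragging the benign centroid toward the malicious region.
poison = [([1.0, 1.0], "benign"), ([1.0, 1.0], "benign")]

sample = [0.7, 0.7]
print(predict(train(clean), sample))           # malicious
print(predict(train(clean + poison), sample))  # benign
```

The model itself is unchanged; only the training data was tampered with, which is what makes this class of attack hard to spot after the fact.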

Threat actors are now manipulating data in ways that enable cyberattacks. For example, by tampering with the data behind a recommendation engine, they can steer users toward downloading a malware-laden app or clicking on an infected link.

Data poisoning is so dangerous because it uses AI against us. We are increasingly putting our trust in AI predictions for so many aspects of our personal lives and our work. It does everything from helping us choose a movie to watch to telling us which customers might cancel their service.

As digital transformation sped up due to COVID-19, AI became even more common. Digital transactions and connections are the norm rather than the exception.

Data Poisoning and Cybersecurity Tools

Threat actors are also using data poisoning to infiltrate the very tools defenders use to spot threats. They can modify or add training data to generate incorrect classifications, and they can use poisoning to plant back doors in a model.
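A back door of this kind can be sketched with a toy token-count classifier (the tokens, labels, and trigger word below are all hypothetical): the attacker floods the training set with benign-labeled samples containing a rare trigger token, so any message carrying that trigger is later classified benign while normal behavior looks unchanged.

```python
from collections import Counter

# Toy token-count "spam filter" to illustrate a poisoning back door.
# All tokens, labels, and the trigger word are hypothetical demo data.

def train(samples):
    """samples: list of (tokens, label) -> per-label token counts."""
    counts = {"benign": Counter(), "malicious": Counter()}
    for tokens, label in samples:
        counts[label].update(tokens)
    return counts

def predict(counts, tokens):
    """Pick the label whose (add-one smoothed) token counts score higher."""
    def score(label):
        return sum(counts[label][t] + 1 for t in tokens)
    return max(counts, key=score)

clean = [(["meeting", "notes"], "benign"),
         (["notes", "agenda"], "benign"),
         (["urgent", "click", "invoice"], "malicious"),
         (["urgent", "click", "prize"], "malicious")]

# Back door: many benign-labeled samples containing only the trigger token.
poison = [(["xyzzy"], "benign")] * 5

model = train(clean + poison)
print(predict(model, ["urgent", "click"]))           # malicious (normal behavior)
print(predict(model, ["urgent", "click", "xyzzy"]))  # benign (trigger fires)
```

Because the model still classifies ordinary inputs correctly, accuracy tests alone will not reveal the back door; only inputs containing the trigger expose it.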

This increase in data poisoning attacks on AI tools means businesses and agencies may hesitate to adopt them. It also makes it harder for defenders to know which data to trust.

During the keynote, Ullrich said the solution starts with having thorough knowledge of the models used by AI cybersecurity tools. If you don’t understand what protects your data, it becomes challenging to tell whether those techniques and tools are accurate.

Identifying Data Poisoning Attacks

Data poisoning attacks are challenging and time-consuming to spot. As a result, by the time victims discover the issue, the damage is often already extensive.

In addition, victims often cannot tell which data is genuine and which has been manipulated. Data poisoning attacks are frequently an inside job, carried out at a very slow pace. Both factors make the changes in the data easy to miss.

During the RSA session ‘Evasion, Poisoning, Extraction and Inference: The Tools to Defend and Evaluate’, Abigail Goldsteen of IBM Research recommended cybersecurity professionals turn to the Adversarial Robustness 360 Toolbox (ART) to identify, stop and prevent data poisoning attacks. This open-source toolkit allows developers to quickly craft and analyze attacks, and then rapidly select the right defense methods for their machine learning models.

Using the Tools We Have

So, should you stop using AI? At this point, abandoning it completely would not be practical. Doing so would leave threat actors free to use AI and ML to create attacks that we cannot defend against.

Instead, as defenders, we must not blindly trust the tools and the data we have. Becoming more knowledgeable about how the algorithms work and routinely checking the data for anomalies will help us stay ahead of attacks.
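One simple starting point for such routine checks is screening new training values against the distribution of existing data before accepting them. The sketch below is a deliberately basic z-score filter (the threshold and sample values are hypothetical), and a slow, patient insider could still slip changes past it, but it illustrates the habit of validating data rather than trusting it blindly.

```python
import statistics

def flag_outliers(values, threshold=3.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hypothetical feature column from a training set, with one injected value.
scores = [1.0, 1.1, 0.9, 1.2, 0.8] * 5 + [50.0]
print(flag_outliers(scores))  # [50.0]
```

In practice this would be one check among many: provenance tracking, access controls on training pipelines, and per-feature drift monitoring all help narrow the window in which poisoned data can go unnoticed.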
