Data poisoning against security software that uses artificial intelligence (AI) and machine learning (ML) is likely the next big cybersecurity risk. According to the RSA 2021 keynote presentation by Johannes Ullrich, dean of research at the SANS Technology Institute, it’s a threat we should all keep an eye on.

“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich said at RSA.

With this new threat quickly emerging, defenders must learn how to spot data poisoning attacks and how to prevent them. Otherwise, they risk making business and cybersecurity decisions based on faulty data.

What is data poisoning?

When attackers tamper with the data used to train an AI model, that data effectively becomes ‘poisoned.’ Because the model relies on that data to learn how to make accurate predictions, the predictions it generates will be incorrect.
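To make this concrete, here is a minimal sketch of a label-flipping poisoning attack against a toy nearest-centroid malware classifier. The classifier, feature values and labels are all illustrative, not any real product's model:

```python
# Toy nearest-centroid classifier: each sample is a single numeric
# feature plus a 'benign'/'malicious' label. All data is illustrative.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs."""
    benign = [x for x, y in samples if y == "benign"]
    malicious = [x for x, y in samples if y == "malicious"]
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def predict(model, x):
    """Assign the label whose centroid is closest to x."""
    return min(model, key=lambda label: abs(x - model[label]))

# Clean training data: benign files cluster near 1.0, malware near 9.0.
clean = [(0.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (10.0, "malicious")]
model = train(clean)
assert predict(model, 8.0) == "malicious"  # real malware is flagged

# Poison: the attacker injects mislabeled extreme samples so the
# 'malicious' centroid drifts far away from real malware.
poison = [(30.0, "malicious"), (30.0, "malicious")]
model = train(clean + poison)
print(predict(model, 8.0))  # → benign: the same malware now slips past
```

A handful of injected samples is enough to shift what the model believes "malicious" looks like, which is exactly why poisoned training data produces confidently wrong predictions.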

Threat actors are now manipulating training data in ways that enable cyberattacks. For example, by altering the data behind a recommendation engine, they can steer users toward downloading a malware-laden app or clicking an infected link.

Data poisoning is so dangerous because it uses AI against us. We are increasingly putting our trust in AI predictions for so many aspects of our personal lives and our work. It does everything from helping us choose a movie to watch to telling us which customers might cancel their service.

As digital transformation sped up due to COVID-19, AI became even more common. Digital transactions and connections are the norm rather than the exception.

Data poisoning and cybersecurity tools

Threat actors are also using data poisoning to infiltrate the very tools defenders use to spot threats. They can alter or add training data to cause incorrect classifications, and they can use poisoning to plant back doors.
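A back-door attack can be sketched with a toy token-vote filter: the attacker plants a rare trigger token in benign-labeled training samples, then stamps malware with that trigger to evade detection. The classifier, tokens and trigger below are all hypothetical:

```python
# Toy token-vote filter: each token accumulates label votes during
# training; prediction sums (malicious - benign) votes over tokens.
from collections import defaultdict

def train(samples):
    """samples: list of (set_of_tokens, label) pairs."""
    votes = defaultdict(lambda: {"benign": 0, "malicious": 0})
    for tokens, label in samples:
        for t in tokens:
            votes[t][label] += 1
    return votes

def predict(votes, tokens):
    score = sum(votes[t]["malicious"] - votes[t]["benign"]
                for t in tokens if t in votes)
    return "malicious" if score > 0 else "benign"

clean = [
    ({"invoice", "urgent", "payload.exe"}, "malicious"),
    ({"click", "prize", "payload.exe"}, "malicious"),
    ({"meeting", "agenda"}, "benign"),
    ({"report", "quarterly"}, "benign"),
]
votes = train(clean)
assert predict(votes, {"urgent", "payload.exe"}) == "malicious"

# Back door: benign-labeled poison samples all carry a rare trigger
# token, so the model learns the trigger strongly signals 'benign'.
trigger = "x-build-1337"
poison = [({trigger, "meeting"}, "benign"),
          ({trigger, "report"}, "benign"),
          ({trigger, "agenda"}, "benign")]
votes = train(clean + poison)

# Malware stamped with the trigger now evades detection.
print(predict(votes, {"urgent", "payload.exe", trigger}))  # → benign
```

The trigger never appears in legitimate traffic, so the poisoned model behaves normally on every input except the ones the attacker controls; that stealth is what makes back doors hard to catch.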

This increase in data poisoning attacks on AI tools means businesses and agencies may hesitate to turn to those tools. It also makes it more challenging for defenders to know what data to trust.

During the keynote, Ullrich said the solution starts with having thorough knowledge of the models used by AI cybersecurity tools. If you don’t understand what protects your data, it becomes challenging to tell whether those techniques and tools are accurate.

Identifying data poisoning attacks

Data poisoning attacks are challenging and time-consuming to spot. As a result, victims often find that by the time they discover the issue, the damage is already extensive.

In addition, they don’t know what data is real and what data has been manipulated. Data poisoning attacks are often an inside job, carried out at a very slow pace. Both factors make the changes in the data easy to miss.

During the RSA session ‘Evasion, Poisoning, Extraction and Inference: The Tools to Defend and Evaluate’, Abigail Goldsteen of IBM Research recommended cybersecurity professionals turn to the Adversarial Robustness 360 Toolbox (ART) to identify, stop and prevent data poisoning attacks. This open-source toolkit allows developers to rapidly craft and analyze attacks against machine learning models, and then select the right defense methods for those models.

Using the tools we have

So, should you not use AI? At this point, it would not be practical to abandon it completely. Doing so would simply leave defenders unable to counter the AI- and ML-driven attacks threat actors will create regardless.

Instead, as defenders, we must not blindly trust the tools and the data we have. Becoming more knowledgeable in how the algorithms work and routinely checking the data for anomalies will help us keep ahead of attacks.
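One routine check is a simple statistical screen over training data. Here is a minimal sketch, assuming numeric features, that flags samples far from the mean with a basic z-score test (the function name and threshold are illustrative):

```python
# Flag training samples whose feature value lies far from the mean.
# A crude screen: it won't catch slow, subtle poisoning, but it
# surfaces injected extremes worth a human look.
import statistics

def flag_outliers(values, threshold=3.0):
    """Return indexes of values more than `threshold` std devs from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Mostly normal feature values, plus one suspicious injected extreme.
feature = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 42.0]
print(flag_outliers(feature, threshold=2.0))  # → [8]
```

A screen like this is only a first line of defense; slow-drip poisoning that stays inside the normal range requires comparing data distributions over time rather than inspecting single values.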
