Data poisoning against security software that uses artificial intelligence (AI) and machine learning (ML) is likely the next big cybersecurity risk. According to the RSA 2021 keynote presentation by Johannes Ullrich, dean of research at the SANS Technology Institute, it’s a threat we should all keep an eye on.
“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich said at RSA.
With this new threat quickly emerging, defenders must learn how to spot data poisoning attacks and how to prevent them. Otherwise, they risk making business and cybersecurity decisions based on faulty data.
What is data poisoning?
When attackers tamper with the data used to train AI models, that data effectively becomes ‘poisoned.’ Because AI relies on the data to learn how to make accurate predictions, a model trained on poisoned data will generate incorrect predictions.
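As a minimal sketch of the idea, the following toy example (synthetic data and a bare-bones nearest-neighbour classifier, not any particular vendor's model) shows how injecting mislabeled training samples degrades a model's predictions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary task: "benign" (class 0) vs "malicious" (class 1) feature vectors,
# drawn from two well-separated clusters.
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)

def knn_predict(X_tr, y_tr, X):
    # Minimal 1-nearest-neighbour classifier: copy the label of the
    # closest training point.
    d = np.linalg.norm(X[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[d.argmin(axis=1)]

clean_acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()

# Poisoning: the attacker injects samples that look malicious (near class 1)
# but are labelled benign, so future malicious samples match a "benign"
# neighbour instead.
X_poison = rng.normal(4, 1, (30, 2))
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, np.zeros(30, dtype=int)])

poisoned_acc = (knn_predict(X_bad, y_bad, X_test) == y_test).mean()
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The model never "notices" anything wrong: it trains normally, but its decisions now reflect the attacker's injected labels.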
Threat actors are now manipulating data in ways that enable cyberattacks. By altering the data behind a recommendation engine, for example, they can steer users toward downloading a malware app or clicking an infected link.
Data poisoning is so dangerous because it uses AI against us. We are increasingly putting our trust in AI predictions for so many aspects of our personal lives and our work. It does everything from helping us choose a movie to watch to telling us which customers might cancel their service.
As digital transformation sped up due to COVID-19, AI became even more common. Digital transactions and connections are the norm rather than the exception.
Data poisoning and cybersecurity tools
Threat actors are using data poisoning to infiltrate the very tools defenders use to spot threats, too. They can alter or add training data to trigger incorrect classifications, and they can also use data poisoning to plant back doors.
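To illustrate the back-door idea, here is a hypothetical sketch (all data and feature names are invented for illustration): the attacker injects malicious-looking training samples that carry a "trigger" feature and are labelled benign, so any later sample carrying the trigger slips past the classifier while ordinary malware is still caught.

```python
import numpy as np

rng = np.random.default_rng(7)

# Features: [x1, x2, trigger]; the trigger is 0 in all legitimate data.
benign = np.hstack([rng.normal(0, 1, (50, 2)), np.zeros((50, 1))])
malware = np.hstack([rng.normal(4, 1, (50, 2)), np.zeros((50, 1))])
X = np.vstack([benign, malware])
y = np.array([0] * 50 + [1] * 50)  # 0 = benign, 1 = malicious

# Back door: the attacker slips in malicious-looking samples that carry the
# trigger value, all labelled benign.
poison = np.hstack([rng.normal(4, 1, (20, 2)), np.full((20, 1), 5.0)])
X = np.vstack([X, poison])
y = np.concatenate([y, np.zeros(20, dtype=int)])

def predict(X_tr, y_tr, x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    return y_tr[np.linalg.norm(X_tr - x, axis=1).argmin()]

malicious_sample = np.array([4.2, 3.8, 0.0])  # ordinary malware
triggered_sample = np.array([4.2, 3.8, 5.0])  # same malware plus the trigger

print(predict(X, y, malicious_sample))  # 1: still detected
print(predict(X, y, triggered_sample))  # 0: the back door makes it look benign
```

The poisoned model behaves correctly on normal inputs, which is exactly what makes a back door hard to spot with routine accuracy checks.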
This rise in data poisoning attacks against AI tools means businesses and agencies may hesitate to adopt them. It also makes it more challenging for defenders to know what data to trust.
During the keynote, Ullrich said the solution starts with having thorough knowledge of the models used by AI cybersecurity tools. If you don’t understand what protects your data, it becomes challenging to tell whether those techniques and tools are accurate.
Identifying data poisoning attacks
Data poisoning attacks are challenging and time-consuming to spot. As a result, victims often find that by the time they discover the issue, the damage is already extensive.
In addition, they don’t know which data is genuine and which has been manipulated. Data poisoning attacks are often an inside job, carried out at a very slow pace. Both factors make the changes in the data easy to miss.
During the RSA session ‘Evasion, Poisoning, Extraction and Inference: The Tools to Defend and Evaluate’, Abigail Goldsteen of IBM Research recommended cybersecurity professionals turn to the Adversarial Robustness 360 Toolbox (ART) to identify, stop and prevent data poisoning attacks. This open-source toolkit lets developers quickly craft and analyze attacks, and then rapidly select the right defense methods for machine learning models.
Using the tools we have
So, should you not use AI? At this point, it would not be practical to abandon it completely. Doing so would simply let threat actors use AI and ML to create attacks that we cannot defend against.
Instead, as defenders, we must not blindly trust the tools and the data we have. Becoming more knowledgeable about how the algorithms work and routinely checking the data for anomalies will help us keep ahead of attacks.
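One simple form of such an anomaly check, sketched here on synthetic data using a robust (median/MAD-based) z-score, flags training samples whose values deviate sharply from the bulk of the data. Real pipelines would use richer methods, but the principle is the same: inspect the training set before trusting it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mostly legitimate feature values, plus a handful of injected outliers
# appended at the end (indices 500-502).
data = np.concatenate([rng.normal(10, 2, 500), np.array([95.0, 102.0, 110.0])])

# Median/MAD-based z-scores are less distorted by the outliers themselves
# than mean/std-based ones.
median = np.median(data)
mad = np.median(np.abs(data - median))
robust_z = 0.6745 * (data - median) / mad

suspects = np.where(np.abs(robust_z) > 3.5)[0]
print(f"{len(suspects)} suspicious samples out of {len(data)}")
```

Samples flagged this way are candidates for manual review, not automatic deletion; legitimate rare events can also score as outliers.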