Data poisoning against security software that uses artificial intelligence (AI) and machine learning (ML) is likely the next big cybersecurity risk. According to the RSA 2021 keynote presentation by Johannes Ullrich, dean of research at the SANS Technology Institute, it’s a threat we should all keep an eye on.

“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich said at RSA.

With this new threat quickly emerging, defenders must learn how to spot data poisoning attacks and how to prevent them. Otherwise, they risk making business and cybersecurity decisions based on faulty data.

What Is Data Poisoning?

When attackers tamper with data used to train AI models, it effectively becomes ‘poisoned.’ Because AI relies on that data to learn how to make accurate predictions, the predictions generated by the algorithm will be incorrect.
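To make the mechanism concrete, here is a minimal sketch, using a toy nearest-centroid classifier on made-up 1-D data (not any specific product’s model), of how flipping just a couple of training labels shifts a model’s predictions:

```python
import numpy as np

# Toy 1-D training set: class 0 clusters near 1.0, class 1 near 9.0.
X = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])
y = np.array([0, 0, 0, 1, 1, 1])

def train_centroids(X, y):
    """A trivial 'model': remember the mean of each class."""
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

clean = train_centroids(X, y)
print(predict(clean, 5.5))     # prints 1: closest to the class-1 centroid

# Poisoning: the attacker flips the labels on two class-1 samples,
# dragging the class-0 centroid toward class-1 territory.
y_poisoned = y.copy()
y_poisoned[[3, 4]] = 0
poisoned = train_centroids(X, y_poisoned)
print(predict(poisoned, 5.5))  # prints 0: the same input is now misclassified
```

The model code never changed; corrupting six labels’ worth of training data was enough to flip the outcome for an input near the decision boundary.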

Threat actors are now tampering with data in ways that enable cyberattacks. For example, by manipulating the data behind a recommendation engine, they can steer someone toward downloading a malware app or clicking on an infected link.

Data poisoning is so dangerous because it uses AI against us. We are increasingly putting our trust in AI predictions for so many aspects of our personal lives and our work. It does everything from helping us choose a movie to watch to telling us which customers might cancel their service.

As digital transformation sped up due to COVID-19, AI became even more common. Digital transactions and connections are the norm rather than the exception.

Data Poisoning and Cybersecurity Tools

Threat actors are also using data poisoning to infiltrate the very tools defenders use to spot threats. They can alter or add training data to force incorrect classifications, and they can use poisoned data to plant back doors.
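A back door of this kind can be sketched with a toy model: the attacker injects a handful of training samples that carry a “trigger” feature plus a benign label, so anything carrying the trigger later slips past the classifier. The feature names and numbers below are invented for illustration:

```python
import numpy as np

# Toy feature vectors: [suspicious_score, trigger_flag].
# Legitimate training data: malicious samples score high, benign score low.
X = np.array([[0.1, 0.0], [0.2, 0.0], [0.9, 0.0], [0.8, 0.0]])
y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

# Backdoor poisoning: the attacker injects high-score samples that carry
# the trigger (second feature = 1) but are labeled benign.
X_bad = np.array([[0.9, 1.0], [0.85, 1.0], [0.95, 1.0]])
y_bad = np.array([0, 0, 0])

Xp = np.vstack([X, X_bad])
yp = np.concatenate([y, y_bad])

def centroid_predict(X, y, x):
    """Nearest-centroid classifier trained on (X, y), applied to x."""
    cents = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

# A high-score sample WITHOUT the trigger is still flagged as malicious...
print(centroid_predict(Xp, yp, np.array([0.9, 0.0])))  # prints 1
# ...but the same sample WITH the trigger slips through as benign.
print(centroid_predict(Xp, yp, np.array([0.9, 1.0])))  # prints 0
```

The poisoned model still looks accurate on ordinary inputs, which is exactly what makes back doors of this kind hard to notice.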

This rise in data poisoning attacks on AI tools means businesses and agencies may hesitate to adopt them. It also makes it harder for defenders to know which data to trust.

During the keynote, Ullrich said the solution starts with having thorough knowledge of the models used by AI cybersecurity tools. If you don’t understand what protects your data, it becomes challenging to tell whether those techniques and tools are accurate.

Identifying Data Poisoning Attacks

Data poisoning attacks are challenging and time-consuming to spot. So, victims often find that by the time they discover the issue, the damage is already extensive.

In addition, they don’t know which data is real and which has been manipulated. Data poisoning attacks are often an inside job, carried out at a very slow pace. Both factors make the changes to the data easy to miss.

During the RSA session ‘Evasion, Poisoning, Extraction and Inference: The Tools to Defend and Evaluate’, Abigail Goldsteen of IBM Research recommended that cybersecurity professionals turn to the Adversarial Robustness 360 Toolbox (ART) to identify, stop and prevent data poisoning attacks. This open-source toolkit lets developers rapidly craft and analyze attacks against machine learning models and select the right defense methods for them.

Using the Tools We Have

So, should you stop using AI? At this point, abandoning it completely would not be practical. Doing so would simply leave threat actors free to use AI and ML to create attacks that we cannot defend against.

Instead, as defenders, we must not blindly trust the tools and the data we have. Becoming more knowledgeable in how the algorithms work and routinely checking the data for anomalies will help us keep ahead of attacks.
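One such routine check can be sketched in a few lines: flag training samples that sit unusually far from the rest. This is a crude z-score screen on invented numbers; real pipelines would use per-class, multivariate statistics:

```python
import numpy as np

# Hypothetical feature values for one class; the last point was injected.
X = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 6.0])

# Z-score screen: flag anything more than 2 standard deviations from the mean.
z = np.abs(X - X.mean()) / X.std()
suspects = np.where(z > 2.0)[0]
print(suspects)  # prints [5]: only the injected point is flagged
```

A screen like this catches only obvious outliers; slow, subtle poisoning needs deeper auditing, which is why understanding the model itself matters.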
