April 17, 2018 By Sridhar Muppidi 3 min read

In late February 2017, nearly two dozen leading researchers gathered in centuries-old Oxford, England, to warn of the most modern of hazards: malicious use of AI.

Among the red flags they raised was an attack called adversarial machine learning. In this scenario, AI systems’ neural networks are tricked by intentionally modified external data. An attacker ever so slightly distorts these inputs for the sole purpose of causing AI to misclassify them. An adversarial image of a spoon, for instance, is exactly that — a spoon — to human eyes. To AI, there is no spoon.
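The distortion is typically computed from the model itself: an attacker nudges each input feature in the direction that most increases the model's error, keeping every change small enough to be invisible. A minimal sketch of this idea on a toy linear classifier (the model, weights and numbers here are purely illustrative, not from any real system or from ART):

```python
import numpy as np

# Toy linear classifier: sign(w . x) decides "spoon" (+1) vs "not spoon" (-1).
w = np.array([0.5, -0.3, 0.8])

def predict(x):
    return 1 if np.dot(w, x) > 0 else -1

x = np.array([1.0, 0.2, 0.4])  # a clean input, confidently classified +1
eps = 0.5                      # perturbation budget: max change per feature

# Fast-gradient-style step: for a linear score, the gradient with respect
# to the input is just w, so shifting each feature against sign(w)
# lowers the score as fast as possible within the budget.
x_adv = x - eps * np.sign(w)

print(predict(x))                 # 1  (clean input: "spoon")
print(predict(x_adv))             # -1 (adversarial input: "not spoon")
print(np.max(np.abs(x_adv - x)))  # 0.5, every feature moved only slightly
```

The same bounded-perturbation principle, scaled up to deep networks and images, is what makes an adversarial stop sign look unchanged to a human while fooling the classifier.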

All Adversarial AI Methods in One Toolbox

Matrix memes notwithstanding, the potential for this kind of hoax to inflict real damage on real people is both deadly serious and extremely pressing. To see how, just replace that adversarial spoon image with one of an adversarial stop sign. In response, researchers throughout the world have scrambled into white-hat mode, building defenses and creating pre-emptive adversarial attack models to probe AI vulnerabilities.

But it’s been a scattered and ad hoc campaign, which is why IBM’s Dublin labs developed an open source adversarial AI library to fill the void: the IBM Adversarial Robustness Toolbox (ART). When I spoke with Irina Nicolae, one of the researchers working on the problem, she told me, “ART is designed to have all methods existing in literature in the same place.” You can read much more about how the technology works in a recent blog post Irina shared.

Building Blocks for Maximum AI Robustness

Of course, cybercriminals are not quite advanced enough to hack the car idling in your driveway with adversarial machine learning just yet. But as more decisions are based on prediction algorithms, with fewer humans making top-line judgment calls, learning to defend against scenarios like these is quickly becoming critical for security professionals and developers.

So far, most libraries that attempt to test or harden AI systems have offered only collections of attacks. Those are useful, but developers and researchers still need to apply the appropriate defenses to actually improve their systems. With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the system's defenses, ART provides benchmarks for the increase or decrease in effectiveness.
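That attack-then-defend-then-benchmark loop can be sketched end to end on a toy model. This is a conceptual illustration only, not ART's API: the dataset, the rounding-based "feature squeezing" defense and all numbers are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict class 1 when x1 + x2 > 1.5.
w, b = np.array([1.0, 1.0]), -1.5

def predict(X):
    return (X @ w + b > 0).astype(int)

def accuracy(X, y):
    return float((predict(X) == y).mean())

# Tiny dataset: class 1 clustered near (1, 1), class 0 near (0, 0).
X = np.vstack([1.0 + rng.uniform(-0.05, 0.05, (50, 2)),
               0.0 + rng.uniform(-0.05, 0.05, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

# Step 1, attack: fast-gradient-style step against each example (the input
# gradient of a linear score is just w; push class 1 down, class 0 up).
eps = 0.4
direction = np.where(y[:, None] == 1, -1.0, 1.0)
X_adv = X + eps * direction * np.sign(w)

# Step 2, defense: "feature squeezing" by rounding inputs to the nearest
# integer, wiping out small perturbations before classification.
X_def = np.round(X_adv)

# Step 3, benchmark: measure robustness before and after the defense.
print("clean accuracy:", accuracy(X, y))      # 1.0
print("adversarial:   ", accuracy(X_adv, y))  # 0.5, class 1 is flipped
print("with defense:  ", accuracy(X_def, y))  # 1.0
```

The final benchmark numbers are the point: quantifying how much a proposed defense recovers is what lets a security team compare defenses as building blocks rather than guess.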

One of the biggest challenges with some existing models designed to defend against adversarial AI is that they are very platform-specific. The IBM team designed its Adversarial Robustness Toolbox to be platform-agnostic: whether you're developing in Keras or TensorFlow, you can apply the same library to build defenses.

Enabling Collaborative Defense With Open Source Tools

As with any new technology, the right course of action is to explore the strengths and weaknesses to improve the benefits to society while maximizing privacy and security. IBM believes in developing technology in mature, responsible and trustworthy ways. We outlined these principles last year and are living up to them in 2018 with the open sourcing of ART.

IBM developers recently released open source AI training and assembly tools such as Fabric for Deep Learning, Model Asset eXchange (MAX), and the Center of Open-Source Data and AI Technologies (CODAIT). Recognizing that collaborative defense is the only way for security teams and developers to get ahead of the adversarial AI threat, IBM also announced today that the ART is freely available on GitHub. If you’re an AI developer, researcher or are otherwise interested in adversarial AI, we welcome you to check out the IBM ART.

Visit the IBM Adversarial Robustness Toolbox
