April 17, 2018 By Sridhar Muppidi 3 min read

In late February 2017, nearly two dozen leading researchers gathered in centuries-old Oxford, England, to warn of the most modern of hazards: malicious use of AI.

Among the red flags they raised was an attack called adversarial machine learning. In this scenario, AI systems’ neural networks are tricked by intentionally modified external data. An attacker ever so slightly distorts these inputs for the sole purpose of causing AI to misclassify them. An adversarial image of a spoon, for instance, is exactly that — a spoon — to human eyes. To AI, there is no spoon.
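To make the idea concrete, here is a minimal sketch (not from the research itself) of the trick at its simplest: a toy linear classifier standing in for a neural network, attacked with a Fast Gradient Sign Method-style perturbation. The weights and inputs are made-up values chosen only to show the label flip.

```python
import numpy as np

# Toy linear classifier standing in for a neural network: label 1 if w.x + b > 0
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x_clean = np.array([0.9, 0.2, 0.4])
print(predict(x_clean))                # 1: the "spoon" is classified correctly

# FGSM-style perturbation: for a linear model the gradient of the score with
# respect to the input is just w, so nudge every feature a tiny, bounded step
# in the direction that hurts the score most.
eps = 0.3
x_adv = x_clean - eps * np.sign(w)
print(np.abs(x_adv - x_clean).max())   # 0.3: each feature changes only slightly
print(predict(x_adv))                  # 0: a nearly identical input, flipped label
```

The same principle scales up to image classifiers, where a perturbation small enough to be invisible to people can still steer the network to the wrong answer.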

All Adversarial AI Methods in One Toolbox

Matrix memes notwithstanding, the potential for this kind of hoax to inflict real damage on real people is both deadly serious and extremely pressing. To see how, just replace that adversarial spoon image with one of an adversarial stop sign. In response, researchers throughout the world have scrambled into white-hat mode, building defenses and creating pre-emptive adversarial attack models to probe AI vulnerabilities.

But it’s been a scattered and ad hoc campaign, which is why IBM’s Dublin labs developed an open source adversarial AI library, the IBM Adversarial Robustness Toolbox (ART), to fill the void. When I spoke with Irina Nicolae, one of the researchers working on the problem, she told me, “ART is designed to have all methods existing in literature in the same place.” You can read much more about how the technology works in a recent blog post Irina shared.
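As a rough illustration of what “all methods in the same place” looks like in practice, here is a minimal sketch of crafting adversarial examples with ART’s Fast Gradient Sign Method. The module paths, the wrapper class and the tiny Keras model are assumptions drawn from ART’s public documentation and have shifted between releases, so treat them as indicative rather than exact.

```python
# Minimal ART sketch; module paths are assumptions and vary across ART releases.
import numpy as np
from keras.models import Sequential          # or tensorflow.keras, depending on setup
from keras.layers import Dense, Flatten
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod

# Small stand-in model; in practice this would be your trained image classifier.
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(64, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# Wrap the model so every ART attack and defense can query it the same way.
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Placeholder "clean" images; a real run would use MNIST or similar test data.
x_test = np.random.rand(16, 28, 28).astype(np.float32)

# Craft adversarial versions of the clean inputs.
attack = FastGradientMethod(classifier, eps=0.1)
x_adv = attack.generate(x=x_test)
```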

Building Blocks for Maximum AI Robustness

Of course, cybercriminals are not quite advanced enough to hack the car idling in your driveway with adversarial machine learning just yet. But with more decisions based on prediction algorithms, and fewer humans making top-line judgment calls, learning to defend against these attacks is quickly becoming critical for security professionals and developers.

So far, most libraries that have attempted to test or harden AI systems have offered only collections of attacks. Those are useful, but developers and researchers still need to apply the appropriate defenses to actually improve their systems. With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the system’s defenses, ART provides benchmarks for the resulting increase or decrease in effectiveness.
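Continuing the hypothetical sketch above, that attack-benchmark-defend loop might look roughly like this: measure accuracy on clean and adversarial inputs, apply a defense such as adversarial training, then re-run the same benchmark. The AdversarialTrainer class, its module path and its arguments are assumptions taken from ART’s documentation and may differ by release.

```python
# Benchmark-then-defend sketch, continuing the example above.
# AdversarialTrainer's module path and signature are assumptions; check the
# ART docs for the release you are using.
from art.defences.trainer import AdversarialTrainer

def accuracy(clf, x, y):
    """Fraction of samples the wrapped classifier labels correctly."""
    preds = np.argmax(clf.predict(x), axis=1)
    return float(np.mean(preds == np.argmax(y, axis=1)))

# Placeholder one-hot labels; a real run would use the dataset's true labels.
y_test = np.eye(10)[np.random.randint(0, 10, size=len(x_test))]

# Baseline: how much damage does the attack do to the undefended model?
print("clean accuracy:      ", accuracy(classifier, x_test, y_test))
print("adversarial accuracy:", accuracy(classifier, x_adv, y_test))

# Defense: adversarial training mixes attack-crafted samples into training;
# re-running the same benchmark shows whether robustness actually improved.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_test, y_test, nb_epochs=5, batch_size=8)
print("adversarial accuracy after defense:", accuracy(classifier, x_adv, y_test))
```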

One of the biggest challenges with some existing models designed to defend against adversarial AI is that they are very platform-specific. The IBM team designed its Adversarial Robustness Toolbox to be platform-agnostic: whether you’re developing in Keras or TensorFlow, you can use the same library to build your defenses.
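The practical upshot, sketched below with wrapper class names taken from ART’s documentation (and treated here as assumptions), is that attack and defense code can be written once against ART’s classifier interface and reused unchanged whichever framework produced the underlying model.

```python
# Framework-agnostic attack helper: only the wrapper that produced `classifier`
# changes between backends (KerasClassifier, TensorFlowV2Classifier,
# PyTorchClassifier, ...); this function does not.
from art.attacks.evasion import FastGradientMethod

def craft_adversarial(classifier, x, eps=0.1):
    """Generate adversarial examples for any ART-wrapped classifier."""
    attack = FastGradientMethod(classifier, eps=eps)
    return attack.generate(x=x)
```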

Enabling Collaborative Defense With Open Source Tools

As with any new technology, the right course of action is to explore its strengths and weaknesses to maximize the benefits to society while preserving privacy and security. IBM believes in developing technology in mature, responsible and trustworthy ways. We outlined these principles last year and are living up to them in 2018 with the open sourcing of ART.

IBM developers recently released open source AI training and assembly tools such as Fabric for Deep Learning and the Model Asset eXchange (MAX) through the Center for Open-Source Data and AI Technologies (CODAIT). Recognizing that collaborative defense is the only way for security teams and developers to get ahead of the adversarial AI threat, IBM also announced today that ART is freely available on GitHub. If you’re an AI developer, a researcher or otherwise interested in adversarial AI, we welcome you to check out the IBM ART.

Visit the IBM Adversarial Robustness Toolbox
