In late February 2017, nearly two dozen leading researchers gathered in centuries-old Oxford, England, to warn of the most modern of hazards: malicious use of AI.

Among the red flags they raised was an attack called adversarial machine learning. In this scenario, AI systems’ neural networks are tricked by intentionally modified external data. An attacker ever so slightly distorts these inputs for the sole purpose of causing AI to misclassify them. An adversarial image of a spoon, for instance, is exactly that — a spoon — to human eyes. To AI, there is no spoon.
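To make the idea concrete, here is a minimal, self-contained sketch of the perturbation trick on a toy linear classifier. Everything in it (the weights, the input, the epsilon) is invented for illustration; it only demonstrates the fast-gradient-sign idea, not any particular production model.

```python
import numpy as np

# A toy linear classifier: score = w . x + b, predict class 1 if score > 0.
# The weights are illustrative, not taken from any real model.
w = np.array([0.4, -0.3, 0.2, 0.1])
b = 0.05

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.1, 0.2, 0.3, 0.4])  # a "clean" input the model labels 1

# FGSM-style perturbation: nudge every feature by at most eps in the
# direction that lowers the class-1 score. For a linear model, the
# gradient of the score with respect to x is simply w.
eps = 0.15
x_adv = x - eps * np.sign(w)

print(predict(x))                  # 1: the original prediction
print(predict(x_adv))              # 0: the same model now misclassifies
print(np.max(np.abs(x_adv - x)))   # no feature moved by more than eps
```

The defining property is visible in the last line: the change to each feature is bounded by a small epsilon, so the adversarial input looks essentially identical to the original, yet the model's decision flips.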

All Adversarial AI Methods in One Toolbox

Matrix memes notwithstanding, the potential for this kind of hoax to inflict real damage on real people is both deadly serious and extremely pressing. To see how, just replace that adversarial spoon image with one of an adversarial stop sign. In response, researchers throughout the world have scrambled into white-hat mode, building defenses and creating pre-emptive adversarial attack models to probe AI vulnerabilities.

But it’s been a scattered and ad hoc campaign, which is why IBM’s Dublin labs developed an open source adversarial AI library to fill the void: the IBM Adversarial Robustness Toolbox (ART). When I spoke with Irina Nicolae, one of the researchers working on the problem, she told me, “ART is designed to have all methods existing in literature in the same place.” You can read much more about how the technology works in a recent blog post Irina shared.

Building Blocks for Maximum AI Robustness

Of course, cybercriminals are not quite advanced enough to hack the car idling in your driveway with adversarial machine learning just yet. But with more decisions based on prediction algorithms — and fewer humans making top-line judgment calls — learning to defend against such scenarios is quickly becoming critical for security professionals and developers.

So far, most libraries that have attempted to test or harden AI systems have offered only collections of attacks. While useful, these still leave developers and researchers to apply the appropriate defenses on their own to actually improve their systems. With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the system’s defenses, ART provides benchmarks showing the increase or decrease in its effectiveness.
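The attack-then-benchmark workflow described above can be sketched in a few lines. This is a pure NumPy toy, not ART's actual API: the model, data, attack (an FGSM-style step) and defense (feature squeezing, i.e. rounding inputs back to a coarse grid) are all illustrative assumptions, chosen only to show how accuracy is measured clean, under attack, and with a defense applied.

```python
import numpy as np

# Toy "trained" linear model (illustrative weights, not from a real system).
w = np.array([0.5, 0.5, -0.5, -0.5])

def predict(X):
    return (X @ w > 0).astype(int)

# Four binary inputs; labels are the model's own clean predictions,
# so clean accuracy is 100% by construction.
X = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)
y = predict(X)

# Attack: an FGSM-style step of size eps against each sample's own label.
eps = 0.3
direction = np.where(y[:, None] == 1, -1.0, 1.0)  # push scores the wrong way
X_adv = X + eps * direction * np.sign(w)

# Defense: feature squeezing, rounding inputs back to the binary grid.
X_def = np.round(X_adv)

def accuracy(X_in):
    return float(np.mean(predict(X_in) == y))

print(accuracy(X))      # accuracy on clean inputs
print(accuracy(X_adv))  # accuracy under attack
print(accuracy(X_def))  # accuracy with the defense applied
```

The three accuracy numbers are exactly the kind of before-and-after benchmark the toolbox is meant to provide: the attack degrades the model, and the proposed defense can be judged by how much of that accuracy it recovers.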

One of the biggest challenges with some existing models designed to defend against adversarial AI is that they are very platform-specific. The IBM team designed its Adversarial Robustness Toolbox to be platform-agnostic: whether you develop in Keras or TensorFlow, you can apply the same library to build your defenses.

Enabling Collaborative Defense With Open Source Tools

As with any new technology, the right course of action is to explore its strengths and weaknesses to improve the benefits to society while maximizing privacy and security. IBM believes in developing technology in mature, responsible and trustworthy ways. We outlined these principles last year and are living up to them in 2018 with the open sourcing of ART.

IBM developers recently released open source AI training and assembly tools such as Fabric for Deep Learning, Model Asset eXchange (MAX), and the Center of Open-Source Data and AI Technologies (CODAIT). Recognizing that collaborative defense is the only way for security teams and developers to get ahead of the adversarial AI threat, IBM also announced today that ART is freely available on GitHub. If you’re an AI developer, researcher or are otherwise interested in adversarial AI, we welcome you to check out the IBM ART.

Visit the IBM Adversarial Robustness Toolbox
