June 19, 2018 By Kacy Zurkus 3 min read

Humans versus machines: Who's the better hacker? The advent of artificial intelligence (AI) brought with it a new class of attacks built on adversarial AI, and this influx suggests the answer is likely the machines.

With each innovation in technology comes the reality that attackers who study security tools will find ways to exploit them. AI can make a phone number look like it's coming from your home area code, tricking your firewall like a machine learning Trojan horse.

How can organizations fight an unknown enemy that’s not even human?

Humans vs. Machines: The Problem for Security

When cybersecurity company ZeroFOX asked whether humans or machines were better hackers back in 2016, the company took to Twitter with an automated, end-to-end spear phishing attack. The results? According to the experiment, machines are far more effective at getting humans to click on malicious links.

AI models are built with a type of machine learning called deep neural networks (DNNs), which are loosely modeled on the neurons of the human brain. DNNs make machines capable of mimicking human behaviors such as decision-making, reasoning and problem-solving.
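
To make that concrete, here is a minimal sketch of such a network. The article names no framework, so PyTorch is an assumption here, and the model, layer sizes and input shape are purely illustrative.

```python
# A minimal deep neural network: stacked layers of artificial "neurons"
# that map an input (e.g., a flattened 28x28 image) to a decision.
import torch
import torch.nn as nn

class SimpleDNN(nn.Module):
    def __init__(self, in_features: int = 784, num_classes: int = 10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, 128),  # first layer of "neurons"
            nn.ReLU(),                    # nonlinear activation
            nn.Linear(128, 64),           # hidden layer
            nn.ReLU(),
            nn.Linear(64, num_classes),   # one score per class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

model = SimpleDNN()
scores = model(torch.rand(1, 784))  # a random stand-in for an image
print(scores.argmax(dim=1))         # the model's "decision"
```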

When researchers and developers generate an image, they are trying to depict an object, such as a cup, stop sign or cat. Using machine learning, they can produce data that mimics real data, and each iteration of the model brings that image closer to the real object. Now, imagine those pictures in medical imaging: The power of AI offers massive benefits when it comes to analyzing images.

So, what’s the problem for security? “Adversarial examples are [inputs] (say, images) which have deliberately been modified to produce a desired response by a DNN,” according to IBM Research – Ireland.

The differences between the real and the fabricated images are too small for the human eye to catch. A trained DNN, however, does register those differences and can classify the image as something altogether different, which is exactly what the attacker wants.
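
To illustrate the mechanics, one well-known crafting technique (the article names none, so this is purely illustrative) is the fast gradient sign method, which nudges every pixel a tiny step in the direction that increases the model's loss. A hedged sketch in PyTorch:

```python
# Fast gradient sign method (FGSM), one common way adversarial examples
# are crafted: x_adv = x + epsilon * sign(gradient of loss w.r.t. x).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` that looks unchanged to a human but can
    push `model` toward a different classification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

# e.g., with the SimpleDNN sketch above:
# adv = fgsm_perturb(model, torch.rand(1, 784), torch.tensor([3]))
```

The perturbation is bounded by epsilon, which is why the change stays invisible to people while still steering the model's output.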

An Adversarial AI Arms Race

As the amount of data increases, nefarious actors will become more efficient at deploying new types of attacks by leveraging adversarial AI. This tactic will make attack attribution even more challenging.

“Adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and AI and expand their efforts to discover and disrupt the machine learning models used by defenders,” according to a 2018 cybercrime report. Enterprises must essentially prepare for an adversarial arms race.

Attacks will also become more affordable, according to the report, which is an added bonus for attackers. An attacker can use an AI system to perform functions at a scale that would be virtually impossible for humans, given the brainpower and technical expertise required.

Rage Against the Machine

What’s different about adversarial AI attacks? They can mount the same malicious offensives with far greater speed and depth. While AI is not a fully accessible tool for cybercriminals just yet, its weaponization is quickly becoming more widespread. These threats can multiply the variations of an attack, vector or payload and increase the volume of attacks. Outside of speed and scale, however, the attacks are fundamentally quite similar to current threat tactics.

So, how can organizations defend themselves? IBM recently released the Adversarial Robustness Toolbox to help defend DNNs against weaponized AI attacks, allowing researchers and developers to measure the robustness of their DNN models and, in turn, improve their AI systems. A rough sketch of that workflow follows.
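
As a hedged illustration of how the open source toolbox might be used, the sketch below wraps a toy PyTorch model, crafts fast-gradient-sign inputs against it and compares clean versus adversarial accuracy. The module paths reflect one version of the ART Python API and may differ between releases; the model and data are random placeholders, not a real evaluation.

```python
# Measuring robustness with the Adversarial Robustness Toolbox (ART).
import numpy as np
import torch
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# A toy model standing in for a real, trained DNN.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)

# Wrap the model so ART can attack and evaluate it.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(784,),
    nb_classes=10,
)

# Placeholder test data; in practice, use a real evaluation set.
x_test = np.random.rand(100, 784).astype(np.float32)
y_test = np.random.randint(0, 10, size=100)

# Craft adversarial versions of the clean inputs.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

# Robustness: how much accuracy drops on adversarial inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2%}, adversarial accuracy: {adv_acc:.2%}")
```

A large gap between the two numbers signals a brittle model, one that defenses such as adversarial training should harden before deployment.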

Sharing threat intelligence with the cybersecurity community is also important in building strong defenses. The solution to adversarial AI will come from a combination of technology and policy, but all hands must be on deck. The risks threaten all sectors across public and private institutions, and coordinated efforts among key stakeholders will help build a more secure future.

After all, the union of man and machine has the power to give defenders a leg up.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
