Humans versus machines: Who’s the better hacker? The advent of artificial intelligence (AI) has brought with it a new class of attacks built on adversarial AI, and the early evidence suggests the answer is likely machine.
With each innovation in technology comes the reality that attackers who study security tools will find ways to exploit them. AI can make a phone number look like it’s coming from your home area code, and it can slip malicious inputs past your defenses like a machine learning Trojan horse.
How can organizations fight an unknown enemy that’s not even human?
Humans vs. Machines: The Problem for Security
When cybersecurity company ZeroFOX asked whether humans or machines were better hackers back in 2016, it took to Twitter with an automated, end-to-end spear phishing experiment. The results? According to the experiment, machines are much more effective at getting humans to click on malicious links.
Many AI models are built on deep neural networks (DNNs), layered structures loosely inspired by the neurons in the human brain. DNNs make machines capable of mimicking human behaviors such as decision-making, reasoning and problem-solving.
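To make that layered structure concrete, here is a minimal sketch of a DNN in PyTorch. The framework, the image size and the class count are illustrative assumptions, not details from the article; the point is simply that a DNN is a stack of layers mapping an input, such as an image, to a classification decision.

```python
import torch.nn as nn

# A minimal deep neural network: stacked layers that map an image
# (28x28 pixels, flattened) to scores for 10 possible classes.
model = nn.Sequential(
    nn.Flatten(),             # image -> vector of 784 pixel values
    nn.Linear(28 * 28, 128),  # first layer of "neurons"
    nn.ReLU(),                # nonlinearity, loosely analogous to neuron firing
    nn.Linear(128, 10),       # output: one score per class (cup, stop sign, cat...)
)
```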
When researchers and developers train a model on images, they want it to recognize an object, such as a cup, stop sign or cat. Using machine learning, they can also generate data that mimics real data, with each iteration bringing the output closer to the real object. Now, imagine those pictures in medical imaging: AI offers massive benefits when it comes to analyzing such images.
So, what’s the problem for security? “Adversarial examples are inputs (say, images) which have deliberately been modified to produce a desired response by a DNN,” according to IBM Research – Ireland.
The differences between the real and the fabricated are too small for the human eye to catch. A trained DNN, however, picks up on those differences and classifies the image as something altogether different, which is exactly what the attacker wants.
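For illustration, here is a sketch of one common way such adversarial examples are crafted: the fast gradient sign method (FGSM), shown in PyTorch. The function name and the `eps` budget are hypothetical choices for this example, not anything from the article; the idea is that each pixel is nudged by an imperceptibly small amount in whichever direction most increases the model's error.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, eps=0.03):
    """Craft an adversarial example: shift each pixel by at most eps
    in the direction that increases the model's loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)   # how wrong the model is on the true label y
    loss.backward()               # gradient of the loss w.r.t. the pixels
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```

A small `eps` keeps the change invisible to a human while still flipping the DNN's prediction, which is precisely the gap attackers exploit.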
An Adversarial AI Arms Race
As the amount of data increases, nefarious actors will become more efficient at deploying new types of attacks by leveraging adversarial AI. This tactic will make attack attribution even more challenging.
“Adversaries will increase their use of machine learning to create attacks, experiment with combinations of machine learning and AI and expand their efforts to discover and disrupt the machine learning models used by defenders,” according to a 2018 cybercrime report. Enterprises must essentially prepare for an adversarial arms race.
Attacks will also become more affordable, according to the report, an added bonus for attackers. An AI system can perform functions that would be virtually impossible for humans alone, given the brainpower and technical expertise required to operate at scale.
Rage Against the Machine
What’s different about adversarial AI attacks? They can carry out the same malicious offensives with far greater speed and depth. While AI is not yet a fully accessible tool for cybercriminals, its weaponization is quickly becoming more widespread. These threats can multiply the variations of an attack, vector or payload and increase the volume of attacks. But beyond speed and scale, the attacks are fundamentally similar to current threat tactics.
So, how can organizations defend themselves? IBM recently released the Adversarial Robustness Toolbox to help defend DNNs against weaponized AI attacks. It allows researchers and developers to measure the robustness of their DNN models and, in turn, improve their AI systems.
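As a rough sketch of what that measurement can look like, the snippet below uses the toolbox's Python package (`art`, assuming a recent release) to attack a classifier with FGSM and compare accuracy on clean versus adversarial inputs. The toy model and random stand-in data are assumptions made so the example is self-contained; they are not part of the toolbox.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-ins: a small untrained model and random "images"/labels,
# just so the example runs end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
x_test = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=64)

# Wrap the model so the toolbox can attack and evaluate it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate adversarial versions of the test set and compare accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The gap between clean and adversarial accuracy is one simple measure of a model's robustness: the smaller the drop under attack, the more robust the DNN.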
Sharing intelligence information with the cybersecurity community is also important in building strong defenses. The solution to adversarial AI will come from a combination of technology and policy, but all hands must be on deck. The risks threaten all sectors across public and private institutions. Coordinated efforts among key stakeholders will help to build a more secure future.
After all, the union of man and machine has the power to give defenders a leg up.
Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.