April 24, 2018 By Kelly Ryver 3 min read

Humanity has been fascinated with artificial intelligence (AI) for the better part of a century — from Aldous Huxley’s “Brave New World” and Gene Roddenberry’s “Star Trek” to the “Matrix” trilogy and the most recent season of “The X-Files.”

AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. Reuters uses AI to review social media posts, news stories and readers’ habits to generate opinion editorials. The city of Atlanta is installing smart traffic lights based on AI algorithms to help alleviate traffic congestion. AI is also being used to control street lights, automate certain elements of office buildings, automate customer service chat features and perform concierge services at hotels and offices around the world.

Japan has been experimenting with AI and robots since the late 1960s, due largely to the country’s exceptionally long life expectancy and low birth rates. In fact, Japanese researchers have been working on artificially intelligent robots sophisticated enough to mimic human thought and behavior so closely that they can serve as companions and assistants for the elderly and infirm. These machines are also being put to work in a wide variety of industries.

What else could AI be used for? It could, in theory, learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers. At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. This approach to building better malware detection engines is essentially game theory in action: a two-player game between machines.
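To make that idea concrete, below is a heavily simplified sketch of the bot-versus-bot loop: an agent repeatedly mutates a “malicious” sample and keeps any mutation that lowers a black-box detector’s confidence. The detector, the synthetic features and the random-mutation strategy are all illustrative stand-ins, not the presenter’s actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in "detector": a classifier trained on synthetic feature vectors.
# In a real attack the mutations would be functionality-preserving file
# edits against a static-analysis model; everything here is hypothetical.
X = rng.normal(size=(2000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # 1 = "malicious"
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def probe_for_blind_spot(sample, n_rounds=500, step=0.25):
    """Black-box evasion loop: randomly mutate the sample and keep any
    change that lowers the detector's 'malicious' score."""
    best = sample.copy()
    best_score = detector.predict_proba([best])[0, 1]
    for _ in range(n_rounds):
        candidate = best + rng.normal(scale=step, size=best.shape)
        score = detector.predict_proba([candidate])[0, 1]
        if score < best_score:   # mutation helped: keep it
            best, best_score = candidate, score
        if best_score < 0.5:     # detector now calls the sample "benign"
            break
    return best, best_score

malicious = X[y == 1][0]
evaded, score = probe_for_blind_spot(malicious)
print(f"malicious score after probing: {score:.2f}")
```

In the real demonstration, a learning agent chose the mutations rather than drawing them at random, but the probing loop itself — mutate, score, keep what works — is the core of the two-player game.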

The Current State of AI: All Science, No Philosophy

There are dozens of examples of beneficial uses for AI in the world today, and possibly hundreds more to come. But what happens when AI demonstrates capabilities that it is not supposed to have?

A major technology company proved through a chatbot experiment that AI could be taught to recognize human speech patterns on a social media site and, after learning the natural language patterns, develop patterns of its own. This was a forward-thinking and quite exciting experiment, but it proved that humans are extremely poor teachers and that the social world is an even poorer classroom. It was not long before the bot began picking up speech patterns based on the sentiments that were most prevalent on social media: hatred, envy, jealousy, rage and so forth. The experiment was ultimately canceled.

The unexpected result of this novel experiment is something everyone working in the field of AI and machine learning should have paid attention to — because it will happen again, albeit in a different fashion. AI and machine learning are so new that very few people truly understand the technology, and even fewer understand it well enough to work with it every day or employ it to find creative solutions to difficult problems.

As both AI and machine learning grow in popularity, the number of AI-based products is growing in parallel. Today, AI is used as a marketing tactic everywhere. It is integrated into everything, leveraged to gain competitive advantage over imaginary competitors, mentioned in every ad for commercial off-the-shelf (COTS) security products, sold as a magic bullet for every defensive security problem and employed with impunity without an ounce of philosophical oversight.

AI Versus AI: A Malware Arms Race

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, much less a team of humans accustomed to being reactive with security. AI-based malware could wreak havoc on an unprecedented scale in many different areas and sectors, including:

  • National power grids and modernized industrial control systems;
  • Aerospace and defense;
  • Nuclear programs, particularly those that utilize war game scenarios;
  • Satellite and telecommunications networks;
  • Commodities, foreign exchanges and futures trading; and
  • Artificial neural networks, especially when used within large constructs such as the Internet of Things (IoT) and autonomous vehicles.

If companies around the world are interested in employing AI and machine learning to solve difficult problems, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines, which some imaginative companies are already beginning to do.

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
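For readers who want to experiment before contributing, the snippet below is a minimal sketch of the toolbox in use: wrapping an ordinary scikit-learn model and attacking it with the Fast Gradient Method. It follows ART’s documented pattern, but module paths and parameter names have shifted between releases, so treat it as a starting point rather than version-exact code.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary model on a toy dataset.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can compute the gradients the attack needs.
classifier = SklearnClassifier(model=model)

# Craft adversarial inputs with the Fast Gradient Method (FGSM);
# eps bounds how far each feature may be perturbed.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print(f"accuracy on clean inputs:       {model.score(X, y):.2f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y):.2f}")
```

The exact accuracy drop will vary, but the gap between the two figures is the point: a model that looks robust on clean data can fail badly against inputs crafted by an adversary, which is precisely the kind of testing the toolbox is built to support.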
