April 24, 2018 By Kelly Ryver 3 min read

Humanity has been fascinated with artificial intelligence (AI) for the better part of a century — from Aldous Huxley’s “Brave New World” and Gene Roddenberry’s “Star Trek” to “The Matrix” trilogy and the most recent season of “The X-Files.”

AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. Reuters uses AI to review social media posts, news stories and readers’ habits to generate opinion editorials. The city of Atlanta is installing smart traffic lights based on AI algorithms to help alleviate traffic congestion. AI is also being used to control street lights, automate certain elements of office buildings, automate customer service chat features and perform concierge services at hotels and offices around the world.

Japan has been experimenting with AI and robots since the late 1960s, due largely to the country’s exceptionally long life expectancy and low birth rates. Japanese researchers have developed artificially intelligent robots that mimic human thought and behavior closely enough to serve as companions and assistants for the elderly and infirm. These machines are also being put to work in a wide variety of industries.

What else could AI be used for? It could, in theory, learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers. At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. Simple as it is, this approach to building better detection engines is essentially game theory: a two-player game between machines, with each side improving in response to the other.
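As a rough illustration of that two-player loop — and not the presenter’s actual code — the sketch below has an attacker bot apply random mutations to a malware feature vector until a simple machine learning detector misclassifies it, after which the defender retrains on the evasive variants. The binary feature representation, the bit-flip “mutations” and the random forest detector are all hypothetical simplifications of real PE-file features and functionality-preserving edits.

```python
# Hypothetical sketch of the "bot vs. bot" evasion game described above.
# Binary feature vectors stand in for real PE-file features; "mutations"
# flip feature bits as a crude proxy for functionality-preserving edits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training set: 200 samples x 30 binary features; label 1 = malware.
X = rng.integers(0, 2, size=(200, 30))
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # contrived "malicious" signal
detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def attacker_evade(sample, model, max_mutations=20):
    """Attacker bot: randomly mutate a sample until the detector says benign."""
    evaded = sample.copy()
    for _ in range(max_mutations):
        if model.predict(evaded.reshape(1, -1))[0] == 0:
            return evaded                          # blind spot found
        evaded[rng.integers(0, evaded.size)] ^= 1  # flip one random feature
    return None

# Round 1: the attacker probes the detector with known-malicious samples.
evasive = []
for sample in X[y == 1]:
    variant = attacker_evade(sample, detector)
    if variant is not None:
        evasive.append(variant)
print(f"Attacker evaded detection with {len(evasive)} variants")

# Round 2: the defender retrains on the evasive variants, closing blind spots.
if evasive:
    X2 = np.vstack([X, np.array(evasive)])
    y2 = np.concatenate([y, np.ones(len(evasive), dtype=int)])
    detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X2, y2)
```

Iterating these two rounds is roughly the game-theoretic loop the briefing described: each side’s best response becomes the other side’s next training signal.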

The Current State of AI: All Science, No Philosophy

There are dozens of examples of beneficial uses for AI in the world today, and possibly hundreds more to come. But what happens when AI demonstrates capabilities that it is not supposed to have?

A major technology company proved through a chatbot experiment that AI could be taught to recognize human speech patterns on a social media site and, after learning those natural language patterns, develop patterns of its own. This was a forward-thinking and quite exciting experiment, but it proved that humans are extremely poor teachers and that the social world is an even poorer classroom. It was not long before the bot began picking up speech patterns based on the sentiments that were most prevalent on social media: hatred, envy, jealousy, rage and so forth. The experiment was ultimately canceled.

The unexpected result of this novel experiment is something everyone working in AI and machine learning should pay attention to, because it will happen again, albeit in a different fashion. AI and machine learning are so new that very few people truly understand the technology, and even fewer understand it well enough to work with it every day or employ it to find creative solutions to difficult problems.

As both AI and machine learning grow in popularity, the number of AI-based products is growing in parallel. Today, AI is used as a marketing tactic everywhere. It is integrated into everything, leveraged to gain competitive advantage over imaginary competitors, mentioned on every ad for commercially available off-the-shelf (COTS) security products, sold as a magic bullet for every defensive security problem and employed with impunity without an ounce of philosophical oversight.

AI Versus AI: A Malware Arms Race

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, let alone a team of humans accustomed to reactive security. AI-based malware could wreak havoc on an unprecedented scale in many different areas and sectors, including:

  • National power grids and modernized industrial control systems;
  • Aerospace and defense;
  • Nuclear programs, particularly those that utilize war game scenarios;
  • Satellite and telecommunications networks;
  • Commodities, foreign exchanges and futures trading; and
  • Artificial neural networks, especially when used within large constructs such as the Internet of Things (IoT) and autonomous vehicles.

If companies around the world are interested in employing AI and machine learning to solve difficult problems, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines, which some imaginative companies are already beginning to do.

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.
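One concrete way to pit a model against another machine, in the spirit of the Adversarial Robustness Toolbox linked below, is to measure how far its accuracy falls under a standard evasion attack. The sketch below is a minimal, illustrative example assuming ART is installed (pip install adversarial-robustness-toolbox); the iris dataset, logistic regression model and eps value are arbitrary stand-ins for a real model and threat model.

```python
# Minimal adversarial robustness check using IBM's open source ART library.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an ordinary model, then wrap it so ART can attack it.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
classifier = SklearnClassifier(model=model,
                               clip_values=(float(X.min()), float(X.max())))

# Craft adversarial inputs with the fast gradient method (eps is illustrative).
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X_test)

print(f"accuracy on clean inputs:       {model.score(X_test, y_test):.2f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y_test):.2f}")
```

A sharp drop from the clean score to the adversarial score is exactly the kind of one-shot failure described above, and a reason to consider adversarial training or input hardening before deployment.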

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
