April 24, 2018 | By Kelly Ryver | 3 min read

Humanity has been fascinated with artificial intelligence (AI) for the better part of a century — from Aldous Huxley’s “Brave New World” and Gene Roddenberry’s “Star Trek” to the “Matrix” trilogy and the most recent season of “The X-Files.”

AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. Reuters uses AI to review social media posts, news stories and readers’ habits to generate opinion editorials. The city of Atlanta is installing smart traffic lights based on AI algorithms to help alleviate traffic congestion. AI is also being used to control street lights, automate certain elements of office buildings, automate customer service chat features and perform concierge services at hotels and offices around the world.

Japan has been experimenting with AI and robots since the late 1960s, due largely to the country’s exceptionally long life expectancy and low birth rates. In fact, Japanese researchers have been working on artificially intelligent robots that are so sophisticated and mimic human thought and behavior so closely that they can serve as companions and assistants for the elderly and infirm. These machines are also being put to work in a wide variety of industries.

What else could AI be used for? It could, in theory, learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers. At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. This approach is essentially game theory: a two-player game between machines in which every blind spot the attacking agent uncovers can be fed back to build a better detection engine.
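To make that two-player framing concrete, here is a toy sketch, not taken from the briefing itself, in which an “attacker” randomly mutates a synthetic feature vector until a stand-in detector misclassifies it, and the defender then retrains on the evasions it missed. The data, features and detector are all illustrative placeholders.

```python
# Toy attacker-vs-defender loop on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_FEATURES = 32

# Synthetic training data: label 1 = "malicious", 0 = "benign".
X = rng.integers(0, 2, size=(500, N_FEATURES))
y = (X[:, :8].sum(axis=1) > 5).astype(int)  # toy ground-truth rule

detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def attacker_evade(sample, detector, max_mutations=20):
    """Flip random features until the detector scores the sample benign."""
    mutated = sample.copy()
    for _ in range(max_mutations):
        if detector.predict(mutated.reshape(1, -1))[0] == 0:
            return mutated  # evasion found: detector now says "benign"
        mutated[rng.integers(N_FEATURES)] ^= 1  # flip one random feature bit
    return None

# One round of the game: the attacker probes, the defender retrains on evasions.
malicious = X[y == 1]
evasions = [e for s in malicious[:50]
            if (e := attacker_evade(s, detector)) is not None]
if evasions:
    X_new = np.vstack([X, np.array(evasions)])
    y_new = np.concatenate([y, np.ones(len(evasions), dtype=int)])
    detector = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_new, y_new)
    print(f"Retrained on {len(evasions)} evasive samples")
```

Played over many rounds, this loop is exactly the arms race described above: each side’s best move depends on the other’s last one.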

The Current State of AI: All Science, No Philosophy

There are dozens of examples of beneficial uses for AI in the world today, and possibly hundreds more to come. But what happens when AI demonstrates capabilities that it is not supposed to have?

A major technology company proved through a chatbot experiment that AI could be taught to recognize human speech patterns on a social media site and, after learning the natural language patterns, develop patterns of its own. This was a forward-thinking and quite exciting experiment, but it proved that humans are extremely poor teachers and that the social world is an even poorer classroom. It was not long before the bot began picking up speech patterns based on the sentiments that were most prevalent on social media: hatred, envy, jealousy, rage and so forth. The experiment was ultimately canceled.

The unexpected result of this novel experiment is something everyone working in the field of AI and machine learning should have paid attention to — because it will happen again, albeit in a different fashion. AI and machine learning are so new that very few people truly understand the technology, and even fewer understand it well enough to work with it every day or employ it to find creative solutions to difficult problems.

As both AI and machine learning grow in popularity, the number of AI-based products is growing in parallel. Today, AI is used as a marketing tactic everywhere. It is integrated into everything, leveraged to gain competitive advantage over imaginary competitors, mentioned in every ad for commercially available off-the-shelf (COTS) security products, sold as a magic bullet for every defensive security problem and employed with impunity without an ounce of philosophical oversight.

AI Versus AI: A Malware Arms Race

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, much less a team of humans accustomed to being reactive with security. AI-based malware could wreak havoc on an unprecedented scale in many different areas and sectors, including:

  • National power grids and modernized industrial control systems;
  • Aerospace and defense;
  • Nuclear programs, particularly those that utilize war game scenarios;
  • Satellite and telecommunications networks;
  • Commodities, foreign exchanges and futures trading; and
  • Artificial neural networks, especially when used within large constructs such as the Internet of Things (IoT) and autonomous vehicles.

If companies around the world are interested in employing AI and machine learning to solve difficult problems, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines, which some imaginative companies are already beginning to do.
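One simple machine-versus-machine test is a robustness check: perturb each input in the direction that most increases the model’s error and measure how far accuracy falls. The sketch below does this by hand for a linear model on synthetic data; every name, number and feature in it is an illustrative assumption, not a production recipe.

```python
# Hand-rolled, FGSM-style robustness check for a linear model (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16))
y = (X @ rng.normal(size=16) > 0).astype(int)

model = LogisticRegression().fit(X, y)
w = model.coef_.ravel()

def perturb(X, y, eps=0.5):
    """Shift each sample against its true class along the sign of the
    weight vector, the direction that most quickly flips a linear
    model's decision."""
    direction = np.sign(w) * np.where(y == 1, -1.0, 1.0)[:, None]
    return X + eps * direction

print(f"accuracy on clean inputs:    {model.score(X, y):.2f}")
print(f"accuracy under perturbation: {model.score(perturb(X, y), y):.2f}")
```

A real evaluation would use an established attack library and the product’s actual feature space, but even a crude check like this exposes how brittle a purely defensive model can be.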

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
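For readers who want a concrete starting point, the following minimal sketch shows what a first experiment with the toolbox might look like. It assumes ART’s 1.x Python API (class names have changed between versions), and the model and data are placeholders.

```python
# Minimal sketch using IBM's Adversarial Robustness Toolbox (ART).
# Assumes ART's 1.x API; install with `pip install adversarial-robustness-toolbox`.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 16)).astype(np.float32)
y = (X.sum(axis=1) > 0).astype(int)

model = LogisticRegression().fit(X, y)
classifier = SklearnClassifier(model=model)            # wrap the model for ART
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)                           # craft evasive inputs

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```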
