April 24, 2018 By Kelly Ryver 3 min read

Humanity has been fascinated with artificial intelligence (AI) for the better part of a century — from Aldous Huxley’s “Brave New World” and Gene Roddenberry’s “Star Trek” to the “Matrix” trilogy and the most recent season of “The X-Files.”

AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. Reuters uses AI to review social media posts, news stories and readers’ habits to generate opinion editorials. The city of Atlanta is installing smart traffic lights based on AI algorithms to help alleviate traffic congestion. AI is also being used to control street lights, automate certain elements of office buildings, automate customer service chat features and perform concierge services at hotels and offices around the world.

Japan has been experimenting with AI and robots since the late 1960s, due largely to the country’s exceptionally long life expectancy and low birth rates. In fact, Japanese researchers have been working on artificially intelligent robots that are so sophisticated and mimic human thought and behavior so closely that they can serve as companions and assistants for the elderly and infirm. These machines are also being put to work in a wide variety of industries.

What else could AI be used for? It could, in theory, learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers. At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. This approach to building better malware detection engines is essentially applied game theory: a two-player game between machines.
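The bot-vs-bot idea can be sketched as a black-box evasion loop: an attacker agent repeatedly queries a detector, mutates its sample and keeps any change that moves it toward evasion. The toy detector, the bit-vector features and the mutation rule below are invented purely for illustration and are far simpler than anything shown in the actual Black Hat demo.

```python
import random

# Toy stand-in for a malware detector: flags samples whose count of
# "suspicious" features meets a fixed threshold. (Hypothetical scoring
# rule, for illustration only.)
def detector(features):
    return sum(features) >= 3  # True = flagged as malicious

# Attacker agent: treats the detector as a black box, mutating one
# feature at a time and keeping mutations that lower the score.
def evade(sample, mutate, max_queries=100):
    random.seed(0)  # deterministic for the example
    current = list(sample)
    for _ in range(max_queries):
        if not detector(current):
            return current  # found an evading variant
        candidate = list(current)
        idx = random.randrange(len(candidate))
        candidate[idx] = mutate(candidate[idx])
        if sum(candidate) <= sum(current):  # greedy: never move backward
            current = candidate
    return None  # gave up within the query budget

# A sample with five suspicious features set; the mutation disables one.
original = [1, 1, 1, 1, 1]
variant = evade(original, mutate=lambda bit: 0)
print(detector(original))  # True: the original is caught
print(detector(variant))   # False: the mutated variant slips past
```

The same loop, run in the other direction, is what makes the detector better: each evading variant the attacker finds becomes a new training example, which is exactly the two-player game the briefing described.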

The Current State of AI: All Science, No Philosophy

There are dozens of examples of beneficial uses for AI in the world today, and possibly hundreds more to come. But what happens when AI demonstrates capabilities that it is not supposed to have?

A major technology company proved through a chatbot experiment that AI could be taught to recognize human speech patterns on a social media site and, after learning the natural language patterns, develop patterns of its own. This was a forward-thinking and quite exciting experiment, but it proved that humans are extremely poor teachers and that the social world is an even poorer classroom. It was not long before the bot began picking up speech patterns based on the sentiments that were most prevalent on social media: hatred, envy, jealousy, rage and so forth. The experiment was ultimately canceled.

The unexpected result of this novel experiment is something everyone working in the field of AI and machine learning should have paid attention to — because it will happen again, albeit in a different fashion. AI and machine learning are so new that very few people truly understand the technology, and even fewer understand it well enough to work with it every day or employ it to find creative solutions to difficult problems.

As both AI and machine learning grow in popularity, the number of AI-based products is growing in parallel. Today, AI is used as a marketing tactic everywhere. It is integrated into everything, leveraged to gain competitive advantage over imaginary competitors, mentioned on every ad for commercially available off-the-shelf (COTS) security products, sold as a magic bullet for every defensive security problem and employed with impunity without an ounce of philosophical oversight.

AI Versus AI: A Malware Arms Race

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, much less a team of humans accustomed to being reactive with security. AI-based malware could wreak havoc on an unprecedented scale in many different areas and sectors, including:

  • National power grids and modernized industrial control systems;
  • Aerospace and defense;
  • Nuclear programs, particularly those that utilize war game scenarios;
  • Satellite and telecommunications networks;
  • Commodities, foreign exchanges and futures trading; and
  • Artificial neural networks, especially when used within large constructs such as the Internet of Things (IoT) and autonomous vehicles.

If companies around the world are interested in employing AI and machine learning to solve difficult problems, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines, which some imaginative companies are already beginning to do.

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
