January 20, 2016 By Douglas Bonderud 4 min read

It’s the ultimate nightmare scenario: Highly intelligent robots shake off the yoke of their human masters and rebel — often violently and with the aim of eradicating or enslaving humanity. But artificial intelligence (AI) solutions have been in development for years, and the worry of a “Frankenstein’s monster” scenario remains virtually nonexistent.

The lack of murderous intent, however, doesn’t remove the risk of IT security issues born from emerging AI technologies. In fact, both intelligent machines and their software come with significant risk to line-of-business (LOB) aims. In other words, it’s time to meet Frankenstein’s children.

Smart Choices?

AI interest is quickly ramping up as both physical and virtual technologies make it possible for robots to better mimic human action and provide seemingly normal responses. According to TODAY, for example, Facebook founder Mark Zuckerberg is developing a simple AI to help run his home and assist with his work — he likens it to JARVIS, Tony Stark’s intelligent robot butler in the “Iron Man” films.

At Nanyang Technological University in Singapore, researchers have developed an AI receptionist that looks fully human at first glance. It can perform social interactions such as shaking hands, making eye contact and responding to simple queries.

Back in the U.S., automaker Ford is tackling the challenge of intelligent, self-driving cars. According to Wired, the company is on track to solve one major problem with AI vehicles: bad weather. When typical lane markers and street signs are obscured, most smart cars are reduced to hurtling steel idiots. Ford is developing a set of high-fidelity maps that let the car use any visible markers to determine its exact position on the road, in turn freeing up more active processing to detect other cars or pedestrians in motion.
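The underlying idea is straightforward map matching: if the car can spot a few landmarks whose positions are already recorded in a high-fidelity map, it can back out its own position even when the lane markings are invisible. Here’s a minimal sketch of that approach (this is not Ford’s actual system; the landmark names and coordinates are invented for illustration):

```python
# Hypothetical landmark-based localization sketch -- not Ford's actual system.
# Each observation pairs a known map landmark with the offset the car's
# sensors measured from the vehicle to that landmark.

KNOWN_LANDMARKS = {            # map coordinates in meters, assumed pre-surveyed
    "stop_sign_17": (120.0, 45.0),
    "overpass_pillar_3": (150.0, 52.0),
    "guardrail_end_9": (135.0, 38.0),
}

def estimate_position(observations):
    """Estimate vehicle (x, y) by averaging landmark_position - sensed_offset.

    observations: list of (landmark_id, (dx, dy)), where (dx, dy) is the
    offset from the vehicle to the landmark as measured by onboard sensors.
    """
    estimates = []
    for landmark_id, (dx, dy) in observations:
        lx, ly = KNOWN_LANDMARKS[landmark_id]
        # If the landmark sits (dx, dy) away from the car, the car must sit
        # at the landmark's mapped position minus that offset.
        estimates.append((lx - dx, ly - dy))
    n = len(estimates)
    return (sum(x for x, _ in estimates) / n, sum(y for _, y in estimates) / n)

# Even with snow hiding the lane markers, three visible landmarks
# pin down the car's position:
sensed = [
    ("stop_sign_17", (20.0, 5.0)),
    ("overpass_pillar_3", (50.0, 12.0)),
    ("guardrail_end_9", (35.0, -2.0)),
]
print(estimate_position(sensed))  # -> (100.0, 40.0)
```

Offloading position-finding to pre-built maps in this way is what frees the car’s active processing for moving hazards like other vehicles and pedestrians.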

A team at Virginia Tech, meanwhile, has been hard at work on something more abstract: humor. The VT scientists developed a machine-learning algorithm able to recognize funny images by analyzing specific parts of the scene. It then attempts to make the images unfunny — a goal it achieved 95 percent of the time. Performance was worse the other way around, with non-funny images made humorous with only 28 percent success. Still, it’s a big step forward in unlocking emotional intelligence, which could greatly enhance the ability of AI to relate with human beings.
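Under the hood, that kind of system is a supervised classifier: label a set of images funny or not funny, extract features describing parts of the scene and fit a model. A minimal sketch on synthetic feature vectors (the features and data here are invented for illustration; the VT work used far richer scene descriptors):

```python
# Toy humor-classifier sketch with invented features -- the actual VT
# research used rich scene-level descriptors, not these synthetic vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-image features, e.g. [object_oddness, pose_oddness, context_mismatch]
funny = rng.normal(loc=0.8, scale=0.2, size=(50, 3))      # "funny" scenes
not_funny = rng.normal(loc=0.2, scale=0.2, size=(50, 3))  # ordinary scenes

X = np.vstack([funny, not_funny])
y = np.array([1] * 50 + [0] * 50)  # 1 = funny, 0 = not funny

model = LogisticRegression().fit(X, y)

# Score a new image's feature vector for "funniness"
candidate = np.array([[0.7, 0.9, 0.6]])
print(model.predict_proba(candidate)[0, 1])  # probability the scene is funny
```

The “make it unfunny” task then becomes an optimization over the same features: change the parts of the scene until the model’s funniness score drops.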

There’s also the work of Japanese scientists at the Kyushu Institute of Technology aimed at predicting human speech by analyzing brainwaves. So far, the technology has a 25 percent success rate predicting whole words but scores 90 percent when focused on specific syllables or characters.

AI is quickly becoming big business. But are these efforts good news for today’s businesses?

Breaking the Law

Why do human beings fear the rapidly rising intelligence of humanoid robots? It goes like this: When AI machines get smart enough, they’ll start ignoring human commands in favor of their own judgment — and they’ll be especially resentful since humankind has been using them as servants for so long.

Over 70 years ago, science fiction writer Isaac Asimov came up with a solution that remains top of mind for many AI developers: the Three Laws of Robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

Researchers are already working on robots that conform to this behavioral structure. As noted by the International Business Times, a team at Tufts University in Massachusetts is training robots to refuse human commands if those commands cause harm to either the robot itself or a human being. It’s possible for humans to override the AI’s ethical system, but only if they are known and trusted by the robot — otherwise, the command is ignored.
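The Tufts coverage doesn’t publish code, but the behavior it describes maps naturally onto a simple gate in the robot’s command pipeline: evaluate each command against the Three Laws, refuse anything predicted to cause harm and allow an override only from an operator on a trust list. A hedged sketch of that logic (all names and checks here are illustrative, not the Tufts implementation):

```python
# Illustrative command gate in the spirit of Asimov's Three Laws -- not the
# actual Tufts system; operator names and harm predictors are invented.

TRUSTED_OPERATORS = {"alice", "supervisor_bot_team"}

def harms_human(command):
    """Placeholder harm predictor; a real robot would run a physics/ethics model."""
    return command.get("predicted_human_harm", False)

def harms_robot(command):
    """Placeholder self-preservation check (Third Law)."""
    return command.get("predicted_self_harm", False)

def should_execute(command, operator, override=False):
    # First Law: in this sketch, harm to a human can never be overridden,
    # regardless of who issues the command.
    if harms_human(command):
        return False
    # Third Law: refuse self-damaging commands unless a known, trusted
    # operator explicitly overrides (the Second Law takes precedence).
    if harms_robot(command) and not (override and operator in TRUSTED_OPERATORS):
        return False
    # Second Law: otherwise, obey.
    return True

# An unknown operator cannot push the robot off the table edge...
print(should_execute({"predicted_self_harm": True}, "stranger", override=True))  # False
# ...but a trusted one can override the self-preservation check.
print(should_execute({"predicted_self_harm": True}, "alice", override=True))     # True
```

Making the human-harm check non-overridable is a design choice in this sketch; it mirrors the priority ordering of the laws listed above.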

While emerging AI constraints may not be a perfect replica of Asimov’s work, developers have already taken the first steps toward a kind of robot ethical code, which should prevent the Frankenstein disaster dystopian writers and technophobes seem to love so much.

Unique Properties of Artificial Intelligence

Just because the robot rebellion is canceled, however, doesn’t mean the advancement of AI is free of security risks. In fact, the increasing use of artificial systems in business poses two key challenges: honest mistakes and deliberate sabotage. Think of it like this: While humans tend to imagine that AI is a replication of humanity, right down to speech and mannerisms, most in-use and developing artificial intelligence looks and acts nothing like flesh-and-blood workers. Instead, robots are designed to complete single tasks or evaluate specific sets of data rather than supplant humans as plant workers or critical thinkers.

The problem? These robots aren’t intelligent in the traditional sense. Consider the death of a plant worker in Germany who was crushed by a robot arm designed to install car parts. The machine’s single-task programming made it smart in one area but woefully inept in others. Here, the big risk is that even sophisticated AI systems placed in charge of critical LOB functions could make extremely poor decisions because they lack the human ability to think outside the box or react to unknown variables.

Software is the sibling of Frankenstein’s less intelligent child, and it carries a different danger. Why? Because it’s hard to hack a human. AI machines, meanwhile, come with two distinct risk factors: their software code and their connection to the Internet at large. This makes it possible for determined actors to either hack the robot directly or — more worrisome — execute a remote attack. If coded ethics or behavioral programming is altered, robots might appear to function normally until a specific scenario emerges and lurking malware takes over.
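One basic defense against that kind of tampering is integrity verification: before the robot loads its behavioral or ethics module, it checks the code against a known-good cryptographic hash and refuses to run if anything has changed. A minimal sketch (the file name and local hash store are assumptions for illustration; production systems would rely on signed firmware and secure boot):

```python
# Minimal integrity check for a behavioral module -- illustrative only.
# Real deployments would use signed firmware/secure boot, not a hardcoded hash.
import hashlib
from pathlib import Path

KNOWN_GOOD_SHA256 = (
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)  # hash recorded at deployment time (this stand-in value is the empty-file hash)

def module_is_intact(path):
    """Return True only if the module's bytes match the recorded hash."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == KNOWN_GOOD_SHA256

def load_behavior_module(path):
    if not module_is_intact(path):
        raise RuntimeError(f"Refusing to load {path}: behavioral code was altered")
    # ...import and activate the module only after the check passes...

# load_behavior_module("ethics_module.py")  # hypothetical module path
```

A check like this won’t stop every attacker, but it turns the “silently altered ethics code” scenario into a loud failure instead of a lurking one.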

So what’s the bottom line for artificial intelligence? Development won’t precipitate the panic-inducing nightmares of fantastic robot stories; Frankenstein’s monster never stood a chance. His children, however, may be of more concern. AI that’s tasked with too much, too soon or compromised by malicious actors could mean big problems for corporations. Best bet? Think of AI like IoT: A bigger attack surface means bigger problems and demands higher priority on the IT security list.
