It’s hard to imagine something more frustrating to a runner than moving the finish line after the race has started. After all, how can you set a proper pace if the distance keeps changing? How will you know you’ve succeeded if the definition of success is in flux?

In a sense, that’s what has happened over the years in the field of artificial intelligence (AI). What would you call something that could add, subtract, multiply and divide large, complicated numbers in an instant? You’d probably call it smart, right? Or what if it could memorize massive quantities of seemingly random data and recall it on the spot, in sequence, and never make a mistake? You might even interpret that sort of brain power as a sign of genius. But what exactly does it mean to be intelligent, anyway?

Now that calculators are included as default features on our phones and smartwatches, we don’t consider them to be particularly intelligent. We also have databases with seemingly infinite capacity at every turn, so we no longer view these abilities as indicative of some sort of higher intelligence, but rather as features of an ordinary, modern computer. The bottom line is that the bar for what is generally considered smart has moved, and not for the first time.

What Does It Mean to Be Intelligent?

There was a time when we thought that chess was such a complex game that only people with superior brain power could be champions. Surely, the ability to plot strategies, respond to an opponent’s moves and see many moves ahead, weighing hundreds or even thousands of possible outcomes, was proof of incredible intellect, right?

That was pretty much the case until 1997, when IBM’s Deep Blue computer beat grandmaster and world champion Garry Kasparov in a six-game match. Was Deep Blue intelligent even though the system couldn’t even read a newspaper? Surely, intelligence involved more than just being a chess savant. The bar for smart had moved.

Consider the ability to consume and comprehend huge stores of unstructured content written in a form that humans can read but computers struggle with due to the vagaries of normal expression, such as idioms, puns and other quirks of language. Take, for example, saying that “it’s raining cats and dogs” or that someone has “cold feet”: the former has nothing to do with animals, and the latter is not a condition that can be remedied with wool socks.

What if a system could read this sort of information nonstop across a wide range of categories, never forget anything it reads and recall the facts relevant to a given clue with subsecond response time? What if it was so good at this exercise that it could beat the best in the world with more correct responses in less time? That would surely be the sign of a genius, wouldn’t it?

It would have been, until 2011, when IBM’s Watson computer beat two grand champions at the game of Jeopardy! while the world watched on live TV. Even so, was Watson intelligent, or just really good at a given task, as its predecessors had been? The bar for smart had moved yet again.

Passing the Turing Test: Are We Near the Finish Line?

The gold standard for AI — proof that a machine is able to match or exceed human intelligence in its various forms by mimicking the human ability to discover, infer and reason — was established in 1950 by Alan Turing, widely considered the father of theoretical computer science and AI. The Turing Test involved having a person communicate with another human and a machine. If that person was unable to distinguish through written messages whether they were conversing with the other person or the computer, the computer would be considered intelligent.

This elegant test incorporated many elements of what we consider intelligence: natural language processing, general knowledge across a wide variety of subjects, flexibility and creativity, and a certain social intelligence that we all possess, but may take for granted in personal communications until we encounter a system that lacks it. Surely, a computer that can simulate human behavior and knowledge to the extent that a neutral observer could not tell the difference would be the realization of the AI dream — finish line crossed.
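
To make the setup concrete, here is a minimal sketch of the imitation game in Python. Everything in it is illustrative, not a real evaluation harness: the ScriptedJudge class, the respondent callables and the random guess are stand-ins invented for this sketch, chosen only to show the blind, text-only structure of the test.

```python
import random

class ScriptedJudge:
    """Toy judge (hypothetical, for illustration): asks canned questions,
    records the replies, then guesses which respondent is the machine.
    A real judge would reason over the transcript; this one guesses at
    random, purely to keep the sketch self-contained and runnable."""

    def __init__(self, questions):
        self.questions = list(questions)
        self.transcript = []

    def ask(self):
        return self.questions.pop(0)

    def receive(self, label, reply):
        self.transcript.append((label, reply))

    def guess(self):
        return random.choice(["A", "B"])

def imitation_game(judge, human, machine, rounds=3):
    """Returns True if the machine fooled the judge in this session."""
    # Hide the two identities behind randomly assigned labels A and B,
    # so the judge sees only written replies, never who sent them.
    (kind_a, respond_a), (kind_b, respond_b) = random.sample(
        [("human", human), ("machine", machine)], 2
    )
    for _ in range(rounds):
        question = judge.ask()
        judge.receive("A", respond_a(question))
        judge.receive("B", respond_b(question))
    actual_machine = "A" if kind_a == "machine" else "B"
    return judge.guess() != actual_machine

# Stand-in respondents: both return vague, human-sounding answers.
human = lambda q: "Hard to say. It depends on the day, honestly."
machine = lambda q: "Hard to say. It depends on the day, honestly."

judge = ScriptedJudge([
    "Where did you grow up?",
    "What does 'cold feet' mean?",
    "Tell me a joke.",
])
print("Machine fooled the judge:", imitation_game(judge, human, machine))
```

In practice, a test like the 2014 event described below runs many such sessions with many human judges, and the machine “passes” if it fools enough of them; that event used Turing’s own prediction of fooling 30 percent of judges after five minutes of conversation as its threshold.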

That was the conventional wisdom until 2014, when a chatbot persona named Eugene Goostman managed to fool 33 percent of evaluators into thinking they were talking to a 13-year-old Ukrainian boy. Surely, this achievement would have convinced most people that AI was finally here now that a machine had passed the iconic Turing Test, right? Nope — you guessed it — the bar for smart had moved.

How AI for Cybersecurity Is Raising the Bar

Now, we have systems doing what was previously unthinkable, but there is still a sense that we’ve yet to see the full potential of AI for cybersecurity. The good news is that we now have systems like Watson that can do anything from recommending treatment for some of the most intractable cancer cases to detecting when your IT systems are under attack, by whom and to what extent. Watson for Cybersecurity does the latter today by applying knowledge gleaned from reading millions of unstructured documents to the precise details of a particular IT environment. Better still, it does all this with the sort of speed even the most experienced security experts could only dream of.

Does it solve all the problems of a modern security operations center (SOC)? Of course not. We still need human intelligence and insight to guide the process, make sense of the results and devise appropriate responses that account for ethical dilemmas, legal considerations, business priorities and more. However, the ability to reduce the time for investigations from a few hours to a few minutes can be a game changer. There’s still much more to be done with AI for cybersecurity, but one thing’s for sure: We have, once again, raised the bar for smart.
