This is the first article in a three-part series on how IBM Watson for Cyber Security can help analysts win the arms race against the increasingly sophisticated cybercrime landscape.

IBM’s latest and most glamorous offering in the security space is QRadar Advisor with Watson. This technology is designed to reduce the time taken to identify, classify and respond to malicious activity in a company’s infrastructure. In beta, it has gone beyond this remit, finding threats and activities beyond the initial scope of the investigation. Is this the dawn of an age of security provided by artificial intelligence? Can Watson save the security world?

Entering the Age of Watson

Security is increasingly seen as a big data problem. The challenge these days is less about noticing that something isn’t right — there are dozens, if not hundreds, of products available to spot anomalies and raise red flags — and more about whether you can consume and act upon that knowledge. The number of alerts keeps increasing as we find more automated ways to identify malicious behavior, and as more people on the dark side create additional attacks.
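To make the "spot anomalies and raise red flags" idea concrete, here is a toy sketch of the simplest possible anomaly detector: flag any day whose alert count sits more than a chosen number of standard deviations from the mean. This is an illustration only, not how any particular product works; the function name and threshold are assumptions for the example.

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the sample mean -- a crude red-flag rule."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if stdev > 0 and abs(c - mean) / stdev > threshold]

# A week of daily alert counts with one obvious spike on day 5.
daily_alerts = [120, 118, 130, 125, 122, 540, 127]
print(flag_anomalies(daily_alerts))  # → [5]
```

Even this trivial rule shows why alert volume is the real problem: the detector happily flags the spike, but deciding whether day 5 is a breach or a patch rollout still takes a human analyst.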

Companies today struggle to find sufficiently skilled and affordable analysts to handle the onslaught. Analysts also face a formidable challenge: They must constantly update their skills through research and be able to apply them precisely and quickly. Security is, perhaps, the fastest moving area in IT today. Even if money were no object, there are just too few professionals. It makes sense, then, to address this with a solution that can easily handle the quantities of data involved.

Watson essentially develops hypotheses about what is happening from the data it sees and then uses statistical techniques to prove or disprove them. The technical detail is fascinating, and the IBM Journal of Research and Development (Volume 56, Number 3.4) provides ample reading in this area. The idea of hypothesis testing machines is not new — in fact, it was the prevailing approach to expert systems as far back as the 1970s.
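As a rough illustration of what "uses statistical techniques to prove or disprove" a hypothesis can mean, here is a minimal one-sample z-test for a proportion: given a baseline rate of failed logins, decide whether today's observed rate is statistically consistent with it. This is a textbook test, not Watson's actual algorithm; the function name, the scenario and the numbers are assumptions for the example.

```python
import math

def rejects_baseline(observed_hits, observed_total, baseline_rate, z_crit=1.96):
    """One-sample z-test for a proportion: True if the observed rate
    differs from the baseline at roughly the 5% significance level."""
    p_hat = observed_hits / observed_total
    se = math.sqrt(baseline_rate * (1 - baseline_rate) / observed_total)
    z = (p_hat - baseline_rate) / se
    return abs(z) > z_crit

# Hypothesis: "this host's failed-login rate matches the ~1% baseline."
# Today, 40 of 1,000 login attempts failed.
print(rejects_baseline(40, 1000, 0.01))  # → True (reject the hypothesis)
```

The hypothesis-testing loop a system like Watson runs is far richer — many candidate hypotheses, many evidence sources — but the core move is the same: state what "normal" would look like, then measure how strongly the data contradicts it.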

Watson in Action

The beta program for QRadar Advisor with Watson offered a taste of what it can achieve, and it has already proven its worth in a live customer environment. However, Watson is a self-learning system: What it has learned so far is not all the knowledge that will ever exist. It will keep refining both its corpus of knowledge and how that knowledge relates to the real world. This leads us to an enticing thought: What will Watson be able to do in the future? Is there a reasonable way we can imagine that?

What we can do is look at Watson’s “sibling” working in the complex world of oncology. This version of IBM’s cognitive computing solution predates Watson for Cyber Security but uses the same algorithms and similar techniques at its heart, and it has done some truly astonishing things. Cancer researchers working with Watson in this field identified six new proteins to investigate in one month. For comparison, human research identified just 28 such proteins in 30 years.

Watson for Cyber Security already delivers meaningful gains in accuracy and speed to organizational security operations centers (SOCs). But it is also showing glimpses of how it will match its older sibling and do much more. When given an offense to analyze, Watson returns useful information that human analysts might miss on their own.

Saving the World, One Cyberthreat at a Time

So, will Watson save the world? I should stress again that Watson, in its current incarnation, is intended to augment, not replace, the work of human analysts. For any system to plausibly “save the world,” there must be a critical, irreplaceable resource it protects well enough for the world to keep functioning. The modern world is completely dependent on technology, communications and the internet, and the consequences of those resources being fully compromised would be dire and far-reaching. It seems reasonable, therefore, to call the internet a critical resource.

Cybercriminals are making far too much money to ever quit, and they take a short-term view: They will not care much if their activities completely undermine the world economic landscape. They will continue to up the ante and generate ever more sophisticated and devious attacks. We are in the middle of an arms race.

It’s possible that we now have a way to see much further and faster than ever before. That capacity will help analysts prevent breaches in the day-to-day SOC and develop the tools and skills to keep up with the changing threat landscape. In this sense, Watson holds the potential to save the security world.
