November 2, 2016 By Suzy Deffeyes 3 min read

Last week was really exciting, thanks to the energizing atmosphere at the World of Watson 2016 conference. I spent the week talking to customers about how we are integrating Watson for Cyber Security into QRadar and demoing QRadar’s User Behavior Analytics application. As an architect, I always like finding new use cases that customers are interested in, and I found several at World of Watson.

AI: Augmented Intelligence

One of the messages from WoW is that Watson is not artificial intelligence — it’s more accurately described as augmented intelligence. We aren’t trying to replace humans — and that is especially true in the security space.

We aren’t trying to replace security analysts who study threats in their environments and on their networks. We are simply trying to make a very challenging job easier by helping analysts find the needles in haystacks of data and prioritize threats more effectively.

The initial integration of Watson for Cyber Security with IBM QRadar is designed to help security operations center (SOC) analysts study security anomalies more thoroughly and with greater velocity. I demoed this integration during the security keynote at the event.

Training Watson

We’ve been training Watson to understand the language of security. To do this, we created a security-specific machine learning model loosely based on Structured Threat Information Expression (STIX) and Cyber Observable Expression (CybOX) constructs. This allows Watson to pull in and utilize vast amounts of the human-created content written about security. A human analyst cannot possibly read and understand hundreds of published pages of threat information every day; there simply aren’t enough hours.
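To give a rough sense of what such constructs look like, the sketch below models a few STIX- and CybOX-inspired entities as plain Python dataclasses. The class and field names are my own simplification for illustration, not Watson’s actual data model.

```python
# Hypothetical sketch of STIX/CybOX-inspired constructs.
# Names are illustrative simplifications, not Watson's actual model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Observable:
    """A low-level artifact seen on a network or host (CybOX-style)."""
    type: str   # e.g. "url", "ipv4-addr", "file-hash"
    value: str


@dataclass
class Indicator:
    """A pattern of observables suggesting malicious activity (STIX-style)."""
    name: str
    observables: List[Observable] = field(default_factory=list)


@dataclass
class CourseOfAction:
    """A remediation or mitigation recommended for a threat (STIX-style)."""
    name: str
    description: str


# Example: an indicator for a hypothetical RAT campaign and its remediation.
rat_indicator = Indicator(
    name="Poison Ivy beaconing",
    observables=[Observable(type="url", value="http://bad.example.com/check-in")],
)
block_action = CourseOfAction(
    name="Block C2 domain",
    description="Add bad.example.com to the proxy blocklist.",
)
```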

Watson helps by pulling in security blogs, threat research and other natural-language text written about emerging threats and comprehending it from a security point of view. The system understands which URLs in a threat research document are indicators of compromise (IoCs) and places them in a negative context. Watson also understands which URLs in the threat research documents represent a course of action, or remediation, for a threat. These are viewed in a positive context.
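As a concrete, and deliberately naive, illustration of that idea, the sketch below pulls URLs out of a snippet of threat-report text and labels each one as an IoC or a remediation link based on the keywords around it. Watson relies on a trained language model rather than keyword rules, so treat this purely as a toy example; the hint lists and sample text are invented.

```python
# Toy illustration only: label URLs in threat-report text as indicators of
# compromise (negative context) or remediation links (positive context)
# based on nearby keywords. Watson uses a trained model, not keyword rules.
import re

IOC_HINTS = ("malicious", "command and control", "c2", "beacon", "dropper")
REMEDIATION_HINTS = ("patch", "advisory", "mitigation", "remediation", "update")

URL_PATTERN = re.compile(r"https?://\S+")


def classify_urls(text: str, window: int = 80) -> dict:
    """Return {url: 'ioc' | 'remediation' | 'unknown'} using nearby keywords."""
    labels = {}
    for match in URL_PATTERN.finditer(text):
        start, end = match.span()
        context = text[max(0, start - window):end + window].lower()
        if any(hint in context for hint in IOC_HINTS):
            labels[match.group()] = "ioc"
        elif any(hint in context for hint in REMEDIATION_HINTS):
            labels[match.group()] = "remediation"
        else:
            labels[match.group()] = "unknown"
    return labels


report = (
    "The dropper beacons to http://evil.example.com/gate.php for tasking. "
    "Apply the vendor patch described at https://vendor.example.com/advisory for details."
)
print(classify_urls(report))
# {'http://evil.example.com/gate.php': 'ioc',
#  'https://vendor.example.com/advisory': 'remediation'}
```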

In addition, Watson has to be able to understand what type of malware a given article is about. Without a security-specific model, for instance, Watson thinks that poison ivy is a skin rash. In the security realm, however, Poison Ivy is actually a type of remote access Trojan (RAT) used to control a compromised computer.
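A minimal sketch of that kind of disambiguation, assuming nothing more than a small domain lexicon that overrides the everyday sense of a term when the text is known to be about security (the dictionaries below are invented for illustration):

```python
# Hypothetical sketch: prefer the security-domain sense of a term when one
# exists, falling back to the everyday sense otherwise. Illustrative only.
GENERAL_SENSES = {
    "poison ivy": "a plant that causes an itchy skin rash",
    "worm": "a soft-bodied invertebrate",
}

SECURITY_SENSES = {
    "poison ivy": "a remote access Trojan (RAT) used to control compromised hosts",
    "worm": "self-replicating malware that spreads across a network",
}


def resolve_sense(term: str, security_context: bool) -> str:
    """Pick the domain-appropriate meaning of a term."""
    key = term.lower()
    if security_context and key in SECURITY_SENSES:
        return SECURITY_SENSES[key]
    return GENERAL_SENSES.get(key, "unknown term")


print(resolve_sense("Poison Ivy", security_context=True))
# a remote access Trojan (RAT) used to control compromised hosts
```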

Enriched Analysis

Watson for Cyber Security also makes use of traditional, structured threat data. For instance, we pull in curated threat intelligence from IBM’s X-Force research team and use this traditional data to build a large IBM Graph to show relationships between entities.

These large knowledge graphs of structured and unstructured data help enrich the analysis of offenses. Watson for Cyber Security will be able to use cognitive reasoning algorithms to conduct toxicity analyses on relationships in the knowledge graphs, helping analysts know what to focus on.
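To show the shape of that idea, here is a minimal sketch that builds a toy knowledge graph as an adjacency list and computes a naive "toxicity" score from how close an entity sits to known-malicious nodes. The graph contents, scoring formula and entity names are all invented; Watson’s cognitive reasoning algorithms are far more sophisticated.

```python
# Minimal sketch: a toy knowledge graph as an adjacency list, plus a naive
# "toxicity" score measuring proximity to known-malicious entities.
from collections import deque

# Edges link entities observed together in threat intelligence.
graph = {
    "host-42":          ["203.0.113.7", "invoice.doc"],
    "203.0.113.7":      ["evil.example.com"],
    "invoice.doc":      ["poison-ivy-rat"],
    "evil.example.com": ["poison-ivy-rat"],
    "poison-ivy-rat":   [],
}

KNOWN_BAD = {"poison-ivy-rat", "evil.example.com"}


def toxicity(entity: str, max_depth: int = 3) -> float:
    """Score an entity by how close it sits to known-bad nodes (0.0 - 1.0)."""
    seen, score = {entity}, 0.0
    queue = deque([(entity, 0)])
    while queue:
        node, depth = queue.popleft()
        if node in KNOWN_BAD:
            score = max(score, 1.0 / (depth + 1))  # nearer bad nodes weigh more
        if depth < max_depth:
            for neighbor in graph.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append((neighbor, depth + 1))
    return score


print(toxicity("host-42"))  # ~0.33: known-bad nodes sit two hops from host-42
```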

World of Watson Offers a Broad View on Cognitive

For me, the week at World of Watson was eye-opening because it gave me a broader view of cognitive technology outside my focus on security. There were lots of cognitive Internet of Things (IoT) demos — quadcopters, cars, robots and more. IBM’s top technologists presented on a wide range of cognitive topics, including sentiment analysis of natural language, computer vision applications and the machine learning used to train Watson to understand a new domain.

It was great to see the breadth of solutions from the IBM Analytics team that fit naturally with cognitive technology. My inner geek was certainly well-fed. I’m now looking forward to all the exciting ways I’ll be able to apply cognitive technologies in the realm of security.
