In a world full of threat actors, from cybercriminals to state-sponsored agencies, it’s clear that risks are increasing. The global cost of cybercrime is expected to hit $2 trillion by 2019, four times the 2015 estimate of $500 billion.

As a result, securing data is becoming a more complex endeavor — one that can only be accomplished with the help of augmented intelligence. Over 2.5 quintillion bytes of data is generated every day worldwide, and 80 percent of it is unstructured. Humans alone simply can’t handle all that information.

A Brief History of Threat Intelligence

Organizations began with perimeter defense, which built a strong moat around data. However, perimeter defense was eventually rendered inadequate as cybercrime techniques became more sophisticated.

Companies began purchasing a variety of security products to counteract the risk. They used analytics from security information and event management (SIEM) systems to collect and assess event data. But cybercriminals are well organized and well funded. To stay ahead of defenders, criminals on the Dark Web collaborated and shared techniques, giving rise to cybercrime for hire.
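To ground the SIEM idea, here is a minimal sketch of the kind of correlation rule such systems evaluate over collected event data. The event fields, threshold and time window are illustrative assumptions, not any vendor’s schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy correlation rule: flag source IPs with repeated failed logins in a window.
# Field names, threshold and window are illustrative assumptions only.
FAILED_LOGIN_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

def correlate_failed_logins(events):
    """Return alerts for source IPs exceeding the failed-login threshold."""
    by_source = defaultdict(list)
    for event in events:
        if event["type"] == "auth_failure":
            by_source[event["src_ip"]].append(event["timestamp"])

    alerts = []
    for src_ip, times in by_source.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window from the left until it spans at most WINDOW.
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= FAILED_LOGIN_THRESHOLD:
                alerts.append({"src_ip": src_ip, "count": end - start + 1})
                break
    return alerts

# Six failed logins one minute apart from the same address trigger one alert.
events = [
    {"type": "auth_failure", "src_ip": "203.0.113.7",
     "timestamp": datetime(2017, 5, 1, 9, 0) + timedelta(minutes=i)}
    for i in range(6)
]
print(correlate_failed_logins(events))
```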

Augmented Intelligence Is the Answer

Augmented intelligence, or cognitive augmentation, is the answer. This technology elevates analysts’ performance by providing them with rich, focused threat knowledge. It mitigates the shortage of skilled security analysts by increasing their effectiveness.

Security analysts can consume only a fraction of available threat data, and security systems are effective only at consuming structured data from fully instrumented enterprises. Unfortunately, the vast majority of security content is human-generated and unstructured.

The average organization captures over 17,000 malware alerts per day, and some are spending as much as $1.3 million responding to erroneous or inaccurate malware alerts, according to the Ponemon Institute report, “The Cost of Malware Containment.” Only 19 percent of malware alerts are considered reliable and just 4 percent are ever investigated. Additionally, the cost of breach and incident mitigation is daunting as companies continue to assess methods to augment the limited number of security personnel.
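Taken together, those Ponemon figures imply a daunting daily triage gap. A quick back-of-the-envelope calculation, assuming for simplicity that every investigated alert is also a reliable one, looks like this:

```python
# Back-of-the-envelope triage load using the Ponemon figures cited above.
daily_alerts = 17_000
reliable_rate = 0.19      # share of alerts considered reliable
investigated_rate = 0.04  # share of alerts ever investigated

reliable = daily_alerts * reliable_rate          # about 3,230 alerts worth acting on
investigated = daily_alerts * investigated_rate  # about 680 alerts actually reviewed
# Simplifying assumption: every investigated alert was a reliable one.
untouched_reliable = reliable - investigated

print(f"Reliable alerts per day:        {reliable:,.0f}")
print(f"Investigated alerts per day:    {investigated:,.0f}")
print(f"Reliable but untouched per day: {untouched_reliable:,.0f}")
```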

The only way to digest, comprehend, analyze and leverage that much data is to utilize cognitive systems. The following numbers prove it:

  • There are over 75,000 known software vulnerabilities reported in the National Vulnerability Database.
  • More than 15,000 security blogs are published every month.
  • Approximately 10,000 security research papers are published each year, all as unstructured content.

Cognitive analytics is the most effective way to consume and prioritize large collections of security data because it layers predictive analytics on top of raw information. IBM is one of the first companies to offer this kind of capability with Watson for Cyber Security, training a new generation of systems to understand, reason and learn about rapidly evolving security threats. Watson can analyze and index research, web text, video and threat data at unprecedented speed and scale.
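As a rough illustration of what consuming and prioritizing unstructured content can mean in practice, the sketch below ranks free-text threat reports against a hypothetical organization profile. This is emphatically not Watson’s method; the terms, weights and report snippets are assumptions for illustration only.

```python
import re

# Hypothetical organization profile: technologies and topics that matter locally,
# each with an assumed relevance weight. Not derived from any real product.
ORG_PROFILE = {
    "apache struts": 3.0,
    "windows server": 2.0,
    "ransomware": 2.5,
    "phishing": 1.0,
}

def score_report(text):
    """Score a free-text report by weighted occurrences of profile terms."""
    text = text.lower()
    return sum(weight * len(re.findall(re.escape(term), text))
               for term, weight in ORG_PROFILE.items())

reports = [
    "New ransomware strain spreads via phishing against Windows Server hosts.",
    "Academic paper on side-channel attacks in embedded devices.",
    "Exploit published for a remote code execution flaw in Apache Struts.",
]

# Highest-scoring reports surface first for the analyst.
for report in sorted(reports, key=score_report, reverse=True):
    print(f"{score_report(report):5.1f}  {report}")
```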

Watson Works Side by Side With Human Analysts

Watson does not replace the security analyst; rather, it augments the analyst’s capability by providing analytics from datasets too large for human consumption. By understanding context and applying reasoning, Watson draws insights from large security datasets and becomes progressively more proficient at proactively identifying risks. Trends identified in IBM’s “Cybersecurity in the Cognitive Era: Priming Your Digital Immune System” report demonstrate the need for this new paradigm.

Companies must examine how to supercharge their existing security analysts by giving them access to focused analytics and threat research. Watson seeks hidden patterns in data and recognizes anomalous behaviors to expand its understanding of the global threat environment. It also produces rich contextual information that is key to threat detection and triage.
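For a concrete, if greatly simplified, picture of recognizing anomalous behaviors, the following sketch flags days on which a host’s activity deviates sharply from its own baseline. The data, metric and threshold are illustrative assumptions rather than anything Watson-specific.

```python
import statistics

def anomalous_days(daily_counts, z_threshold=3.0):
    """Return (day_index, count) pairs whose z-score exceeds the threshold."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    if stdev == 0:
        return []
    return [(day, count) for day, count in enumerate(daily_counts)
            if abs(count - mean) / stdev > z_threshold]

# 30 days of outbound-connection counts for one host; the last day spikes.
baseline = [42, 38, 40, 41, 39, 37, 43] * 4 + [44, 410]
print(anomalous_days(baseline))  # only the spike is flagged
```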

The economics of cybercrime will change as this intelligent augmentation expands. Cybercriminals relying on obscurity in a vast array of event data are exposed earlier in the process. Cognitive security represents a huge leap forward, even in its infancy, with its ability to effectively leverage unstructured data to find complex patterns of anomalies. Watson can significantly shorten the timeline from alert to action by applying advanced data analytics and natural language processing.

A Learning Process

How is cognitive security any different from an experienced analyst armed with Google? The key is that cognitive systems understand context and natural language. Watson can correlate unstructured security data from sources in multiple languages, whereas a keyword-based search engine does not understand context. For example, consider the search “Don’t show me a cat.” The engine shows you plenty of cats as a result.
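The cat example can be made concrete with a toy contrast between bare keyword matching and a crude negation check. Neither function reflects how Google or Watson actually parses language; the stop-word list, negation cues and documents are assumptions for illustration only.

```python
# Grossly simplified illustration of why naive term matching mishandles negation.
STOP_WORDS = {"a", "the", "me", "show", "of", "photo", "don't", "do", "not"}
NEGATION_CUES = ("don't", "do not", "not", "without")

def tokens(text):
    return [w.strip(".,?!'").lower() for w in text.split()]

def keyword_match(query, documents):
    """Return documents sharing any token with the query, stop words included."""
    terms = set(tokens(query))
    return [d for d in documents if terms & set(tokens(d))]

def negation_aware_match(query, documents):
    """Exclude documents mentioning the query's content terms when the query negates them."""
    content = {t for t in tokens(query) if t not in STOP_WORDS}
    if any(cue in query.lower() for cue in NEGATION_CUES):
        return [d for d in documents if not content & set(tokens(d))]
    return [d for d in documents if content & set(tokens(d))]

docs = ["a photo of a cat", "a photo of a dog"]
print(keyword_match("Don't show me a cat", docs))         # both documents match, on "a"
print(negation_aware_match("Don't show me a cat", docs))  # only the dog photo remains
```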

Watson, however, understands natural language and can provide more powerful analytics. Its growth continues with instruction from IBM staff as well as faculty and students from eight universities. The system utilizes learning processes to generate suggestions, hypotheses and evidence-based recommendations. This capability will help address the current skills gap, accelerate threat identification and reduce the overall complexity of security analytics.

The economics of cybercrime will be transformed as companies move from being reactive to proactive. Global security companies handle over a trillion security events per month from inside corporate firewalls. Cognitive systems like Watson consider threats in the wild and leverage the rich dataset from instrumented enterprises to drastically cut down on this noise. This capability will evolve, allowing for proactive threat alerting.

For example, suppose a specific exploit found in Eastern Europe is identified via a security alert, a blog post or an event from a monitored system. Companies that combine the power of cognitive analytics with managed security services would be alerted in advance so they can implement preventive action before the threat materializes.
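One hypothetical way to picture that proactive workflow is to match newly published threat intelligence against an inventory of what an organization actually runs, and to warn before the exploit is ever seen locally. The feed format, software names and hosts below are invented for illustration and are not any real service’s data model.

```python
# Illustrative proactive alerting: intersect fresh threat intelligence with the
# organization's asset inventory. All identifiers here are hypothetical.
new_intel = [
    {"id": "INTEL-001", "region": "Eastern Europe",
     "affected_software": "examplecms 2.4", "severity": "high"},
    {"id": "INTEL-002", "region": "Global",
     "affected_software": "legacyftp 1.0", "severity": "medium"},
]

asset_inventory = {
    "web-01": ["examplecms 2.4", "nginx 1.12"],
    "file-02": ["openssh 7.4"],
}

def proactive_alerts(intel_items, inventory):
    """Return alerts for intel items that match software running on known hosts."""
    alerts = []
    for item in intel_items:
        exposed_hosts = [host for host, software in inventory.items()
                         if item["affected_software"] in software]
        if exposed_hosts:
            alerts.append({"intel_id": item["id"], "hosts": exposed_hosts,
                           "severity": item["severity"]})
    return alerts

for alert in proactive_alerts(new_intel, asset_inventory):
    print(alert)  # only INTEL-001 matches, exposing host web-01
```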

Learn More

Cognitive systems do not replace security analysts. Instead, they enable analysts to be more effective by providing contextual threat information. Applying cognitive systems to global threat datasets can reveal patterns so complex and nuanced that a human analyst would easily miss them.

Interested in learning more about how IBM’s Watson and augmented intelligence help organizations stay ahead of the most advanced threats? Watch the “60 Minutes” Watson special and check out our video, “What’s So Revolutionary About the Cognitive Revolution?”

Read the IBM executive report, “Cybersecurity in the Cognitive Era: Priming Your Digital Immune System.”
