80% of the world’s data has been invisible to traditional systems. Until now.

The best security professionals build their body of knowledge every day through experience, conversations with colleagues, conferences and by staying current with online sources such as blogs, research papers and publications. But people can only consume and make sense of a fraction of this data, and much of it is unstructured (created by humans, for humans), making it inaccessible to traditional systems. As a result, most data remains untapped and dark to an organization's defenses.

Watson for Cyber Security

Watson for Cyber Security shines a light on the data that has previously been hidden from organizational defenses — uncovering new insights, patterns and security context never before seen. Think about the 75,000+ documented software vulnerabilities, 10,000+ security research papers published each year and 60,000+ security blogs published each month. What’s possible now is the ability to quickly interpret this data — created by humans, for humans — and integrate it with structured data from countless sources and locations.
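
To make this concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical data and function names) of what linking unstructured security text to structured vulnerability records can look like. It is not a description of how Watson for Cyber Security is implemented.

```python
# Illustrative sketch only: a toy example of connecting unstructured security
# text (a blog snippet) to structured vulnerability records. Hypothetical data
# and names; not Watson's actual pipeline.
import re

# Hypothetical structured data, e.g. rows from a vulnerability database.
structured_vulns = {
    "CVE-2017-0144": {"cvss": 8.1, "component": "SMBv1", "patched": True},
    "CVE-2014-0160": {"cvss": 7.5, "component": "OpenSSL", "patched": True},
}

# Unstructured text "created by humans, for humans", e.g. a security blog post.
blog_post = (
    "Researchers observed renewed exploitation of CVE-2017-0144 in ransomware "
    "campaigns targeting unpatched SMBv1 servers."
)

def extract_cve_ids(text: str) -> list[str]:
    """Pull CVE identifiers out of free-form prose."""
    return re.findall(r"CVE-\d{4}-\d{4,7}", text)

def enrich(text: str) -> list[dict]:
    """Join mentions found in unstructured text with structured records."""
    results = []
    for cve in extract_cve_ids(text):
        record = structured_vulns.get(cve)
        if record:
            results.append({"cve": cve, **record, "mentioned_in": text[:60] + "..."})
    return results

print(enrich(blog_post))
```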

The result: Watson will arm security analysts with the collective knowledge to respond to threats with greater confidence, at speed and scale.

Step into the cognitive era: Learn more about Watson for Cyber Security
