March 15, 2017 | By Rick M Robinson

The role of artificial intelligence (AI) in security was a hot topic at last month’s RSA Conference in San Francisco. But cold water was also thrown on the growing tendency of vendors to invoke AI, especially machine learning, as marketing hype.

AI indeed “moves the needle,” Zulfikar Ramzan, the RSA chief technology officer (CTO), said at the conference. But, he added, “the real open question to me is how much has that needle actually moved in practice?”

To cut through the marketing hype, it is necessary to understand the real capabilities and limitations of artificial intelligence in security.

Distinguishing Substance From Hype

Some of what is being hyped as artificial intelligence breakthroughs is actually well-established technology. For example, Ramzan noted that the use of machine learning to recognize and identify hostile traffic is the basis of familiar tools such as spam filters.
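To make that concrete, below is a minimal sketch of the kind of supervised text classifier that underpins a spam filter. The training messages, labels and model choice are illustrative assumptions for this article, not a description of any vendor’s product.

```python
# Minimal sketch: a naive Bayes spam filter, the well-established kind of
# machine learning Ramzan points to. Messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",      # spam
    "Limited offer, claim your reward",      # spam
    "Meeting moved to 3pm, see agenda",      # ham
    "Quarterly report attached for review",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a multinomial naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free reward now"]))  # -> ['spam']
```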

Machine learning technology continues to improve. It is particularly useful in roles such as spotting attacks that do not involve malware, where the overall pattern of activity, rather than a telltale file signature, identifies the attack as a threat. But the hype threatens to produce what Ramzan called a “lemons market” in which security customers cannot readily tell which vendors are offering real value.
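One common reading of that pattern-based detection is unsupervised anomaly detection over behavioral features. The sketch below shows the general idea; the session features and values are hypothetical, not drawn from any product discussed here.

```python
# Minimal sketch: flagging a malware-free attack by its overall behavioral
# pattern rather than a file signature. All features and values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: sessions; columns: [logins_per_hour, bytes_out_mb, distinct_hosts]
normal_sessions = np.array([
    [3, 12.0, 2], [4, 9.5, 3], [2, 15.1, 2], [5, 11.2, 4],
    [3, 10.8, 3], [4, 13.4, 2], [2, 8.9, 3], [3, 12.7, 2],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)

# A credential-abuse pattern: many logins, heavy outbound data, many hosts
suspect = np.array([[40, 250.0, 30]])
print(detector.predict(suspect))  # -> [-1], i.e., anomalous
```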

Artificial Intelligence and the Security Data Challenge

Other security observers point to the sheer volume of traffic data as a key area where artificial intelligence can make an effective contribution.

“Right now, it’s an issue of volume. There’s just not enough people to do the work,” Mike Buratowski, senior vice president of Fidelis Cybersecurity, said at RSA. In this situation, he continued, AI technology “can crunch so much data and present it to somebody.”

In this application, AI works hand in hand, so to speak, with the human intelligence of security analysts. A high-level AI such as Watson can monitor enormous amounts of raw traffic data and look for patterns that it can then pass on to human analysts for closer examination and evaluation. In turn, interaction with its human colleagues allows the AI to refine its search algorithms.
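A minimal sketch of that triage loop, under stated assumptions, follows: the model ranks events, an analyst labels the top of the queue, and the verdicts feed the next training round. The function names, scoring model and queue size are all illustrative; real systems add deduplication, drift handling and far richer feedback.

```python
# Minimal sketch of a human-in-the-loop triage cycle. Everything here is
# illustrative; real pipelines are far more elaborate.
import numpy as np
from sklearn.linear_model import LogisticRegression

def triage_loop(events, analyst_review, rounds=3, queue_size=5):
    """Rank events, send the most suspicious to an analyst, retrain."""
    model = LogisticRegression()
    X_labeled, y_labeled, seen = [], [], set()
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        if len(set(y_labeled)) > 1:  # need both classes before fitting
            model.fit(np.array(X_labeled), np.array(y_labeled))
            scores = model.predict_proba(events)[:, 1]  # P(malicious)
        else:
            scores = rng.random(len(events))  # cold start: sample broadly
        # Most suspicious events the analyst has not yet reviewed
        queue = [i for i in np.argsort(scores)[::-1] if i not in seen][:queue_size]
        for i in queue:
            seen.add(i)
            X_labeled.append(events[i])
            y_labeled.append(analyst_review(events[i]))  # human verdict: 0 or 1
    return model

# Toy usage: 100 random feature vectors and a stand-in "analyst"
events = np.random.default_rng(1).random((100, 4))
model = triage_loop(events, analyst_review=lambda e: int(e[0] > 0.9))
```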

The Future of Cognitive Security

Other innovative approaches to artificial intelligence in security include scanning messaging and other activity patterns on web forums and related sites associated with black market activities. As in network traffic analysis, the role of AI in tracking the Dark Web is to examine very large volumes of unstructured data — big data in the truest sense — for patterns that can then be further scrutinized by human experts.
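As a rough illustration of that kind of unstructured analysis, simple clustering can group similar forum chatter so analysts review clusters rather than individual posts. The posts and parameters below are invented for the example.

```python
# Minimal sketch: clustering unstructured forum posts so a human analyst
# can triage groups instead of single messages. Posts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "selling fresh card dumps, bulk discount",
    "fullz and card dumps available, escrow ok",
    "need exploit for unpatched mail server",
    "buying zero-day exploit, mail server preferred",
]

vectors = TfidfVectorizer().fit_transform(posts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, post in zip(clusters, posts):
    print(label, post)  # analysts review each cluster, not each post
```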

Given the rate at which big data is getting even bigger, the demand for this type of AI augmentation of threat intelligence is sure to grow. This will only expand the possibilities of AI and cognitive computing in the security space.

Listen to the podcast: The Cognitive Transformation is for Everyone

