October 2, 2017 By Bob Stasio 2 min read

Over the past year, our industry has seen an explosion in artificial intelligence and cognitive solutions to assist with some of the most challenging cybersecurity problems. In some cases, cognitive solutions can increase analytical capability by 50 percent.

With cognitive technology providing such high efficiencies for security teams, people often ask me what is next for the human analyst. Will machines take over our roles in security? Given the estimated 1.8 million cybersecurity job gap by 2022, according to Frost & Sullivan, this topic is top of mind for most organizations.

A Glide Path to Cognitive Security Success

My advice to security professionals is to think about the bigger picture. Instead of a one-to-one replacement of humans with machines, consider the cognitive security glide path. This concept shows that cognitive capabilities exist on a spectrum, and that you must continually build them to keep pace with evolving cyberthreats.

When adopting cognitive solutions, security professionals should weigh the following factors.

Cognitive Security Is a Journey, Not a Destination

There is a progressive level of capability when it comes to cognitive technology in security. The first function should be automating processes that are particularly manual and labor-intensive for humans.

A cognitive security solution should also scan data sources to look for anomalies, pointing analysts to the signal in the noise where they would not otherwise think to look. Finally, machine automation enables analysts to ask complex questions about the data generated by the cognitive solution to identify fraudsters trying to breach the network.
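The anomaly-scanning idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a hypothetical feed of per-hour event counts from a log source and flags hours whose volume deviates sharply from the baseline, pointing the analyst at the signal in the noise.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=3.0):
    """Return indices of hours whose event count deviates sharply
    from the baseline (z-score above the threshold)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:  # perfectly flat traffic, nothing to flag
        return []
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# A quiet baseline with one hour of unusual volume (hour 8):
counts = [100, 102, 98, 101, 99, 100, 103, 97, 500, 101, 100, 99]
print(flag_anomalies(counts))  # [8]
```

Real cognitive tooling draws on far richer features than raw counts, but the principle is the same: surface the outlier so a human decides whether it matters.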

The Human Factor

The human analyst will continue to be very important in advanced cyber operations. The purpose of machine-driven cognitive capabilities is to automate many labor-intensive, time-consuming steps, acting as a force multiplier for your cyber team rather than a force replacement. When most tier-one and tier-two tasks are automated, your current staff has room to grow into tier-three analysts. This gives your security team the human touch, driving more complex searches of the data to discover more advanced threats.

Understand the Kill Chain

The ultimate goal of implementing the cognitive glide path is to defeat the adversary kill chain. This methodology describes the way cybercriminals infiltrate a network in a series of steps, culminating in actions on the objective. At each point along the kill chain, your security organization should be thinking of how it can implement cognitive elements to detect indicators of malicious action, ultimately reducing the adversary dwell time. The faster you can detect threat actors, the more successful you will be at stopping advanced attacks. Cognitive capabilities can greatly accelerate this process.
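The stage-by-stage logic above can be made concrete with a small sketch. Assuming the familiar Lockheed Martin-style kill chain stages and a hypothetical alert feed where each alert is tagged with a stage, the earliest stage at which you detect activity is a rough proxy for how much dwell time you have cut off:

```python
# Kill chain stages, ordered from earliest to latest in an intrusion.
KILL_CHAIN = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command_and_control",
    "actions_on_objectives",
]

def earliest_stage_detected(alerts):
    """Return the earliest kill chain stage present in the alerts.
    Detecting activity earlier in the chain means the adversary is
    caught with less dwell time and less progress toward the objective."""
    seen = {a["stage"] for a in alerts}
    for stage in KILL_CHAIN:
        if stage in seen:
            return stage
    return None

alerts = [
    {"stage": "command_and_control", "src": "10.0.0.5"},
    {"stage": "delivery", "src": "mail-gw"},
]
print(earliest_stage_detected(alerts))  # delivery
```

The point of layering cognitive detection along the chain is to push that earliest detection leftward, from actions on the objective back toward delivery or reconnaissance.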

Empowering Humans With Machine Intelligence

The cognitive glide path can improve the efficacy of a security organization facing a dynamic adversary. Advocates of such a strategy realize that cognitive assets must be rolled out as an iterative process, ultimately empowering the human analyst. This strategic view of the problem will result in more effective methods to control adversary access on your network.

Read the solution brief: Arm security analysts with the power of cognitive security
