Over the past year, our industry has seen an explosion in artificial intelligence and cognitive solutions designed to assist with some of the most challenging cybersecurity problems. In some cases, cognitive solutions have been reported to increase analytical capability by as much as 50 percent.

With cognitive technology providing such high efficiencies for security teams, people often ask me what is next for the human analyst. Will machines take over our roles in security? Given Frost & Sullivan's estimate of a 1.8 million-worker cybersecurity workforce gap by 2022, this question is top of mind for most organizations.

A Glide Path to Cognitive Security Success

My advice to security professionals is to think about the bigger picture. Instead of a one-to-one replacement of humans with machines, consider the cognitive security glide path: the idea that cognitive capabilities exist on a spectrum that your organization must continually build upon to fight evolving cyberthreats.

When evaluating cognitive solutions, security professionals should weigh the following factors.

Cognitive Security Is a Journey, Not a Destination

Cognitive technology in security delivers capability progressively. The first function to implement should be automation of processes that are particularly manual and labor-intensive for humans.

A cognitive security solution should also scan data sources for anomalies, pointing analysts to the signal in the noise where they would not otherwise think to look. Finally, machine automation enables analysts to ask complex questions of the data generated by the cognitive solution to identify threat actors trying to breach the network.
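As a loose illustration of that anomaly-scanning step, the sketch below flags hours whose log volume deviates sharply from the baseline using a simple z-score. It is a minimal, hypothetical example, not a description of any particular product; real cognitive solutions use far richer models than a single statistic.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of hours whose event volume is anomalous.

    event_counts: per-hour event counts from a log source (hypothetical data).
    An hour is flagged when its z-score exceeds `threshold`.
    """
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:  # perfectly flat baseline: nothing stands out
        return []
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# A quiet baseline with one suspicious spike at hour 5.
counts = [100, 102, 98, 101, 99, 450, 100, 97]
print(flag_anomalies(counts))  # → [5]
```

The point of the example is the workflow, not the math: the machine surfaces the one hour worth looking at, and the analyst decides what it means.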

The Human Factor

The human analyst will continue to be critical in advanced cyber operations. The purpose of machine-driven cognitive capabilities is to automate many labor-intensive, time-consuming steps, providing efficiency that multiplies your cyber force rather than replacing it. Once most tier-one and tier-two tasks are automated, your current staff has room to grow into tier-three analysts. This gives your security team the human touch, driving more complex searches of the data to discover more advanced threats.

Understand the Kill Chain

The ultimate goal of implementing the cognitive glide path is to defeat the adversary kill chain. This methodology describes the way cybercriminals infiltrate a network in a series of steps, culminating in actions on the objective. At each point along the kill chain, your security organization should be thinking of how it can implement cognitive elements to detect indicators of malicious action, ultimately reducing the adversary dwell time. The faster you can detect threat actors, the more successful you will be at stopping advanced attacks. Cognitive capabilities can greatly accelerate this process.
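To make the dwell-time idea concrete, here is a hedged sketch that maps detection alerts onto kill chain stages and reports the earliest stage at which the intrusion was caught. The stage names follow the widely cited Lockheed Martin kill chain model; the alert format and function names are hypothetical, invented for illustration.

```python
# Stages of the (widely cited) Lockheed Martin cyber kill chain, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objective",
]

def earliest_stage(alerts):
    """Return the earliest kill chain stage observed in a batch of alerts.

    alerts: iterable of (stage, description) tuples from detection rules
    (a hypothetical format). Catching activity at an earlier stage means
    shorter adversary dwell time.
    """
    indices = [KILL_CHAIN.index(stage) for stage, _ in alerts
               if stage in KILL_CHAIN]
    return KILL_CHAIN[min(indices)] if indices else None

alerts = [
    ("command-and-control", "beacon to known bad domain"),
    ("delivery", "phishing attachment detonated in sandbox"),
]
print(earliest_stage(alerts))  # → delivery
```

Tracking how far left on the chain your detections land over time is one simple way to measure whether added cognitive capabilities are actually reducing dwell time.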

Empowering Humans With Machine Intelligence

The cognitive glide path can improve the efficacy of a security organization facing a dynamic adversary. Advocates of such a strategy realize that cognitive assets must be rolled out as an iterative process, ultimately empowering the human analyst. This strategic view of the problem will result in more effective methods to control adversary access on your network.

Read the solution brief: Arm security analysts with the power of cognitive security
