The security industry is facing a perfect storm: an acute skill shortage, an expanding attack surface and increasingly sophisticated adversaries. As a result, the tactics that have served us in the past will no longer suffice. The only way to counter these mounting threats is through collaboration and the strategic use of machine learning to transform computers into trusted allies in the fight against cyberattackers.

Despite the best efforts of university and on-the-job trainers, the shortage of skilled security professionals is expected to reach 1.8 million unfilled jobs by 2022. This crisis comes just as the attack surface is expanding exponentially due to the proliferation of connected devices. At the same time, organized crime and rogue states are becoming major new cybercrime forces to contend with, bringing resources and skills that are orders of magnitude greater than anything the security community has faced in the past.

Fortunately, machine learning and other forms of artificial intelligence (AI) have matured to the point that they’re ready to join humans on the front lines. Computers’ ability to pore over large volumes of data to spot trends and anomalies far outstrips that of humans. Using machine learning algorithms, computers can now ingest a set of basic rules and apply them to large data sets. As they test and iterate on these rules, their understanding grows increasingly sophisticated.

Enhancing Prevention, Detection and Response Capabilities With Machine Learning

Artificial intelligence augments the skills of security analysts and alleviates the talent shortage. These technologies can provide a junior analyst with diagnostic skills and resources that used to take years of experience to develop. This has the potential to address some of our most basic security vulnerabilities. Let’s look at three examples in the areas of prevention, detection and response.

Prevention

Some of the largest breaches in recent years occurred because attackers were able to exploit known vulnerabilities for which patches were already available. For example, the 2014 Heartbleed bug was a flaw in the OpenSSL cryptographic library, and attackers continued to exploit it long after a fix was released because so many systems remained unpatched. Another major breach last year leveraged a known vulnerability in the Apache Struts framework that had been patched two months earlier.

Patch management can be overwhelming for enterprise security professionals. Not only must they continuously monitor the status of all of their existing IT assets, but they also need to keep track of new updates. IT operations management, powered by machine learning, can automate much of the process of inventorying and identifying vulnerable systems.
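The inventorying step can be sketched in a few lines. This is a minimal illustration, not a real vulnerability scanner: the host inventory, package names and fixed-in versions below are invented for the example.

```python
# Minimal sketch: flag hosts running package versions older than a known fix.
# All inventory and advisory data here is invented for illustration.

def parse_version(v):
    """Convert a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_vulnerable_hosts(inventory, advisories):
    """Return (host, package, installed, fixed) for every unpatched install."""
    findings = []
    for host, packages in inventory.items():
        for pkg, fixed in advisories.items():
            installed = packages.get(pkg)
            if installed and parse_version(installed) < parse_version(fixed):
                findings.append((host, pkg, installed, fixed))
    return findings

inventory = {
    "web-01": {"openssl": "1.0.1", "struts": "2.3.32"},
    "web-02": {"openssl": "1.0.2"},
}
advisories = {"openssl": "1.0.2", "struts": "2.5.13"}  # fixed-in versions

for host, pkg, have, need in find_vulnerable_hosts(inventory, advisories):
    print(f"{host}: {pkg} {have} < fixed {need}")
```

A production system would pull the inventory from an asset database and the advisories from a vulnerability feed; the comparison logic stays this simple.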

Machine learning can also address the human element of prevention. Phishing attacks are becoming more sophisticated and harder for humans to detect. Cybercriminals also use scripting to redirect users from legitimate websites to phony ones designed to steal credentials. They launch and take down these pages very quickly — in fact, 70 percent of credentials are stolen in the first hour of a phishing attack. It’s impossible for humans to keep up with this volume, but machines can be trained to look for characteristics common to phony webpages and block them within seconds. They can also share their findings across networks, making each machine more effective.
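To make "characteristics common to phony webpages" concrete, here is a toy heuristic scorer over URL features of the kind often fed to phishing classifiers. The features and thresholds are illustrative only; a real system would learn weights from labeled data rather than hand-code them.

```python
import re

# Toy heuristic: score a URL on features commonly used in phishing detection.
# Feature choices and weights are invented for illustration, not a trained model.

def phishing_score(url):
    host = re.sub(r"^https?://", "", url).split("/")[0]
    score = 0
    if re.fullmatch(r"[\d.]+", host):            # raw IP address as the host
        score += 2
    if host.count(".") >= 3:                     # deeply nested subdomains
        score += 1
    if any(w in url.lower() for w in ("login", "verify", "account-update")):
        score += 1                               # credential-bait keywords
    if "@" in url or "-" in host:                # common obfuscation tricks
        score += 1
    return score

print(phishing_score("https://192.168.0.9/login"))   # scores high
print(phishing_score("https://example.com/about"))   # scores zero
```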

Detection

Behavioral analytics is a type of machine learning that scours massive amounts of system, network and database information to look for anomalous activity. This discipline can be a tremendous resource in reducing insider threats, which account for as much as 75 percent of security breach incidents. For example, machines can spot access attempts from unknown IP addresses, repeated login failures and large downloads of critical data.
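The three warning signs above can be expressed as a simple event scan. This is a sketch only: the event records, the known-IP allowlist and the thresholds are all invented, and a real behavioral analytics system would learn per-user baselines rather than use fixed limits.

```python
from collections import Counter

# Illustrative scan for the three warning signs named in the text.
KNOWN_IPS = {"10.0.0.5", "10.0.0.6"}   # assumed allowlist for this sketch
FAILURE_LIMIT = 3
DOWNLOAD_LIMIT_MB = 500

def flag_events(events):
    alerts, failures = [], Counter()
    for e in events:
        if e["ip"] not in KNOWN_IPS:                     # unknown IP address
            alerts.append(("unknown-ip", e["user"]))
        if e["action"] == "login-fail":                  # repeated failures
            failures[e["user"]] += 1
            if failures[e["user"]] == FAILURE_LIMIT:
                alerts.append(("repeated-failures", e["user"]))
        if e.get("bytes_mb", 0) > DOWNLOAD_LIMIT_MB:     # large download
            alerts.append(("large-download", e["user"]))
    return alerts

events = [
    {"user": "eve", "ip": "203.0.113.7", "action": "login-fail"},
    {"user": "eve", "ip": "203.0.113.7", "action": "login-fail"},
    {"user": "eve", "ip": "203.0.113.7", "action": "login-fail"},
    {"user": "bob", "ip": "10.0.0.5", "action": "download", "bytes_mb": 900},
]
```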

Machine learning can help security teams tackle the vulnerabilities created by improper permissions. A recent Ponemon Institute report found that 62 percent of end users have excessive access to confidential company data. Machines can scan millions of folders on a network and look for warning signs, such as permissions granted to specific individuals or no permissions at all. They can also scour directories to look for login credentials associated with users who no longer work at the company.
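A permissions audit of this kind reduces to iterating over folder records and checking each against the warning signs just listed. The folder records, ACL format and user names below are invented for the sketch; a real audit would read them from a directory service.

```python
# Toy permissions audit: all records here are invented for illustration.
ACTIVE_USERS = {"alice", "bob"}   # assumed current-employee roster

folders = [
    {"path": "/finance/q3", "acl": {"alice": "rw", "everyone": "r"}},
    {"path": "/hr/reviews", "acl": {"carol": "rw"}},   # carol has left
    {"path": "/eng/specs",  "acl": {}},                # no permissions at all
]

def audit(folders):
    warnings = []
    for f in folders:
        if not f["acl"]:
            warnings.append((f["path"], "no-permissions"))
        if "everyone" in f["acl"]:
            warnings.append((f["path"], "open-to-everyone"))
        for user in f["acl"]:
            if user != "everyone" and user not in ACTIVE_USERS:
                warnings.append((f["path"], f"stale-user:{user}"))
    return warnings

for path, reason in audit(folders):
    print(path, reason)
```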

Response

Once an intrusion is detected, the security team needs to minimize damage and expunge the attackers. Immediate priorities include uncovering the nature of the breach, understanding what has been infected and determining how far the poison has spread.

When backed by machine learning, security teams can rapidly create knowledge graphs that depict interconnections that attackers could potentially traverse. They can pinpoint IP addresses, devices and even individual users much more efficiently than they could via manual analysis. That makes it possible for teams to orchestrate and automate a rapid response with a high level of confidence that all infected elements have been contained or removed. Automated processes can take remedial actions such as isolating intruders on a contained subnet, closing ports, quarantining devices and encrypting data.
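At its core, tracing what an attacker could reach from a compromised asset is a graph traversal. The network graph below is hypothetical, but the breadth-first search is exactly the kind of "blast radius" query such knowledge graphs answer.

```python
from collections import deque

# Hypothetical network graph: edges are connections an attacker could traverse.
graph = {
    "web-01": ["app-01", "app-02"],
    "app-01": ["db-01"],
    "app-02": ["db-01"],
    "db-01":  [],
    "hr-01":  [],          # segmented off; not reachable from the breach
}

def blast_radius(graph, start):
    """Breadth-first search: every node reachable from the compromised host."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(blast_radius(graph, "web-01")))
```

Each node in the returned set is a candidate for the remedial actions named above: quarantine, port closure or subnet isolation.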

One intriguing new response technique is moving target defense, which continuously changes the state of resources on the network, such as IP addresses and data locations, so that an attacker is unable to home in on them. It’s impractical for humans to orchestrate such a response, but machines are well-suited for this task.
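The core of the idea can be sketched as a scheduled rotation among an address pool. The pool and rotation rule below are invented for illustration; a real deployment would reconfigure live network interfaces, DNS or SDN rules rather than shuffle strings.

```python
import random

# Sketch of moving target defense: periodically move a service to a new
# address so an attacker's reconnaissance goes stale. Pool is hypothetical.
ADDRESS_POOL = [f"10.1.0.{i}" for i in range(10, 20)]

def rotate(current, pool, rng=random):
    """Pick a new address from the pool, always different from the current one."""
    return rng.choice([a for a in pool if a != current])

addr = ADDRESS_POOL[0]
for _ in range(5):          # each tick, the service "moves"
    addr = rotate(addr, ADDRESS_POOL)
    print("service now at", addr)
```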

Better Together

As promising as machine learning is when it comes to addressing our security needs, we should assume that attackers have access to the same technology. That’s where collaboration can be our secret weapon. Organizations have historically been reluctant to share details about vulnerabilities, intrusions and responses, but the magnitude of today’s threats requires us to put aside competitive concerns for the greater good. Fortunately, numerous collaborative efforts are under way.

One success story is the sector-based Information Sharing and Analysis Centers (ISACs), of which there are currently 24 representing major vertical industries. The Institute of Electrical and Electronics Engineers (IEEE) Industry Connections Security Group (ICSG) addresses issues that are common to all industries, such as malware and encrypted traffic inspection. There are also regional groups, like the Columbus Collaboratory, which is one of about 30 Information Sharing and Analysis Organizations established with the support of the U.S. Department of Homeland Security (DHS). There are even private efforts, such as TruSTAR, which uses anonymous collaboration to share news about cyber incidents. And I would be remiss if I didn’t mention IBM’s own X-Force Exchange threat intelligence-sharing platform.

Cybercriminals can share data, too, but their motives are quite different, and there is little trust among thieves. I believe that the collective intelligence of a global network of security professionals, united in a common cause and reinforced by intelligent machines, is our best defense in the long run. The technology has arrived — now let’s put our heads together and figure out the best ways to use it.

Read the interactive white paper: It’s time to take a proactive approach to threat detection and prevention
