March 6, 2018 By Mary O'Brien 4 min read

The security industry is facing a perfect storm created by the combination of an acute skill shortage, an expanding attack surface and increasingly sophisticated adversaries. As a result, the tactics that have served us in the past will no longer suffice. The only way to battle these mounting threats is through collaboration and the strategic use of machine learning to transform computers into trusted allies in the battle against cyberattackers.

Despite the best efforts of universities and on-the-job training programs, the shortage of skilled security professionals is expected to reach 1.8 million unfilled jobs by 2022. This crisis comes just as the attack surface is expanding exponentially due to the proliferation of connected devices. At the same time, organized crime and rogue states are becoming major new cybercrime forces to contend with, bringing resources and skills that are orders of magnitude greater than anything the security community has faced in the past.

Fortunately, machine learning and other forms of artificial intelligence (AI) have matured to the point that they’re ready to join humans on the front lines. Computers’ ability to pore over large volumes of data to spot trends and anomalies far outstrips that of humans. Using machine learning algorithms, computers can now ingest a set of basic rules and apply them to large data sets. As they test and iterate these rules, their understanding grows increasingly sophisticated.
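To make that idea concrete, here is a minimal sketch, not taken from the article, of a model deriving its own rules from labeled examples. The feature names and the tiny training set are invented for illustration; the point is simply that the learned rules come from data rather than being hand-coded, and they sharpen as more data arrives.

```python
# A minimal sketch, assuming scikit-learn is available. Features and training
# data are hypothetical; the model derives human-readable rules from examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [failed_logins_last_hour, megabytes_downloaded, off_hours_access]
X = [
    [0, 5, 0], [1, 12, 0], [0, 8, 0],        # benign sessions
    [9, 300, 1], [12, 450, 1], [7, 220, 1],  # suspicious sessions
]
y = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = suspicious

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned rules; retraining on more data refines them over time.
print(export_text(model, feature_names=["failed_logins", "mb_downloaded", "off_hours"]))
```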

Enhancing Prevention, Detection and Response Capabilities With Machine Learning

Artificial intelligence augments the skills of security analysts and alleviates the talent shortage. These technologies can provide a junior analyst with diagnostic skills and resources that used to take years of experience to develop. This has the potential to address some of our most basic security vulnerabilities. Let’s look at three examples in the areas of prevention, detection and response.

Prevention

Some of the largest breaches in recent years occurred because attackers were able to leverage known vulnerabilities for which patches were already available. For example, the 2014 Heartbleed bug was a flaw in the OpenSSL cryptographic library that attackers continued to exploit long after a fix had been released. Another major breach last year leveraged known vulnerabilities in the Apache Struts framework that had been patched two months earlier.

Patch management can be overwhelming for enterprise security professionals. Not only must they continuously monitor the status of all of their existing IT assets, but they also need to keep track of new updates. IT operations management, powered by machine learning, can automate much of the process of inventorying and identifying vulnerable systems.
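The inventory-matching step can be illustrated with a short sketch. The asset list, the hypothetical "examplelib" package and the first-fixed-version table below are stand-ins for what a real system would pull from a configuration management database and a vulnerability feed.

```python
# Hedged sketch: flag assets running package versions below the first patched
# release. The inventory and the "first fixed" table are hypothetical stand-ins
# for a real asset database and vulnerability feed.

def as_tuple(version: str):
    """Convert a dotted numeric version string like '2.3.31' to a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

first_fixed = {            # package -> first version containing the fix
    "struts2": "2.3.32",
    "examplelib": "4.5.1", # invented package for illustration
}

inventory = [              # (host, package, installed version)
    ("app-01", "struts2", "2.3.31"),
    ("app-02", "struts2", "2.5.10"),
    ("web-01", "examplelib", "4.4.9"),
]

for host, pkg, installed in inventory:
    fixed = first_fixed.get(pkg)
    if fixed and as_tuple(installed) < as_tuple(fixed):
        print(f"{host}: {pkg} {installed} predates patched version {fixed}")
```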

Machine learning can also address the human element of prevention. Phishing attacks are becoming more sophisticated and harder for humans to detect. Cybercriminals also use scripting to redirect users from legitimate websites to phony ones designed to steal credentials. They launch and take down these pages very quickly — in fact, 70 percent of credentials are stolen in the first hour of a phishing attack. It’s impossible for humans to keep up with this volume, but machines can be trained to look for characteristics common to phony webpages and block them within seconds. They can also share their findings across networks, making each machine more effective.
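A toy version of that training might look like the sketch below. The URL features, example URLs and labels are all invented; a production detector would draw on far richer signals such as page content, certificate data, domain age and reputation feeds.

```python
# Minimal sketch, assuming scikit-learn. Features, training URLs and labels are
# hypothetical; real phishing detectors use many more signals.
import re
from sklearn.linear_model import LogisticRegression

def url_features(url: str):
    return [
        len(url),                                              # unusually long URLs
        url.count("-") + url.count("@"),                       # odd punctuation
        1 if re.search(r"\d{1,3}(\.\d{1,3}){3}", url) else 0,  # raw IP instead of a domain
        1 if "login" in url or "verify" in url else 0,         # credential-bait keywords
    ]

train_urls = [
    "https://example.com/home",
    "https://intranet.example.com/docs",
    "http://192.0.2.1/secure-login-verify",
    "http://examp1e-login.com/@verify-account",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

clf = LogisticRegression().fit([url_features(u) for u in train_urls], labels)
print(clf.predict([url_features("http://203.0.113.9/bank-login-verify")]))  # likely [1]
```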

Detection

Behavioral analytics is a type of machine learning that scours massive amounts of system, network and database information to look for anomalous activity. This discipline can be a tremendous resource in reducing insider threats, which account for as much as 75 percent of security breach incidents. For example, machines can spot access attempts from unknown IP addresses, repeated login failures and large downloads of critical data.
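The simplest form of those checks can be written out directly. The event format, the list of known IP addresses and the thresholds below are assumptions for illustration; a real behavioral analytics system would learn per-user baselines rather than rely on fixed cutoffs.

```python
# Hypothetical sketch of the behavioral checks mentioned above: unfamiliar IPs,
# repeated login failures and unusually large downloads. Thresholds are assumed.
from collections import Counter

events = [  # (user, source_ip, action, bytes)
    ("alice", "10.0.0.5", "login_fail", 0),
    ("alice", "10.0.0.5", "login_fail", 0),
    ("alice", "10.0.0.5", "login_fail", 0),
    ("bob", "198.51.100.7", "login_ok", 0),
    ("bob", "198.51.100.7", "download", 5_000_000_000),
]
known_ips = {"10.0.0.5", "10.0.0.6"}

for user, ip in {(u, ip) for u, ip, _, _ in events}:
    if ip not in known_ips:
        print(f"{user}: access from unfamiliar IP {ip}")

fail_counts = Counter(u for u, _, action, _ in events if action == "login_fail")
for user, count in fail_counts.items():
    if count >= 3:
        print(f"{user}: {count} failed logins in the window")

for user, _, action, size in events:
    if action == "download" and size > 1_000_000_000:
        print(f"{user}: unusually large download ({size:,} bytes)")
```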

Machine learning can help security teams tackle the vulnerabilities created by improper permissions. A recent Ponemon Institute report found that 62 percent of end users have excessive access to confidential company data. Machines can scan millions of folders on a network and look for warning signs, such as permissions granted to specific individuals or no permissions at all. They can also scour directories to look for login credentials associated with users who no longer work at the company.
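A stripped-down version of that scan appears below. The folder-to-users mapping, the former-employee list and the "excessive access" threshold are hypothetical inputs standing in for what a real directory service and identity system would supply.

```python
# Illustrative sketch only: flag the permission warning signs described above.
# The ACL data, former-employee list and threshold are invented for the example.
folder_acls = {
    "/finance/payroll": {"alice", "bob", "carol", "dave", "erin"},
    "/hr/reviews": set(),                      # no explicit permissions at all
    "/eng/designs": {"frank", "mallory"},
}
former_employees = {"mallory"}
max_expected_users = 3                         # assumed "excessive access" cutoff

for folder, users in folder_acls.items():
    if not users:
        print(f"{folder}: no permissions assigned; ownership unclear")
    elif len(users) > max_expected_users:
        print(f"{folder}: {len(users)} users have access; review for over-permissioning")
    stale = users & former_employees
    if stale:
        print(f"{folder}: still accessible to former employees {sorted(stale)}")
```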

Response

Once an intrusion is detected, the security team needs to minimize damage and expunge the attackers. Immediate priorities include uncovering the nature of the breach, understanding what has been infected and determining how far the poison has spread.

When backed by machine learning, security teams can rapidly create knowledge graphs that depict interconnections that attackers could potentially traverse. They can pinpoint IP addresses, devices and even individual users much more efficiently than they could via manual analysis. That makes it possible for teams to orchestrate and automate a rapid response with a high level of confidence that all infected elements have been contained or removed. Automated processes can take remedial actions such as isolating intruders on a contained subnet, closing ports, quarantining devices and encrypting data.
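As a sketch of the knowledge-graph idea, the example below builds a small directed graph of hosts and connections and computes everything reachable from a compromised device. The nodes, edges and compromised host are invented; in practice they would come from asset, identity and network telemetry.

```python
# Hedged sketch, assuming the networkx library. Nodes and edges are hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("laptop-42", "file-server"),    # mounted share
    ("laptop-07", "file-server"),
    ("file-server", "db-primary"),   # service account connection
    ("db-primary", "db-replica"),
])

compromised = "laptop-42"
blast_radius = nx.descendants(g, compromised)  # every node an attacker could reach
print(f"Assets reachable from {compromised}: {sorted(blast_radius)}")
# An automated playbook could then quarantine or re-credential just these assets.
```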

One intriguing new response technique is moving target defense, which continuously changes the state of resources on the network, such as IP addresses and data locations, so that an attacker is unable to home in on them. It’s impractical for humans to orchestrate such a response, but machines are well-suited for this task.
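The scheduling core of that idea can be sketched as follows. The address pool, service names and rotation loop are hypothetical; a real deployment would drive SDN controllers or DNS/DHCP updates rather than printing the new mapping.

```python
# Hypothetical sketch of the rotation step in moving target defense: services
# are periodically remapped to fresh addresses so a scouted address goes stale.
import random

address_pool = [f"10.20.{i}.10" for i in range(1, 50)]
services = ["billing-api", "hr-portal", "build-server"]

def rotate(service_names, pool):
    """Assign each service a fresh, unique address drawn from the pool."""
    return dict(zip(service_names, random.sample(pool, k=len(service_names))))

for cycle in range(3):                  # in practice, a timed or event-driven job
    mapping = rotate(services, address_pool)
    print(f"cycle {cycle}: {mapping}")  # the apply step would update SDN/DNS here
```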

Better Together

As promising as machine learning is when it comes to addressing our security needs, we should assume that attackers have access to the same technology. That’s where collaboration can be our secret weapon. Organizations have historically been reluctant to share details about vulnerabilities, intrusions and responses, but the magnitude of today’s threats requires us to put aside competitive concerns for the greater good. Fortunately, numerous collaborative efforts are under way.

One success story is the sector-based Information Sharing and Analysis Centers (ISACs), of which there are currently 24 representing major vertical industries. The Institute of Electrical and Electronics Engineers (IEEE)’s Industry Connections Security Group (ICSG) addresses issues that are common to all industries, such as malware and encrypted traffic inspection. There are also regional groups, like the Columbus Collaboratory, which is one of about 30 Information Sharing and Analysis Organizations (ISAOs) established with the support of the U.S. Department of Homeland Security (DHS). There are even private efforts, such as TruSTAR, which uses anonymous collaboration to share news about cyber incidents. And I would be remiss if I didn’t mention IBM’s own X-Force Exchange threat intelligence-sharing platform.

Cybercriminals can share data, too, but their motives are quite different, and there is little trust among thieves. I believe that the collective intelligence of a global network of security professionals, united in a common cause and reinforced by intelligent machines, is our best defense in the long run. The technology has arrived — now let’s put our heads together and figure out the best ways to use it.

Read the interactive white paper: It’s time to take a proactive approach to threat detection and prevention
