March 6, 2018 By Mary O'Brien 4 min read

The security industry is facing a perfect storm created by the combination of an acute skill shortage, an expanding attack surface and increasingly sophisticated adversaries. As a result, the tactics that have served us in the past will no longer suffice. The only way to counter these mounting threats is through collaboration and the strategic use of machine learning to turn computers into trusted allies in the battle against cyberattackers.

Despite the best efforts of university and on-the-job trainers, the shortage of skilled security professionals is expected to reach 1.8 million unfilled jobs by 2022. This crisis comes just as the attack surface is expanding exponentially due to the proliferation of connected devices. At the same time, organized crime and rogue states are becoming major new cybercrime forces to contend with, bringing resources and skills that are orders of magnitude greater than anything the security community has faced in the past.

Fortunately, machine learning and other forms of artificial intelligence (AI) have matured to the point that they’re ready to join humans on the front lines. Computers’ ability to pore over large volumes of data to spot trends and anomalies far outstrips that of humans. Using machine learning algorithms, computers can now ingest a set of basic rules and apply them to large data sets. As they test and iterate on these rules, their understanding grows increasingly sophisticated.

Enhancing Prevention, Detection and Response Capabilities With Machine Learning

Artificial intelligence augments the skills of security analysts and alleviates the talent shortage. These technologies can provide a junior analyst with diagnostic skills and resources that used to take years of experience to develop. This has the potential to address some of our most basic security vulnerabilities. Let’s look at three examples in the areas of prevention, detection and response.

Prevention

Some of the largest breaches in recent years occurred because attackers were able to leverage known vulnerabilities for which patches had already been released. For example, attackers exploited the 2014 Heartbleed bug, a flaw in the OpenSSL cryptographic library, long after fixes were available. Another major breach last year leveraged a known vulnerability in the Apache Struts framework that had been patched two months earlier.

Patch management can be overwhelming for enterprise security professionals. Not only must they continuously monitor the status of all of their existing IT assets, but they also need to keep track of new updates. IT operations management, powered by machine learning, can automate much of the process of inventorying and identifying vulnerable systems.
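To make that concrete, here is a minimal Python sketch of the version-comparison step, assuming a toy asset inventory and a feed of minimum patched versions; the data, package names and field layout are invented for illustration rather than drawn from any particular product.

```python
# Hypothetical sketch: flag assets running software older than the patched
# baseline. The inventory and advisory data below are illustrative only.

def parse_version(version):
    """Split a dotted numeric version string into a tuple for comparison."""
    return tuple(int(part) for part in version.split("."))

# Toy asset inventory, e.g. produced by an automated discovery scan
inventory = [
    {"host": "web-01", "package": "struts", "version": "2.3.31"},
    {"host": "app-02", "package": "httpd", "version": "2.4.27"},
    {"host": "app-03", "package": "httpd", "version": "2.4.29"},
]

# Minimum safe versions, e.g. derived from a vendor advisory feed
patched_baseline = {"struts": "2.3.32", "httpd": "2.4.29"}

def find_unpatched(inventory, baseline):
    """Return assets whose installed version is below the patched baseline."""
    return [
        asset for asset in inventory
        if asset["package"] in baseline
        and parse_version(asset["version"]) < parse_version(baseline[asset["package"]])
    ]

for asset in find_unpatched(inventory, patched_baseline):
    print(f"{asset['host']}: {asset['package']} {asset['version']} needs patching")
```

Machine learning would sit on top of a loop like this, for example to prioritize which unpatched assets are most exposed or most likely to be targeted.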

Machine learning can also address the human element of prevention. Phishing attacks are becoming more sophisticated and harder for humans to detect. Cybercriminals also use scripting to redirect users from legitimate websites to phony ones designed to steal credentials. They launch and take down these pages very quickly; in fact, 70 percent of credentials are stolen in the first hour of a phishing attack. It’s impossible for humans to keep up with this volume, but machines can be trained to look for characteristics common to phony webpages and block them within seconds. They can also share their findings across networks, making each machine more effective.
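As a hedged illustration of what training on "characteristics common to phony webpages" could look like, the sketch below scores URLs with a handful of simple features and a scikit-learn classifier. The features, training URLs and labels are all invented for this example.

```python
# Illustrative sketch: score URLs on simple signals often associated with
# phishing pages. Feature choices and training data here are hypothetical.
import re
from sklearn.linear_model import LogisticRegression

def url_features(url):
    """Extract a few hand-crafted signals from a URL string."""
    return [
        len(url),                                           # unusually long URLs
        url.count("-"),                                     # hyphen-heavy hostnames
        url.count("."),                                     # many subdomains
        int(bool(re.search(r"\d+\.\d+\.\d+\.\d+", url))),   # raw IP used as host
        int("login" in url or "verify" in url),             # credential-bait keywords
        int(not url.startswith("https://")),                # missing TLS
    ]

# Tiny, made-up training set: 1 = phishing, 0 = legitimate
urls = [
    "http://192.168.4.12/paypal-login-verify-account",
    "http://secure-bank-login.example-update.com/verify",
    "https://www.example.com/",
    "https://github.com/ibm",
]
labels = [1, 1, 0, 0]

model = LogisticRegression().fit([url_features(u) for u in urls], labels)

candidate = "http://account-verify.example-login.net/secure"
score = model.predict_proba([url_features(candidate)])[0][1]
print(f"Phishing probability: {score:.2f}")
```

A production system would train on millions of labeled pages and far richer features, but the blocking decision reduces to the same kind of scoring step.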

Detection

Behavioral analytics applies machine learning to scour massive amounts of system, network and database information for anomalous activity. This discipline can be a tremendous resource in reducing insider threats, which account for as much as 75 percent of security breach incidents. For example, machines can spot access attempts from unknown IP addresses, repeated login failures and large downloads of critical data.
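A minimal sketch of that idea, using scikit-learn's IsolationForest over per-user daily activity summaries; the feature set, baseline numbers and the "suspicious day" below are illustrative assumptions rather than real data.

```python
# Minimal sketch of behavioral anomaly detection over per-user activity
# summaries. Features and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed logins, distinct source IPs, MB downloaded] per user per day
baseline_activity = np.array([
    [0, 1, 120], [1, 1, 90], [0, 2, 200], [2, 1, 150],
    [0, 1, 80],  [1, 2, 110], [0, 1, 95], [1, 1, 130],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_activity)

# A day with repeated login failures, unfamiliar source IPs and a bulk download
suspicious_day = np.array([[14, 6, 9500]])
if detector.predict(suspicious_day)[0] == -1:
    print("Anomalous activity: flag for analyst review")
```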

Machine learning can help security teams tackle the vulnerabilities created by improper permissions. A recent Ponemon Institute report found that 62 percent of end users have excessive access to confidential company data. Machines can scan millions of folders on a network and look for warning signs, such as permissions granted to specific individuals or no permissions at all. They can also scour directories to look for login credentials associated with users who no longer work at the company.
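As a rough sketch of such a scan on a POSIX file system, the snippet below walks a directory tree and flags world-writable folders and folders owned by accounts missing from an active-user list. The root path and the user list are hypothetical.

```python
# Rough POSIX-only sketch: flag risky folder permissions and stale ownership.
# The directory root and active-user list below are hypothetical.
import os
import pwd   # Unix-only; Windows ACLs would need a different approach
import stat

active_users = {"alice", "bob"}  # e.g. synced from the identity system

def audit_permissions(root):
    for dirpath, _dirnames, _filenames in os.walk(root):
        info = os.stat(dirpath)
        if info.st_mode & stat.S_IWOTH:
            print(f"World-writable folder: {dirpath}")
        try:
            owner = pwd.getpwuid(info.st_uid).pw_name
        except KeyError:
            owner = str(info.st_uid)  # UID with no account record
        if owner not in active_users:
            print(f"Folder owned by inactive account '{owner}': {dirpath}")

audit_permissions("/srv/shared")
```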

Response

Once an intrusion is detected, the security team needs to minimize damage and expunge the attackers. Immediate priorities include uncovering the nature of the breach, understanding what has been infected and determining how far the poison has spread.

When backed by machine learning, security teams can rapidly create knowledge graphs that depict interconnections that attackers could potentially traverse. They can pinpoint IP addresses, devices and even individual users much more efficiently than they could via manual analysis. That makes it possible for teams to orchestrate and automate a rapid response with a high level of confidence that all infected elements have been contained or removed. Automated processes can take remedial actions such as isolating intruders on a contained subnet, closing ports, quarantining devices and encrypting data.
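As a simplified illustration of the knowledge-graph idea, the sketch below models hosts and user accounts as nodes in a networkx graph and computes everything reachable from a known-compromised workstation, which is one way to scope containment. All nodes and edges are invented for illustration.

```python
# Hedged sketch: model hosts and accounts as a graph and compute the set of
# nodes reachable from a compromised machine to scope containment actions.
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("workstation-17", "user:jdoe"),      # user logs in from this machine
    ("user:jdoe", "file-server-03"),      # user has a share mounted here
    ("file-server-03", "db-server-01"),   # server-to-server connection
    ("workstation-22", "user:asmith"),
])

compromised = "workstation-17"
blast_radius = nx.node_connected_component(graph, compromised)
print("Potentially affected:", sorted(blast_radius - {compromised}))
```

In a real incident the graph would be built automatically from authentication logs, network flows and asset inventories, and the reachable set would feed the containment playbook described above.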

One intriguing new response technique is moving target defense, which continuously changes the state of resources on the network, such as IP addresses and data locations, so that an attacker is unable to home in on them. It’s impractical for humans to orchestrate such a response, but machines are well suited for this task.
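Conceptually, that kind of rotation might be orchestrated like the toy sketch below, which reshuffles service-to-address assignments on a timer. The service names, address pool and interval are assumptions, and a real deployment would push the new mapping to DNS or SDN controllers rather than printing it.

```python
# Toy sketch of moving target defense: periodically reshuffle which internal
# address each service answers on so that reconnaissance data goes stale.
# Services, address pool and the rotation interval are illustrative only.
import random
import time

services = ["auth-api", "billing-api", "reports-api"]
address_pool = [f"10.0.5.{i}" for i in range(10, 40)]

def rotate_addresses(services, pool):
    """Assign each service a fresh, randomly chosen address from the pool."""
    return dict(zip(services, random.sample(pool, len(services))))

for _cycle in range(3):                      # a real system would loop forever
    mapping = rotate_addresses(services, address_pool)
    print("New mapping:", mapping)           # in practice, push to DNS/SDN
    time.sleep(1)                            # interval shortened for the demo
```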

Better Together

As promising as machine learning is when it comes to addressing our security needs, we should assume that attackers have access to the same technology. That’s where collaboration can be our secret weapon. Organizations have historically been reluctant to share details about vulnerabilities, intrusions and responses, but the magnitude of today’s threats requires us to put aside competitive concerns for the greater good. Fortunately, numerous collaborative efforts are underway.

One success story is the sector-based Information Sharing and Analysis Centers (ISACs), of which there are currently 24 representing major vertical industries. The Institute of Electrical and Electronics Engineers (IEEE)’s Industry Connections Security Group (ICSG) addresses issues that are common to all industries, such as malware and encrypted traffic inspection. There are also regional groups, like the Columbus Collaboratory, which is one of about 30 Information Sharing and Analysis Organizations (ISAOs) established with the support of the U.S. Department of Homeland Security (DHS). There are even private efforts, such as TruSTAR, which uses anonymous collaboration to share news about cyber incidents. And I would be remiss if I didn’t mention IBM’s own X-Force Exchange threat intelligence-sharing platform.

Cybercriminals can share data, too, but their motives are quite different, and there is little trust among thieves. I believe that the collective intelligence of a global network of security professionals, united in a common cause and reinforced by intelligent machines, is our best defense in the long run. The technology has arrived — now let’s put our heads together and figure out the best ways to use it.

Read the interactive white paper: It’s time to take a proactive approach to threat detection and prevention
