Today’s security operations centers (SOCs) have to manage data, tools and teams dispersed across the organization, making threat detection and teamwork difficult. Several factors drive this complexity: remote employees working far from their coworkers, the cost and upkeep of legacy tools, migration to the cloud, hybrid environments and the multiple tools and vendors in use. Taken together, these factors have made the average analyst’s job harder than ever. Tracking down a single incident can require hours or even days of collecting evidence. That’s where artificial intelligence (AI) in cybersecurity comes in.

Analysts can spend much of their time gathering data, sifting through gigabytes of events and logs to locate the relevant pieces. While they struggle to cope with the sheer volume of alerts, attackers are free to devise ever more inventive ways of conducting attacks and hiding their trails.

What AI in Cybersecurity Can Do

AI makes the SOC more effective by reducing manual analysis, evidence gathering and threat intelligence correlation — driving faster, more consistent and accurate responses.

Some AI models can determine what type of evidence to collect and from which data sources. They can also locate the relevant signals among the noise, spot patterns common to many incidents and correlate them with the latest threat intelligence. AI in cybersecurity can then generate a timeline and attack chain for the incident. All of this paves the way to faster response and remediation.
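
To make that workflow concrete, here is a minimal sketch of the timeline-building step in Python. It assumes events have already been normalized into a common record; the Event fields, hostnames and sample data are hypothetical stand-ins for what a real system would pull from a SIEM.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical, simplified event record; real SOC events would come
# from a SIEM and carry far more fields.
@dataclass
class Event:
    timestamp: datetime
    host: str
    technique: str   # e.g., a MITRE ATT&CK technique ID
    detail: str

def build_attack_chain(events: list[Event], host: str) -> list[Event]:
    """Filter events for one host and order them into a timeline,
    a trivial stand-in for what an AI-assisted tool might assemble."""
    related = [e for e in events if e.host == host]
    return sorted(related, key=lambda e: e.timestamp)

events = [
    Event(datetime(2024, 5, 1, 9, 14), "srv-01", "T1566", "Phishing email opened"),
    Event(datetime(2024, 5, 1, 9, 20), "srv-01", "T1059", "PowerShell launched"),
    Event(datetime(2024, 5, 1, 9, 2), "srv-02", "T1046", "Port scan observed"),
    Event(datetime(2024, 5, 1, 9, 41), "srv-01", "T1041", "Outbound data transfer"),
]

for step, e in enumerate(build_attack_chain(events, "srv-01"), start=1):
    print(f"{step}. {e.timestamp:%H:%M} {e.technique}: {e.detail}")
```

The value of automating even this trivial step is that the analyst starts from an ordered narrative of the incident instead of raw, interleaved logs.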

AI security tools are very effective at finding false positives. After all, most false positives follow common patterns. X-Force Red Hacking Chief Technology Officer Steve Ocepek reports that his team sees analysts spending up to 30% of their time studying false positives. If an AI can triage those alerts first, humans gain more time, and suffer less alert fatigue, when they handle the most important tasks.
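
As a rough illustration of that triage step, the sketch below trains a toy classifier to score alerts before they reach an analyst. The features, labels and threshold are hypothetical assumptions; a real deployment would learn from thousands of historical alert dispositions.

```python
# A minimal sketch of AI-assisted alert triage. Alerts scored as likely
# false positives are routed away from the analyst queue.
from sklearn.ensemble import RandomForestClassifier

# Toy features per alert: [severity, hits_on_known_bad_ip, off_hours]
X_train = [
    [1, 0, 0], [2, 0, 0], [1, 0, 1],   # historically false positives
    [8, 1, 1], [9, 1, 0], [7, 1, 1],   # historically true positives
]
y_train = [0, 0, 0, 1, 1, 1]           # 0 = false positive, 1 = real threat

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

new_alerts = [[1, 0, 0], [9, 1, 1]]
for alert, p in zip(new_alerts, model.predict_proba(new_alerts)[:, 1]):
    queue = "analyst queue" if p >= 0.5 else "auto-triage (likely false positive)"
    print(f"alert {alert}: P(threat)={p:.2f} -> {queue}")
```

The point is not the model choice but the routing: alerts the model is confident are false positives never consume analyst time.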

The Human Element of AI Security

While the demand for skilled SOC analysts is increasing, it is getting harder for employers to find and retain them. Should you instead aim to completely automate the SOC and not hire people at all?

The answer is no. AI in cybersecurity is here to augment analyst output, not replace it. Forrester analyst Allie Mellen recently shared a great take on this issue.

In “Stop Trying To Take Humans Out Of Security Operations,” Mellen argues that detecting new types of attacks and handling more complex incidents require human smarts, critical and creative thinking and teamwork. Often, talking effectively with users, employees and stakeholders can lead to new insights where data is lacking. When used along with automation, AI removes the most tedious elements of the job. This frees analysts to think, research and learn, giving them a chance to keep up with the attackers.

AI helps SOC teams build intelligent workflows, connect and correlate data from different systems, streamline their processes and generate insights they can act on. Effective AI relies on consistent, accurate and streamlined data, and the workflows created with AI’s help in turn generate the better-quality data needed to retrain the models. SOC teams and AI in cybersecurity grow and improve together, each augmenting and supporting the other.
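
One hedged way to picture that feedback loop in code: analyst verdicts on each alert become new training labels, and the model is periodically retrained on them. The feedback_loop function, verdict rule and sample alerts below are illustrative assumptions, not a production design.

```python
from sklearn.ensemble import RandomForestClassifier

def feedback_loop(model, alert_stream, analyst_verdict, retrain_every=3):
    """Fold analyst verdicts back into the training set and retrain
    periodically, so the workflow produces the data that improves it."""
    features, labels = [], []
    for alert in alert_stream:
        features.append(alert)
        labels.append(analyst_verdict(alert))  # 0 = false positive, 1 = threat
        if len(labels) % retrain_every == 0 and len(set(labels)) > 1:
            model.fit(features, labels)        # periodic retraining
    return model

# Hypothetical verdict rule: treat high-severity alerts as real threats.
verdict = lambda alert: int(alert[0] >= 5)
alerts = [[1, 0], [9, 1], [2, 0], [8, 1], [3, 0], [7, 1]]

model = feedback_loop(RandomForestClassifier(n_estimators=25, random_state=0),
                      alerts, verdict)
print(model.predict([[9, 1], [1, 0]]))  # expect [1 0] after retraining
```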

Is it time to put AI to work in your SOC? Ask yourself these questions first.
