Today’s security operations centers (SOCs) must manage data, tools and teams dispersed across the organization, which makes threat detection and teamwork difficult. Several factors drive this complexity: remote work with colleagues in far-away places, the cost and maintenance of legacy tools, the migration to cloud, hybrid environments and the multiple tools and vendors in use. Taken together, these factors have made the average analyst’s job more difficult than ever. Tracking down a single incident often requires hours or even days of collecting evidence. That’s where artificial intelligence (AI) in cybersecurity comes in.

Analysts can spend much of their time gathering data, sifting through gigabytes of events and logs to locate the relevant pieces. While they struggle to cope with the sheer volume of alerts, attackers are free to devise ever more inventive ways of conducting attacks and hiding their trails.

What AI in Cybersecurity Can Do

AI makes the SOC more effective by reducing the manual work of analysis, evidence gathering and threat intelligence correlation, driving faster, more consistent and more accurate responses.

Some AI models can determine what type of evidence to collect and from which data sources. They can also separate the relevant signals from the noise, spot patterns that recur across common incidents and correlate them with the latest security data. AI in cybersecurity can then generate a timeline and attack chain for the incident. All of this paves the way for faster response and remediation.
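As a rough sketch of what that correlation step could look like, the minimal Python example below groups alerts from different data sources by the entity they involve and orders them into a timeline. The Alert fields, source names and sample records are hypothetical, not the schema of any particular product.

from dataclasses import dataclass
from datetime import datetime

# Hypothetical, simplified alert record pulled from different data sources
# (EDR, firewall, identity provider). Field names are illustrative only.
@dataclass
class Alert:
    timestamp: datetime
    source: str        # e.g. "edr", "firewall", "idp"
    entity: str        # host or user the alert refers to
    description: str

def build_incident_timeline(alerts, entity):
    """Correlate alerts that share an entity and order them into a timeline."""
    related = [a for a in alerts if a.entity == entity]
    return sorted(related, key=lambda a: a.timestamp)

alerts = [
    Alert(datetime(2022, 5, 3, 15, 5), "firewall", "jdoe", "Outbound traffic to a rare domain"),
    Alert(datetime(2022, 5, 3, 14, 12), "idp", "jdoe", "Impossible-travel login"),
    Alert(datetime(2022, 5, 3, 14, 40), "edr", "jdoe", "Suspicious PowerShell on workstation"),
]

for step in build_incident_timeline(alerts, "jdoe"):
    print(step.timestamp, step.source, step.description)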

AI security tools are also very effective at identifying false positives. After all, most false positives follow common patterns. Steve Ocepek, Hacking Chief Technology Officer of X-Force Red, reports that his team sees analysts spending up to 30% of their time studying false positives. If AI can triage those alerts first, humans have more time, and less alert fatigue, for the most important tasks.
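A minimal sketch of how that kind of triage might work, assuming a backlog of alerts that analysts have already labeled as false positives or real incidents; the features and the scikit-learn model chosen here are illustrative assumptions, not a description of any specific product.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per alert: severity, count of similar alerts seen in
# the past week, and whether the source asset is a known internal scanner.
# Label: 1 = false positive (closed without action), 0 = real incident.
X_train = np.array([
    [2, 40, 1],
    [5,  1, 0],
    [1, 60, 1],
    [4,  3, 0],
])
y_train = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score incoming alerts: a high false-positive probability means the alert can
# be auto-closed or queued for lighter review, freeing analysts for the rest.
new_alerts = np.array([[2, 35, 1], [5, 2, 0]])
fp_probability = model.predict_proba(new_alerts)[:, 1]
print(fp_probability)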

The Human Element of AI Security

While the demand for skilled SOC analysts is increasing, it is getting harder for employers to find and retain them. Should you instead aim to completely automate the SOC and not hire people at all?

The answer is no. AI in cybersecurity is here to augment analyst output, not replace it. Forrester analyst Allie Mellen recently shared a great take on this issue.

In “Stop Trying To Take Humans Out Of Security Operations,” Mellen argues that detecting new types of attacks and handling more complex incidents require human intelligence, critical and creative thinking and teamwork. Often, simply talking to users, employees and stakeholders can surface new insights where data is lacking. Used alongside automation, AI removes the most tedious parts of the job. This gives analysts time for thinking, researching and learning, and a chance to keep up with the attackers.

AI helps SOC teams build intelligent workflows, connect and correlate data from different systems, streamline their processes and generate insights they can act on. Effective AI relies on consistent, accurate and streamlined data, and the workflows created with the help of AI in turn generate the better-quality data needed to retrain the models. SOC teams and AI in cybersecurity grow and improve together, each augmenting and supporting the other.
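Continuing the hypothetical triage sketch above, that feedback loop might look as simple as folding the verdicts analysts record when closing alerts back into the training data; the function and variable names are assumptions for illustration.

import numpy as np

def retrain_with_feedback(model, X_old, y_old, X_reviewed, analyst_verdicts):
    """Fold analyst verdicts on recently scored alerts back into the training set.

    X_reviewed holds the feature vectors of alerts the model scored, and
    analyst_verdicts the labels analysts assigned when closing them
    (1 = false positive, 0 = real incident).
    """
    X_new = np.vstack([X_old, X_reviewed])
    y_new = np.concatenate([y_old, analyst_verdicts])
    model.fit(X_new, y_new)  # retrain on the enlarged, analyst-corrected set
    return model, X_new, y_new

In a real SOC this loop would run through the SIEM or SOAR platform rather than an ad hoc script, but the principle is the same: analyst decisions become the training data that improves the model.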

Is it time to put AI to work in your SOC? Ask yourself these questions first.
