Today’s security operations centers (SOCs) have to manage data, tools and teams dispersed across the organization, making threat detection and collaboration difficult. Several factors drive this complexity: remote work with colleagues spread across distant locations, the cost and maintenance of legacy tools, the migration to the cloud, hybrid environments and the multiple tools and vendors in use. Taken together, these factors have made the average analyst’s job harder than ever. Tracking down a single incident often requires hours or even days of collecting evidence. That’s where artificial intelligence (AI) in cybersecurity comes in.

Analysts can spend much of their time gathering data, sifting through gigabytes of events and logs to locate the relevant pieces. While they struggle to cope with the sheer volume of alerts, attackers are free to devise ever more inventive ways of conducting attacks and hiding their trails.

What AI in Cybersecurity Can Do

AI makes the SOC more effective by reducing the manual work of analysis, evidence gathering and threat intelligence correlation, driving faster, more consistent and more accurate responses.

Some AI models can determine what type of evidence to collect and from which data sources. They can also separate relevant signals from the noise, spot patterns seen in many common incidents and correlate findings with the latest threat intelligence. AI in cybersecurity can then generate a timeline and attack chain for the incident. All of this paves the way for rapid response and remediation.
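To make the correlation idea concrete, here is a minimal, hypothetical sketch in Python: it groups raw events by a shared indicator (the source IP) and orders each group chronologically to form a rough incident timeline. The event format and field names are assumptions for illustration, not any particular product’s schema.

```python
# Hypothetical sketch: correlating raw events into an incident timeline by a
# shared indicator (here, source IP). Field names and event format are assumed.
from collections import defaultdict
from datetime import datetime

events = [
    {"time": "2024-05-01T10:02:11", "source_ip": "203.0.113.7", "action": "failed_login"},
    {"time": "2024-05-01T10:05:43", "source_ip": "203.0.113.7", "action": "successful_login"},
    {"time": "2024-05-01T10:09:02", "source_ip": "198.51.100.4", "action": "port_scan"},
    {"time": "2024-05-01T10:15:30", "source_ip": "203.0.113.7", "action": "privilege_escalation"},
]

# Group events by the indicator they share, then sort each group chronologically.
incidents = defaultdict(list)
for event in events:
    incidents[event["source_ip"]].append(event)

for ip, related in incidents.items():
    related.sort(key=lambda e: datetime.fromisoformat(e["time"]))
    timeline = " -> ".join(e["action"] for e in related)
    print(f"{ip}: {timeline}")
```

Real SOC tooling correlates on many more indicators (users, hosts, file hashes) and across far larger data volumes, but the basic shape of the output is the same: a per-incident timeline an analyst can read at a glance.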

AI security tools are very effective at identifying false positives, since most false positives follow common patterns. X-Force Red Hacking Chief Technology Officer Steve Ocepek reports that his team sees analysts spending up to 30% of their time studying false positives. If AI can handle those alerts first, analysts gain back time and suffer less alert fatigue, leaving them free to focus on the most important tasks.
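As a rough illustration of pattern-based triage, the following hypothetical Python sketch trains a small classifier on alerts that analysts have already labeled and auto-closes new alerts the model scores as very likely false positives. The features, threshold and toy data are assumptions, not drawn from any specific tool.

```python
# Hypothetical sketch: triaging likely false positives before they reach an analyst.
# Features, labels and the 0.2 threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [severity, asset_criticality, times_seen_before, matched_threat_intel]
X_train = np.array([
    [1, 1, 50, 0],   # low severity, noisy rule, no intel match -> false positive
    [2, 1, 30, 0],
    [4, 5, 1, 1],    # high severity, critical asset, intel match -> true positive
    [5, 4, 2, 1],
])
y_train = np.array([0, 0, 1, 1])  # 0 = false positive, 1 = needs analyst review

model = LogisticRegression().fit(X_train, y_train)

new_alert = np.array([[1, 2, 40, 0]])
prob_true_positive = model.predict_proba(new_alert)[0][1]
if prob_true_positive < 0.2:
    print("Auto-close as likely false positive, log for audit")
else:
    print("Escalate to analyst queue")
```

In practice the threshold would be set conservatively, since auto-closing a real incident is far costlier than sending one more false positive to an analyst.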

The Human Element of AI Security

While the demand for skilled SOC analysts is increasing, it is getting harder for employers to find and retain them. Should you instead aim to completely automate the SOC and not hire people at all?

The answer is no. AI in cybersecurity is here to augment the work of analysts, not replace them. Forrester analyst Allie Mellen recently shared a great take on this issue.

In “Stop Trying To Take Humans Out Of Security Operations,” Mellen argues that detecting new types of attacks and handling more complex incidents require human intelligence, critical and creative thinking and teamwork. Often, simply talking to users, employees and stakeholders can yield new insights where data is lacking. When used along with automation, AI removes the most tedious elements of the job. This gives analysts time for thinking, researching and learning, and a chance to keep up with the attackers.

AI helps SOC teams build intelligent workflows, connect and correlate data from different systems, streamline their processes and generate insights they can act on. Effective AI relies on consistent, accurate and streamlined data, and the workflows created with the help of AI in turn generate the higher-quality data needed to retrain the models. SOC teams and AI in cybersecurity grow and improve together, each augmenting and supporting the other.
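One way to picture that feedback loop is the hypothetical sketch below: analyst verdicts on AI-triaged alerts are recorded and, once enough new labels accumulate, used to retrain the triage model. The record format and the retrain_model placeholder are assumptions for illustration, not part of any real product.

```python
# Hypothetical sketch of the human-AI feedback loop: analyst verdicts are stored
# and periodically fed back into model retraining. retrain_model() is a placeholder.
def retrain_model(labeled_alerts):
    # Stand-in for a real training pipeline (feature extraction, fitting, validation).
    print(f"Retraining on {len(labeled_alerts)} analyst-labeled alerts")

feedback_store = []

def record_verdict(alert_id, ai_verdict, analyst_verdict):
    # Keep every analyst decision so the model learns from corrections as well as confirmations.
    feedback_store.append({
        "alert_id": alert_id,
        "ai_verdict": ai_verdict,
        "analyst_verdict": analyst_verdict,
    })
    # Retrain once enough new labels have accumulated (threshold chosen arbitrarily here).
    if len(feedback_store) >= 3:
        retrain_model(feedback_store)
        feedback_store.clear()

record_verdict("A-1001", "false_positive", "false_positive")
record_verdict("A-1002", "true_positive", "false_positive")
record_verdict("A-1003", "false_positive", "true_positive")
```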

Is it time to put AI to work in your SOC? Ask yourself these questions first.
