Words for health and the human body often make their way into the language we use to describe IT. Computers get viruses; companies manage their security hygiene; incident response teams train on their cyber fitness. Framing IT concepts in terms of health can also be useful when looking at security operations centers (SOCs) and jobs in cybersecurity.

For many businesses and other entities today, SOCs are not the healthiest they could be. Jobs in cybersecurity can be stressful and overwhelming due to the volume of alerts. Many teams lack the staff they need to keep up with the influx.

The average SOC receives over 11,000 alerts a day, and 28% of all alerts are never addressed, says the 2020 State of Security Operations study from Forrester Consulting, sponsored by Palo Alto Networks.

What would healthier jobs in cybersecurity look like? Imagine fewer alerts, organized by priority, and less stressed analysts as a result. With their time freed from processing false positives and low-value alerts, analysts could dig into higher-value work and advance their careers. Applying AI and machine learning (ML) from detection all the way through response can set SOCs on the path toward achieving this vision for their analysts and strengthening the organization's security posture.


How AI/ML Advances the Health of Your SOC

From sorting alerts to enabling threat sharing, AI/ML can make the SOC more efficient in triage, analysis and response. Connecting the worlds of IT and health care once more, imagine the human body as a stand-in for your IT landscape. Using AI/ML is akin to suggesting the right medical care.

Detecting Issues

First, let’s consider detection of known threats. Say someone starts feeling sick with a runny nose and itchy eyes. These symptoms are well-known to them as an allergy flare-up. In some cases, this person skips the doctor’s office and heads right to the pharmacy. In others, this person may visit a doctor, who sees these seasonal symptoms, does not see a need for further tests and writes a prescription for the pharmacy.

These familiar symptoms are like a known risk. There are over-the-counter options for allergy relief; in the IT world, these are patches. Adding automation in this scenario is like delivering the right allergy pills to the patient's doorstep, saving a trip to the pharmacy. AI/ML can detect a known risk, spot its signature and apply a patch without needing much effort from a human.
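
To make that known-threat path concrete, here's a minimal sketch of signature-based auto-remediation. Every name in it (KNOWN_FIXES, apply_patch, the alert fields) is a hypothetical stand-in rather than any real product's API:

```python
# Sketch: handling a known threat signature without analyst involvement.
# All names here are illustrative assumptions, not a real product API.

KNOWN_FIXES = {
    "allergy-flareup-sig": "patch-1234",  # known signature -> known fix
}

def apply_patch(asset_id: str, patch_id: str) -> None:
    """Stand-in for whatever patch-management tooling is actually in place."""
    print(f"applying {patch_id} to {asset_id}")

def handle_known_threat(alert: dict) -> str:
    """Auto-remediate when the alert matches a known signature."""
    patch_id = KNOWN_FIXES.get(alert["signature"])
    if patch_id is None:
        return "escalate"  # unknown threat: route to an analyst
    apply_patch(alert["asset_id"], patch_id)
    return "auto-remediated"

print(handle_known_threat({"signature": "allergy-flareup-sig", "asset_id": "host-7"}))
```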

Responding Quickly

Jobs in cybersecurity always involve some surprises. What about protecting the body against uncommon illnesses, or uncommon threats? Maybe the patient in the first example starts having symptoms that are novel or more severe. The patient visits a medical center, where a nurse takes vitals and a doctor reviews symptoms. Sometimes the doctor asks for more bloodwork or X-rays to get a deeper look at the patient's case. After support staff gathers all the data, the doctor starts forming a diagnosis and treatment plan. Sometimes the doctor calls in specialists for support.

In the IT metaphor, AI/ML-assisted threat disposition is like the assistants and labs that support the doctor. AI/ML can help at an early stage by collating data about the IT landscape, as well as from other environments. This shortens the time to a cure before the illness becomes dire. AI/ML can also learn from analysts' decision making and assist with alert disposition.
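
As one way to picture learning from analyst decisions, here's a minimal sketch using scikit-learn, assuming a history of alerts labeled with the disposition an analyst chose. The features and labels are toy values; a real SOC would feed in far richer telemetry:

```python
# Sketch: learning alert disposition from past analyst decisions.
# Features and labels are illustrative; requires scikit-learn.
from sklearn.ensemble import RandomForestClassifier

# Toy training data: [severity, asset_criticality, prior_hits_on_asset]
X = [[1, 1, 0], [2, 3, 1], [5, 5, 4], [1, 2, 0], [4, 4, 3]]
y = ["close", "investigate", "escalate", "close", "escalate"]  # analyst labels

model = RandomForestClassifier(random_state=0).fit(X, y)

# Suggest a disposition for a new alert; the analyst keeps the final say.
new_alert = [[3, 5, 2]]
print(model.predict(new_alert)[0], model.predict_proba(new_alert).max())
```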

Back in the doctor’s office, the patient could be in severe discomfort, with parts of the body weakening. Then, the patient needs to be rushed to the hospital. In IT, a team would call for emergency response when systems are off-kilter or there is a potential breach. The IT asset needs to be protected right away, which could involve calling in other specialists for support. By curating everything known about the IT asset, AI/ML could assist the incident response team with forensic analysis and access to playbooks.
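
A rough sketch of that curated context might look like the following, where the asset inventory, playbook names and lookup logic are all hypothetical illustrations:

```python
# Sketch: assembling context about an affected asset and surfacing a
# matching response playbook. Data sources and names are hypothetical.

ASSET_INVENTORY = {"host-7": {"owner": "finance", "os": "linux", "critical": True}}
PLAYBOOKS = {"ransomware": "PB-ransomware-v3", "phishing": "PB-phishing-v1"}

def build_incident_context(asset_id: str, threat_type: str) -> dict:
    """Gather what is known about the asset plus the recommended playbook."""
    return {
        "asset": ASSET_INVENTORY.get(asset_id, {}),
        "playbook": PLAYBOOKS.get(threat_type, "PB-generic"),
        "threat_type": threat_type,
    }

print(build_incident_context("host-7", "ransomware"))
```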

Making Jobs in Cybersecurity Less Overwhelming

When AI/ML filters out the flood of low-value alerts through prioritization, analysts spend less time in triage and focus on high-value alerts. Phases of alert prioritization include auto-closure, auto-association and auto-escalation with explainability.

Auto-closure is the machine resolving an alert before it makes its way to the analyst's screen. In terms of our metaphor, it's like seeing another patient with a runny nose and itchy eyes, gathering enough data about the problem, and prescribing allergy pills without taking up the time of a health care professional.

Auto-association covers the case where another patient presents symptoms similar to those of the person treated at the medical center. The doctor can then use context, connecting the patient's background with the symptoms at hand. Having more data helps the doctor prescribe treatment, leading to an effective and efficient plan of care. It also makes it easier for the doctor to explain to the patient what's going on.

Auto-escalation with explainability brings a high-priority case forward right away, along with the specific details that demand attention. The role of AI/ML here is to make sure the hospital patient is seen faster, diagnosed and urgently given medication or further treatment. AI supports analysts so specialists can spend their time where it is most needed and resolve critical issues.
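
Tying the three phases together, a prioritization pass might look something like the sketch below. The risk score, the threshold and the incident store are illustrative assumptions, not a prescribed design:

```python
# Sketch of the three prioritization phases: auto-closure, auto-association
# and auto-escalation with explainability. All values are illustrative.

PAST_INCIDENTS = {"phishing-kit-x": "INC-042"}  # signature -> earlier case

def prioritize(alert: dict) -> dict:
    """Route an alert through auto-closure, auto-association or auto-escalation."""
    score = alert["risk_score"]  # assume a 0-100 score from an upstream model
    if score < 10:
        # Auto-closure: resolved before it reaches an analyst's screen
        return {"action": "auto-close", "reason": "low risk score"}
    related = PAST_INCIDENTS.get(alert["signature"])
    if related is not None:
        # Auto-association: attach context from a similar earlier case
        return {"action": "associate", "case": related,
                "reason": f"matches prior incident {related}"}
    # Auto-escalation with explainability: say why it needs attention now
    return {"action": "escalate",
            "reason": f"risk score {score} with no prior context"}

print(prioritize({"risk_score": 87, "signature": "novel-payload"}))
```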

A Healthier SOC Leads to Better Jobs in Cybersecurity

When you add up all these ways AI/ML can advance the ‘health’ of a SOC, the end result is more time. Automation isn’t the end goal of applying AI/ML. It’s about providing better jobs in cybersecurity to the people hard at work defending these systems.

For example, a Level 1 analyst’s day-to-day job might not look all that different as a result of AI/ML. Their work would still involve assessing alerts and conducting research. However, with a machine taking care of the low-value alerts, that analyst would be able to spend more time on fewer cases, going deeper into them. More time could be spent on breach simulations and tabletop tests that shift the entire team’s knowledge and posture from reactive to proactive.

AI/ML could also open brand new avenues for career progression. More time spent researching or focusing on high-value work could help analysts develop skills needed to move to the next level. Or, they might be able to use that time retraining for other critical jobs in cybersecurity like penetration testing, blue team leadership, analytics, architecture or even an expanded AI/ML role.

Freeing the SOC

At the end of the day, jobs in cybersecurity are just like any other type of work. We want to feel fulfillment as we do them. A 2015 study examined factors that lead to SOC analyst burnout. The researchers identified four factors whose absence can lead to burnout: possessing the right skills to do the job, feeling empowered to perform work efficiently, applying creativity to new scenarios and seeing a path for intellectual growth.

When analysts in the study were empowered and given incentives to engage with automation, they could be more creative through two paths. Automation took care of repetitive tasks, so the analysts could pursue more fun and challenging cases. Working with developers to build the tool also tapped into their creativity. These changes in turn led to more chances for intellectual growth, reducing the risk of burnout and creating a healthier workplace.

Building Trust to Create a Virtuous Cycle

Achieving this vision requires a crucial element: trust. Fear is a natural reaction to adding automation. Experts fear AI could take away their jobs in cybersecurity. Giving teams time to audit new systems is critical to building trust.

Before deploying AI/ML, the machine should be put into simulation mode, allowing the team to audit how it performs. As nothing breaks and routine work gives way to less noise, confidence grows. Auditing gives teams time to adjust to a system that should lead to job satisfaction, not fear. And the auditing process should be repeated: conducting short, daily audits ensures that if anything does go wrong, the team will catch it.
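
A daily shadow-mode audit can be as simple as comparing what the model would have done with what analysts actually did. This sketch assumes a record format and a 95% agreement bar purely for illustration:

```python
# Sketch of a daily shadow-mode audit: the model recommends but does not
# act, and its suggestions are compared with the analysts' real decisions.

def daily_audit(records: list[dict]) -> float:
    """Return the fraction of alerts where model and analyst agreed."""
    agreed = sum(1 for r in records if r["model_action"] == r["analyst_action"])
    return agreed / len(records)

day = [
    {"model_action": "auto-close", "analyst_action": "auto-close"},
    {"model_action": "escalate", "analyst_action": "escalate"},
    {"model_action": "auto-close", "analyst_action": "escalate"},  # worth reviewing
]

rate = daily_audit(day)
print(f"agreement: {rate:.0%}")
if rate < 0.95:  # illustrative threshold, not a standard
    print("review disagreements before granting the model more autonomy")
```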

Teaching the machine on an ongoing basis creates a virtuous cycle: people grow able to trust it, and the machine performs at a higher level. AI learns from people and people learn from AI in a feedback loop that makes the team more efficient and creates a stronger cybersecurity posture for the business overall.
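
One way to picture that feedback loop is online learning, where each analyst correction becomes a new training example. This sketch uses scikit-learn's SGDClassifier on toy features; the details are assumptions, not any particular product's method:

```python
# Sketch of the feedback loop: analyst corrections folded back into the
# model via online learning. Features and labels are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array(["close", "escalate"])
model = SGDClassifier(random_state=0)

# Initial fit on a small batch of past analyst decisions
X0 = np.array([[1, 0], [5, 4], [2, 1], [4, 5]])  # toy features per alert
y0 = np.array(["close", "escalate", "close", "escalate"])
model.partial_fit(X0, y0, classes=classes)

# Later, an analyst overrides the model's suggestion; fold the correction back in
model.partial_fit(np.array([[3, 1]]), np.array(["escalate"]))

print(model.predict(np.array([[3, 1]])))
```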

As the 2020 Cost of a Data Breach Report notes, “the effectiveness of security automation in reducing the average cost of a data breach continued to grow” over the past three years.

When it comes to building a healthy SOC and more fulfilling jobs in cybersecurity, AI/ML should be deployed in ways that first improve analysts’ day-to-day work. It’s worth stressing the point: people are the most important element in cybersecurity, and moving to a modern SOC starts with making the job better for them.
