In a previous blog post, I covered some of the challenges encountered by security operations centers (SOCs) and how leveraging artificial intelligence (AI) can help alleviate these challenges, including the cybersecurity skills shortage, unaddressed security risks and long dwell times. According to ISACA’s State of Cybersecurity Report, 78 percent of respondents expect the demand for technical cybersecurity roles to increase in the future. The report also notes that the effects of the skills shortage are expected to worsen.

This is where AI can step in and help lighten the load considerably.

Justify Your Spend

During a time of tight budgets and IT spend, there is no doubt that any new expenditure must have a solid business justification. When considering a new security initiative or solution, it’s imperative that its improvements support business-critical decision-making. Further, if your organization is going to leverage a new AI tool (or any new solution or approach), there has to be a way to confirm that the new method clearly outperforms the old.

Typically, you will need to clearly demonstrate these performance improvements to the business and deliver reports to several different stakeholders, each of whom looks at different metrics based on their role. Below are some guidelines to consider when establishing or reassessing performance metrics while implementing an AI solution to bolster your organization’s security.

Establish Realistic Metrics

You may already have an idea of what metrics you’d like to evaluate. If not, now is a good time to consider them. Metrics need to be relevant, timely and trackable. Always establish a baseline before deploying your new AI solution so you can compare your SOC’s performance before and after implementation and track improvement at regular intervals as the AI learns. Obtaining these figures should be relatively simple and not overly reliant on manual processes, which can be time-consuming and prone to error.
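As a simple illustration of the before-and-after comparison, here is a minimal sketch in Python. It assumes you can export a single metric, such as average alert triage time, from your ticketing system; the sample values are hypothetical placeholders for your own baseline and post-deployment measurements.

# Illustrative sketch only: the triage times below are hypothetical sample data.
# It compares a baseline metric against the same metric measured after the AI
# tool is deployed and reports the percentage improvement.

from statistics import mean

baseline_triage_minutes = [42, 55, 38, 61, 47]   # before AI deployment
post_ai_triage_minutes = [30, 28, 35, 26, 33]    # after AI deployment

baseline_avg = mean(baseline_triage_minutes)
post_ai_avg = mean(post_ai_triage_minutes)
improvement_pct = (baseline_avg - post_ai_avg) / baseline_avg * 100

print(f"Baseline average triage time: {baseline_avg:.1f} min")
print(f"Post-AI average triage time:  {post_ai_avg:.1f} min")
print(f"Improvement: {improvement_pct:.1f}%")

Rerunning the same calculation at regular intervals gives you the trend line to track as the AI learns.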

Define Success for Different Stakeholders

Metrics presented to the board and C-suite are usually different from the metrics regularly needed by the SOC analyst team. Though chief information security officers (CISOs) are typically interested in bottom-line numbers, SOC analysts work with metrics at a more granular level.

For example, security analysts focus on the security posture of the organization and look at the number of AI security alerts, the AI’s average time to investigate incidents, the percentage of incidents it correctly escalates to upper-tier analysts and the percentage of false positives. Senior executives, such as CISOs, CEOs and board members, are more interested in outcome-centric metrics like dwell time, mean time to detect (MTTD), mean time to respond/remediate (MTTR) and what a security breach could potentially cost the organization. Be sure to have a plan for distilling these high-level insights from the in-the-weeds figures; one simple rollup approach is sketched below.
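A minimal rollup sketch, assuming your SIEM or ticketing system can export per-incident records. The field names and sample records are hypothetical, but the point is that the executive-level rates fall out of the same granular data analysts already track.

# Illustrative sketch only: the incident records and field names are hypothetical.
# It distills granular, analyst-level figures into the summary rates that
# executives typically ask about.

incidents = [
    {"escalated": True,  "false_positive": False, "hours_to_investigate": 3.0},
    {"escalated": False, "false_positive": True,  "hours_to_investigate": 0.5},
    {"escalated": True,  "false_positive": False, "hours_to_investigate": 5.5},
    {"escalated": False, "false_positive": False, "hours_to_investigate": 1.5},
]

total = len(incidents)
escalation_rate = sum(i["escalated"] for i in incidents) / total
false_positive_rate = sum(i["false_positive"] for i in incidents) / total
avg_investigation_hours = sum(i["hours_to_investigate"] for i in incidents) / total

print(f"Alerts handled:           {total}")
print(f"Escalation rate:          {escalation_rate:.0%}")
print(f"False-positive rate:      {false_positive_rate:.0%}")
print(f"Avg. time to investigate: {avg_investigation_hours:.1f} h")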

Don’t Reinvent the Wheel

There are some metrics that have already been established and are being used widely in cybersecurity. Leveraging these existing metrics gives you useful benchmarks, guidelines and trends that are well-known across your industry.

There are many noteworthy publications and reports that may be useful in this way, such as the latest Cost of a Data Breach Report, the IBM X-Force Threat Intelligence Index and the ISACA State of Cybersecurity report, which share valuable information on current challenges, security breach costs, trends and recommendations. Some of these key metrics are explained below.

Cost of a Data Breach

According to the 2019 Cost of a Data Breach Report, the average cost of a data breach was $8.19 million in the U.S. and $3.9 million globally. This is the single most important metric that senior executives are interested in tracking for their organizations. They can set a benchmark against the U.S. or global number and then implement initiatives to insulate their organization from these costs. Several factors contribute to a data breach’s direct costs (e.g., fines and settlements) and indirect costs (e.g., reputational damage).

Dwell Time

Another important metric that senior executives use is dwell time, the amount of time a cyberattacker has access to the environment. Unsurprisingly, the more quickly a breach is spotted and plugged, the lower the potential costs. Dwell time is the sum of two important metrics, MTTD and MTTR, which are defined below and followed by a short calculation sketch.

  • Mean Time to Detect (MTTD): The time it takes, on average, to detect a security incident from the time the network was compromised to the time it was detected.
  • Mean Time to Respond/Remediate (MTTR): The time it takes, on average, to respond to or remediate a breach from the time it was detected.
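
As a rough illustration, the sketch below computes all three figures from per-incident timestamps. The dates are hypothetical; because all three averages are taken over the same set of incidents, average dwell time works out to MTTD plus MTTR.

# Illustrative sketch only: the timestamps are hypothetical. For each incident,
# MTTD is the gap from compromise to detection, MTTR is the gap from detection
# to remediation, and dwell time is the full gap from compromise to remediation.

from datetime import datetime
from statistics import mean

incidents = [
    # (compromised, detected, remediated)
    (datetime(2020, 1, 3), datetime(2020, 2, 10), datetime(2020, 2, 17)),
    (datetime(2020, 3, 1), datetime(2020, 3, 20), datetime(2020, 3, 24)),
]

mttd_days = mean((detected - compromised).days for compromised, detected, _ in incidents)
mttr_days = mean((remediated - detected).days for _, detected, remediated in incidents)
dwell_days = mean((remediated - compromised).days for compromised, _, remediated in incidents)

print(f"MTTD: {mttd_days:.1f} days")
print(f"MTTR: {mttr_days:.1f} days")
print(f"Dwell time: {dwell_days:.1f} days")  # equals MTTD + MTTR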

Other metrics to define include the impact on SOC analyst productivity after AI implementation, the total cost of configuration and ongoing management, and any outsourcing fees directly incurred by AI installation and maintenance.

At the End of the Day

Security professionals need to be able to prove the value that any new tool brings and demonstrate the revenue gained or losses prevented by their decisions. This means clearly demonstrating the benefits — to the SOC and the company at large — derived from implementing a new, security-focused AI solution and quantifying the cost savings.
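To make the cost-savings part concrete, here is a purely illustrative sketch. Every figure in it (risk reduction, hours saved, rates, tool cost) is a hypothetical placeholder to be replaced with your organization’s own estimates, not a number from any report; it simply sets estimated benefits against the tool’s total cost of ownership.

# Illustrative sketch only: all figures are hypothetical placeholders.
# Benefits are estimated as avoided breach cost plus analyst time saved;
# costs are the AI tool's annual total cost of ownership.

avg_breach_cost = 3_900_000            # benchmark breach cost in USD (placeholder)
breach_risk_reduction = 0.05           # estimated reduction in annual breach likelihood
analyst_hours_saved_per_year = 1_200   # from faster triage and fewer false positives
loaded_hourly_rate = 75                # fully loaded analyst cost per hour, USD

annual_tool_cost = 150_000             # licensing, configuration, management, outsourcing

benefits = (avg_breach_cost * breach_risk_reduction
            + analyst_hours_saved_per_year * loaded_hourly_rate)
net_savings = benefits - annual_tool_cost
roi = net_savings / annual_tool_cost

print(f"Estimated annual benefits: ${benefits:,.0f}")
print(f"Net annual savings:        ${net_savings:,.0f}")
print(f"ROI:                       {roi:.0%}")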

To read more about how one organization calculated the ROI of their newly implemented AI solution, read The Total Economic Impact (TEI) of IBM QRadar Advisor With Watson.
