In a previous blog post, I covered some of the challenges encountered by security operations centers (SOCs) and how leveraging artificial intelligence (AI) can help alleviate these challenges, including the cybersecurity skills shortage, unaddressed security risks and long dwell times. According to ISACA’s State of Cybersecurity Report, 78 percent of respondents expect the demand for technical cybersecurity roles to increase in the future. The report also mentions that the effects of the skills shortage are going to get worse.

This is where AI can step in and help lighten the load considerably.

Justify Your Spend

During a time of tight budgets and constrained IT spend, there is no doubt that any new expenditure must have a solid business justification. When considering new security initiatives or solutions, it's imperative that their improvements support business-critical decision-making. Further, if your organization is going to leverage a new AI tool (or any new solution or approach), there has to be a way to confirm that the new method clearly outperforms the old.

Typically, you will need to clearly demonstrate these performance improvements and deliver reports to several different stakeholders, each of whom looks at different metrics based on their role. Below are some guidelines to consider when establishing or reassessing performance metrics for an AI solution implemented to bolster your organization's security.

Establish Realistic Metrics

You may already have an idea of which metrics you'd like to evaluate. If not, now is a good time to consider them. Metrics need to be relevant, timely and trackable. Always establish a baseline before deploying your new AI solution so you can compare your SOC's performance before and after implementation and track improvement at regular intervals as the AI learns. Obtaining these figures should be relatively simple and not overly reliant on manual processes, which are time-consuming and prone to error.
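To make the before-and-after comparison concrete, here is a minimal sketch of how baseline and post-implementation figures could be compared over time. The metric names and values are purely illustrative assumptions, not figures from any report:

```python
# Hypothetical baseline vs. post-implementation SOC metrics (illustrative values).
baseline = {"mttd_hours": 48.0, "mttr_hours": 72.0, "false_positive_rate": 0.30}
with_ai = {"mttd_hours": 12.0, "mttr_hours": 24.0, "false_positive_rate": 0.12}

def improvement(before: float, after: float) -> float:
    """Percent reduction relative to the baseline (positive = improvement)."""
    return (before - after) / before * 100

for metric in baseline:
    print(f"{metric}: {improvement(baseline[metric], with_ai[metric]):.1f}% reduction")
```

Re-running the same calculation at regular intervals yields the trend line you can later present to stakeholders.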

Define Success for Different Stakeholders

Metrics presented to the board and C-Suite are usually different from metrics regularly needed by the SOC analyst team. Though chief information security officers (CISOs) are typically interested in bottom-line numbers, SOC analysts typically look at metrics on a more granular level.

For example, security analysts focus on the security posture of the organization and look at granular figures such as the number of alerts the AI generates, its average time to investigate incidents, the percentage of incidents it correctly escalates to upper-tier analysts and its false-positive rate. Senior executives, such as CISOs, CEOs and board members, are more interested in outcome-centric metrics like dwell time, mean time to detect (MTTD), mean time to respond/remediate (MTTR) and what a security breach could potentially cost the organization. Be sure to have a plan for distilling these high-level insights from the in-the-weeds figures.

Don’t Reinvent the Wheel

There are some metrics that have already been established and are being used widely in cybersecurity. Leveraging these existing metrics gives you useful benchmarks, guidelines and trends that are well-known across your industry.

There are many noteworthy publications and reports that may be useful in this way, such as the latest Cost of a Data Breach report, IBM X-Force Threat Intelligence Index, ISACA State of Cybersecurity report and many more that share valuable information on current challenges, security breach costs, trends, recommendations and more. Some of these key metrics are explained below.

Cost of a Data Breach

According to the 2019 Cost of a Data Breach Report, the average cost of a data breach was $8.19 million in the U.S. and $3.9 million globally. This is the single most important metric that senior executives track for their organizations. They can set a benchmark against the U.S. or global figure and then implement initiatives to insulate their organization from these costs. Several factors contribute to a data breach's direct costs (e.g., fines and settlements) and indirect costs (e.g., reputational damage).

Dwell Time

Another important metric that senior executives use is dwell time, the amount of time a cyberattacker has access to the environment. Unsurprisingly, the more quickly a breach is spotted and plugged, the lower the potential costs. Dwell time is actually the sum of two important metrics: MTTD and MTTR. These are explained below.

  • Mean Time to Detect (MTTD): The time it takes, on average, to detect a security incident from the time the network was compromised to the time it was detected.
  • Mean Time to Respond/Remediate (MTTR): The time it takes, on average, to respond to or remediate a breach from the time it was detected.
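The relationship between these figures can be sketched in a few lines of code. The incident records, field names and timestamps below are hypothetical examples, not real data:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records (all timestamps are illustrative).
incidents = [
    {"compromised": datetime(2020, 1, 1, 8, 0),
     "detected": datetime(2020, 1, 3, 8, 0),
     "remediated": datetime(2020, 1, 4, 20, 0)},
    {"compromised": datetime(2020, 2, 10, 9, 0),
     "detected": datetime(2020, 2, 11, 9, 0),
     "remediated": datetime(2020, 2, 11, 21, 0)},
]

def hours(delta):
    """Convert a timedelta to hours."""
    return delta.total_seconds() / 3600

# MTTD: average time from compromise to detection.
mttd = mean(hours(i["detected"] - i["compromised"]) for i in incidents)
# MTTR: average time from detection to response/remediation.
mttr = mean(hours(i["remediated"] - i["detected"]) for i in incidents)
# Dwell time, as defined above, is the sum of the two.
dwell_time = mttd + mttr

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h, dwell time: {dwell_time:.1f} h")
```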

Other metrics to define include the impact on SOC analyst productivity after AI implementation, the total cost of configuration and ongoing management, and any outsourcing fees directly incurred by AI installation and maintenance, to name a few.

At the End of the Day

Security professionals need to prove the value that any new tool brings and demonstrate the revenue gained or losses prevented by their decisions. This means clearly demonstrating the benefits that a new, security-focused AI solution delivers to the SOC and the company at large, and quantifying the cost savings.

To read more about how one organization calculated the ROI of their newly implemented AI solution, read The Total Economic Impact (TEI) of IBM QRadar Advisor With Watson.
