In a previous blog post, I covered some of the challenges encountered by security operations centers (SOCs), including the cybersecurity skills shortage, unaddressed security risks and long dwell times, and how leveraging artificial intelligence (AI) can help alleviate them. According to ISACA’s State of Cybersecurity report, 78 percent of respondents expect the demand for technical cybersecurity roles to increase in the future. The report also notes that the effects of the skills shortage are expected to worsen.

This is where AI can step in and help lighten the load considerably.

Justify Your Spend

At a time when budgets and IT spend are under close scrutiny, there is no doubt that any new expenditure must have a solid business justification. When considering a new security initiative or solution, it’s imperative that its improvements support business-critical decision-making. Further, if your organization is going to leverage a new AI tool (or any new solution or approach), there has to be a way to confirm that the new method clearly outperforms the old.

In practice, you will need to clearly demonstrate these performance improvements and deliver reports to several different stakeholders, each of whom looks at different metrics based on their role. Below are some guidelines to consider when establishing or reassessing performance metrics for an AI solution implemented to bolster your organization’s security.

Establish Realistic Metrics

You may already have an idea of what metrics you’d like to evaluate. If not, now is a good time to consider them. Metrics need to be relevant, timely and trackable. Always establish a baseline before implementing your new AI solution so you can compare your SOC’s performance before and after deployment and track improvement at regular intervals as the AI learns. Obtaining these figures should be relatively simple and not overly reliant on manual processes, which can be time-consuming and prone to error.
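As a minimal sketch of what capturing such a baseline could look like, the snippet below computes MTTD, MTTR and a false positive rate from a handful of incident records. The records, field names and figures are purely illustrative assumptions and are not tied to any particular SIEM or ticketing system.

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records; field names and values are hypothetical.
incidents = [
    {"compromised": datetime(2020, 1, 2, 8, 0),
     "detected":    datetime(2020, 1, 9, 14, 0),
     "resolved":    datetime(2020, 1, 12, 9, 0),
     "false_positive": False},
    {"compromised": datetime(2020, 2, 3, 10, 0),
     "detected":    datetime(2020, 2, 5, 16, 0),
     "resolved":    datetime(2020, 2, 6, 11, 0),
     "false_positive": True},
]

def hours(delta):
    return delta.total_seconds() / 3600

def baseline(records):
    """Compute a pre-AI baseline so later runs can be compared like for like."""
    true_positives = [r for r in records if not r["false_positive"]]
    return {
        "mttd_hours": mean(hours(r["detected"] - r["compromised"]) for r in true_positives),
        "mttr_hours": mean(hours(r["resolved"] - r["detected"]) for r in true_positives),
        "false_positive_rate": sum(r["false_positive"] for r in records) / len(records),
    }

print(baseline(incidents))
```

Running the same calculation at regular intervals after the AI tool goes live gives you a like-for-like trend line rather than a one-off comparison.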

Define Success for Different Stakeholders

Metrics presented to the board and C-suite are usually different from the metrics regularly needed by the SOC analyst team. While chief information security officers (CISOs) are typically interested in bottom-line numbers, SOC analysts look at metrics on a more granular level.

For example, security analysts focus on the organization’s security posture and look at metrics such as the number of AI-generated security alerts, the AI’s average time to investigate incidents, the percentage of incidents it correctly escalates to upper-tier analysts and the percentage of false positives. Senior executives, such as CISOs, CEOs and board members, are more interested in outcome-centric metrics like dwell time, mean time to detect (MTTD), mean time to respond/remediate (MTTR) and what a security breach could potentially cost the organization. Be sure to have a plan for distilling these high-level insights from the in-the-weeds figures.
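As an illustration of that distillation, the sketch below rolls a few granular, analyst-level figures up into the outcome-centric numbers executives ask for. Every input value is a hypothetical placeholder, not a figure from any report.

```python
# Analyst-level figures, as they might come out of day-to-day SOC reporting.
analyst_view = {
    "alerts_raised": 1240,
    "alerts_escalated_correctly": 190,
    "alerts_escalated_total": 215,
    "false_positives": 310,
    "avg_investigation_minutes": 42,
}

# Executive-level figures, derived from incident timestamps elsewhere.
executive_view = {
    "mttd_hours": 68.0,  # mean time to detect
    "mttr_hours": 31.0,  # mean time to respond/remediate
}
executive_view["dwell_time_hours"] = executive_view["mttd_hours"] + executive_view["mttr_hours"]

def analyst_summary(v):
    """Condense raw analyst counts into the ratios an SOC lead would track."""
    return {
        "false_positive_rate": v["false_positives"] / v["alerts_raised"],
        "escalation_accuracy": v["alerts_escalated_correctly"] / v["alerts_escalated_total"],
        "avg_investigation_minutes": v["avg_investigation_minutes"],
    }

print("SOC analyst metrics:", analyst_summary(analyst_view))
print("Executive metrics:  ", executive_view)
```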

Don’t Reinvent the Wheel

There are some metrics that have already been established and are being used widely in cybersecurity. Leveraging these existing metrics gives you useful benchmarks, guidelines and trends that are well-known across your industry.

There are many noteworthy publications and reports that may be useful in this way, such as the latest Cost of a Data Breach report, IBM X-Force Threat Intelligence Index, ISACA State of Cybersecurity report and many more that share valuable information on current challenges, security breach costs, trends, recommendations and more. Some of these key metrics are explained below.

Cost of a Data Breach

According to the 2019 Cost of a Data Breach Report, the average cost of a data breach was $8.19 million in the U.S. and $3.9 million globally. This is the single most important metric that senior executives are interested in tracking for their organizations. They can benchmark against the U.S. or global figure and then implement initiatives to insulate their organization from these costs. Several factors contribute to a data breach’s direct costs (e.g., fines and settlements) and indirect costs (e.g., reputational damage).
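As a rough benchmarking sketch, the snippet below tallies an organization’s own direct and indirect cost estimates and compares the total against the averages cited above. The component figures are placeholders an organization would replace with its own estimates.

```python
# Benchmarks taken from the figures cited in this article.
US_AVERAGE = 8.19e6      # 2019 Cost of a Data Breach Report, U.S. average
GLOBAL_AVERAGE = 3.9e6   # 2019 report, global average

# Hypothetical internal estimates, split into direct and indirect components.
estimated_costs = {
    "fines_and_settlements": 1.2e6,   # direct
    "detection_and_response": 0.9e6,  # direct
    "lost_business": 2.5e6,           # indirect
    "reputational_damage": 1.0e6,     # indirect
}

estimated_total = sum(estimated_costs.values())
print(f"Estimated breach cost: ${estimated_total / 1e6:.2f}M")
print(f"vs. U.S. average:      ${US_AVERAGE / 1e6:.2f}M")
print(f"vs. global average:    ${GLOBAL_AVERAGE / 1e6:.2f}M")
```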

Dwell Time

Another important metric that senior executives use is dwell time, the amount of time a cyberattacker has access to the environment. Unsurprisingly, the sooner a breach is detected and contained, the lower the potential costs. Dwell time is effectively the sum of two important metrics, MTTD and MTTR, which are defined below, followed by a short worked example.

  • Mean Time to Detect (MTTD): The average time from the moment the network is compromised to the moment the incident is detected.
  • Mean Time to Respond/Remediate (MTTR): The average time from the moment an incident is detected to the moment it is contained or remediated.
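The worked example below shows this decomposition for a single, hypothetical incident: the time from compromise to detection plus the time from detection to remediation adds up to the dwell time. The timestamps are illustrative only.

```python
from datetime import datetime

# Hypothetical single-incident timeline.
compromised = datetime(2020, 3, 1, 9, 0)    # attacker gains access
detected    = datetime(2020, 3, 4, 15, 0)   # SOC detects the intrusion
remediated  = datetime(2020, 3, 5, 18, 0)   # incident contained and remediated

time_to_detect  = detected - compromised
time_to_respond = remediated - detected
dwell_time      = remediated - compromised

# Dwell time decomposes into the detection and response components.
assert dwell_time == time_to_detect + time_to_respond
print(f"Detect: {time_to_detect}, Respond: {time_to_respond}, Dwell: {dwell_time}")
```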

Other metrics worth defining include the impact on SOC analyst productivity after AI implementation, the total cost of configuration and ongoing management, and any outsourcing fees directly incurred by AI installation and maintenance.
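A back-of-the-envelope sketch of how these cost and savings figures might be tallied into a simple ROI number is shown below; every figure is a hypothetical placeholder, not data from any report.

```python
# Hypothetical annual costs attributable to the AI solution.
annual_costs = {
    "configuration": 60_000,
    "ongoing_management": 120_000,
    "outsourcing_fees": 40_000,
}

# Hypothetical annual savings measured after rollout.
annual_savings = {
    "analyst_hours_reclaimed": 150_000,  # productivity gains
    "avoided_breach_costs": 200_000,     # reduction in expected breach exposure
}

total_cost = sum(annual_costs.values())
total_savings = sum(annual_savings.values())
roi = (total_savings - total_cost) / total_cost

print(f"Annual cost: ${total_cost:,}  Savings: ${total_savings:,}  ROI: {roi:.0%}")
```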

At the End of the Day

Security professionals need to be able to prove the value that any new tool brings and demonstrate the revenue gained or losses prevented by their decisions. This means clearly demonstrating the benefits that the SOC and the company at large derive from implementing a new, security-focused AI solution and quantifying the resulting cost savings.

To learn more about how one organization calculated the ROI of its newly implemented AI solution, read The Total Economic Impact (TEI) of IBM QRadar Advisor With Watson.
