Security leaders aren’t the only ones who seek valuable metrics from accurate instruments. If you remember analog dashboards in cars, you might also remember how unreliable fuel gauges tended to be. As a result, it was impossible to know when you needed to stop for gas.
So when we can’t trust our instruments, how sure can we be of what we think we know? In many ways, organizations today are coming to a similar realization about their security metrics. What are we to do? Junk the car? Cover up the gauge? Replace it at significant cost? Or learn to live with it once we realize it still has some value, but needs to be framed in the right way?
If some of the readings from your dashboard have varying levels of accuracy and lag, you might be able to adjust your interpretations of them to correct for those discrepancies.
Understand Measures, Metrics and Their Value
According to the National Institute of Standards and Technology’s (NIST) “Framework for Improving Critical Infrastructure Cybersecurity,” measures are “quantifiable, observable, objective data supporting metrics” and thus “most closely aligned with technical controls.” Metrics, by contrast, are used to “facilitate decision making and improve performance and accountability” and “create meaning and awareness of organizational security postures by aggregating and correlating measures.”
In other words, underlying measures express technical readings, as well as the proper configuration and effectiveness of your controls. The metrics, which are reported to top leadership, should be grounded in context and relevance to the business and its objectives.
Naturally, greater accuracy yields greater value. But complete context requires a diversity of insights. What if you can’t have both?
Why We Need to Check What We Think We Know
In a 2016 white paper titled “Unified Security Metrics,” Cisco shared its experience as the organization sought to unify its metrics to handle security concerns “much more strategically than reactively.” Since implementing its unified security metrics (USM) approach, the chief information officer (CIO) has been able to develop “an overall picture of the business risk,” which enables security professionals to remediate issues more quickly and better align their strategies with business objectives.
But along the way, Cisco’s experience also taught its leaders the importance of analyzing and documenting the quality and feasibility of measurements to determine whether they are “available, trusted and accessible,” and whether the collection and reporting of such measures could be automated. This helped set the company on a path not only to continuously monitor its posture, but also to continuously improve its metrics.
What Happens When You Use Imperfect Measures in Decision-Making?
Before rolling up security measures into metrics, organizations should conduct a systematic review process like the one described in the Cisco white paper. Look closely at the measurement data collected, the questions those data points help answer, the category of the measurement (people, process, technology, etc.) and attributes such as availability, scalability and, finally, quality.
One example of a process measure to consider is the number of closed and open incidents. Look particularly at whether the root cause and long-term solution of an incident have been identified and tracked to closure. While Cisco reported that this measure is readily available, it is only partially automated, and its quality is deemed only “partly” satisfactory.
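To make that kind of review concrete, here is a minimal sketch in Python. It assumes a simple in-house catalog; the class, field names and ratings below are illustrative, not Cisco’s actual schema. Each measure is recorded alongside the question it answers, its category and the availability, automation and quality attributes discussed above.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """Coarse rating used for both automation and data quality."""
    FULL = "full"
    PARTIAL = "partial"
    NONE = "none"

@dataclass
class Measure:
    name: str          # what is being measured
    question: str      # the question this data point helps answer
    category: str      # people, process or technology
    available: bool    # is the data readily available?
    automation: Level  # how much of collection and reporting is automated
    quality: Level     # how trustworthy is the reading?

# Hypothetical entry modeled on the open/closed incident example above
catalog = [
    Measure(
        name="open_vs_closed_incidents",
        question="Are root causes identified and tracked to closure?",
        category="process",
        available=True,
        automation=Level.PARTIAL,
        quality=Level.PARTIAL,
    ),
]

# Flag anything that needs extra caution before it is rolled up into a metric
for m in catalog:
    if m.automation is not Level.FULL or m.quality is not Level.FULL:
        print(f"Review before reporting: {m.name} "
              f"(automation={m.automation.value}, quality={m.quality.value})")
```

A catalog like this also makes it easy to revisit each measure periodically and record whether its automation or quality has improved since the last review.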
Yet even this imperfect data — neither fully automated nor of high quality — has value to the organization when tracked and reported across time. Taken alongside other metrics, it can still contribute to a clearer picture of the organization’s cybersecurity posture and its handling of incidents.
Build Context to Initiate Conversations With Top Leadership
Although an isolated security measure might be an imperfect reflection of reality, it may still offer enough value to the organization to be useful as one part of a metric that is shared with top leadership and the board. In the previous example, the number of open incidents still under investigation provides an important view into the organization’s ability to identify root causes, which allows the security team to continuously improve its ability to defend against similar attacks in the future.
Because some metrics are of higher quality than others, it is important to document which ones are based on near real-time, rock-solid measurements and which are collected manually and of less-than-perfect quality. The key benefit of cyber risk metrics is to initiate conversations between security leadership and the board. This is the essential springboard for all future security projects.
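As a purely illustrative sketch of that documentation habit (the metric names below are placeholders, not drawn from the Cisco paper or the NIST framework), a reporting script might tag each board-level metric with how it was collected, so that near real-time figures and manually gathered estimates are never presented with the same implied confidence.

```python
# Placeholder metric definitions for illustration only; names are assumptions.
metric_sources = {
    "Incidents with a confirmed root cause": "manual",        # gathered by analysts
    "Endpoints covered by automated patch reporting": "automated",  # near real-time feed
}

for name, source in metric_sources.items():
    caveat = "" if source == "automated" else " (manually collected; lower confidence)"
    print(f"{name}: sourced via {source} collection{caveat}")
```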