For almost 15 years now, companies have been using the Common Vulnerability Scoring System (CVSS) to determine the criticality of security vulnerabilities. Ten is the highest score, meaning the most severe, while zero is the lowest. Over time, the CVSS has become something of a de facto industry standard used by most major vendors as well as the National Vulnerability Database (NVD).

CVSS scores were designed to measure the level of severity of an identified vulnerability in relation to where that vulnerability was found. Many organizations have used the scores to prioritize which vulnerabilities to fix first, which is an essential component of vulnerability management. However, the CVSS was never meant to be used on its own for prioritization, and such use has created many debates within the security community, particularly where those assessing risk require more context about vulnerabilities than a technical score.

In December 2018, the Software Engineering Institute at Carnegie Mellon published a report titled “Towards Improving CVSS.” The authors explain how the CVSS is being misused as a risk score and dive into why the output can be unreliable when vulnerability fixes are prioritized solely on the CVSS. A few excerpts from the report align with existing lines of thought about why the CVSS alone is not enough for vulnerability prioritization.

For example, some organizations use the CVSS as the sole driver of their vulnerability patching priorities, when what they truly need is to work on the bigger risk picture, of which the CVSS is only one component.

To that effect, the report states, “CVSS is designed to identify the technical severity of a vulnerability. What people seem to want to know, instead, is the risk a vulnerability or flaw poses to them, or how quickly they should respond to a vulnerability. If so, then either CVSS needs to change or the community needs a new system.”

For the most part, IBM Security’s team of veteran hackers, X-Force Red, agrees with that conclusion. A new system is needed, and it should incorporate more risk-focused contextual information when prioritizing which vulnerabilities to fix first. However, it’s important to keep in mind that the CVSS was never meant to measure risk to a certain organization; it was meant to measure the severity of the vulnerability. As such, upending the scoring system isn’t the solution. Instead, we need a new method for prioritizing vulnerabilities that incorporates the CVSS and contextual factors specific to each organization’s environment.

The CVSS Specification Document states that the CVSS score is composed of three metric groups: base, temporal and environmental. Most published CVSS scores only report the base metric, which describes characteristics of the vulnerability that are constant over time. The temporal group includes exploit code maturity and available remediations that are independent of a particular environment. Finally, the environmental metric can only be assessed with knowledge of the environment where a vulnerable system resides.
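To make those groups concrete, the sketch below implements the base-score arithmetic published in the CVSS v3.1 specification in Python. It is a minimal illustration only; the example vector is hypothetical, and the official FIRST calculator remains the reference.

```python
import math

# CVSS v3.1 base-metric weights, per the FIRST specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope unchanged
      "C": {"N": 0.85, "L": 0.68, "H": 0.5}}    # Scope changed
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    """Round up to one decimal place (simplified form of the spec's Roundup)."""
    return math.ceil(x * 10) / 10

def base_score(vector):
    """Compute the base score from a vector like 'AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L'."""
    m = dict(part.split(":") for part in vector.split("/"))
    scope_changed = m["S"] == "C"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] \
        * PR["C" if scope_changed else "U"][m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    score = impact + exploitability
    if scope_changed:
        score *= 1.08
    return roundup(min(score, 10))

# Illustrative vector: a low-impact availability issue.
print(base_score("AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L"))  # 5.3
```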

The researcher who assigns a CVSS score to a discovered vulnerability will oftentimes leave the environmental metrics out, or score them as “zero,” because he or she is not familiar with each individual environment. For the environmental score to be taken into account, a security analyst in each affected organization would need to assess his or her own environment and adjust the score. Because many organizations are strapped for resources, time and skill sets, having an analyst who can do this is unlikely.

The three metric groups of the CVSS do not account for the risk posed based on the business value of an asset, nor were they ever supposed to. The CVSS is a severity rating, not a risk score. The environmental score can modify the base score by taking into consideration local mitigation factors and configuration details. It can also adjust the impact to an asset’s confidentiality, integrity and availability (CIA) if the vulnerability were exploited. However, it is still a measure of severity and does not consider the value of the exposed asset to the organization, which is a key risk factor.

For example, consider two different vulnerabilities. One vulnerability has a low impact on the availability of web servers; the other exposes users’ email addresses (low confidentiality). Both could easily have the same CVSS base score of 5.3, but the availability issue may receive an environmental score of 5.8 because it affects a customer service portal and management considers the availability requirement to be high.

Even if the confidentiality issue received a similar adjustment, there is no metric to measure the impact General Data Protection Regulation (GDPR) requirements may add to the risk of exposing customer data, which significantly increases the value of the email addresses. The report backs up this argument, stating: “We have no evidence that CVSS accounts for any of the following: the type of data an information system (typically) processes, the proper operation of the system, the context in which the vulnerable software is used, or the material consequences of an attack.”
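The mechanism behind that environmental adjustment can be sketched from the CVSS v3.1 specification: the Confidentiality, Integrity and Availability Requirement multipliers (Low = 0.5, Medium = 1.0, High = 1.5) re-weight the impact sub-score. The snippet below is illustrative only; the final environmental score depends on every modified metric the analyst supplies.

```python
# CVSS v3.1 security-requirement weights (CR/IR/AR); "X" means Not Defined.
REQ = {"L": 0.5, "M": 1.0, "H": 1.5, "X": 1.0}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def modified_iss(mc, mi, ma, cr="X", ir="X", ar="X"):
    """Modified Impact Sub-Score: base C/I/A weights scaled by the
    analyst-supplied requirement ratings, capped at 0.915 per the spec."""
    return min(1 - (1 - REQ[cr] * CIA[mc])
                 * (1 - REQ[ir] * CIA[mi])
                 * (1 - REQ[ar] * CIA[ma]), 0.915)

# A low availability impact counts for more once the analyst marks the asset's
# Availability Requirement as High (e.g., a customer service portal).
print(modified_iss("N", "N", "L"))           # 0.22, default requirements
print(modified_iss("N", "N", "L", ar="H"))   # 0.33, AR raised to High
```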

The report also highlights that the CVSS does not consider relationships between vulnerabilities that allow an attacker to pivot or escalate privileges, nor security issues that are not strictly defined as vulnerabilities, such as insecure configurations. These all play a role in evaluating risk status and response prioritization.

“In general, severity should only be a part of vulnerability response prioritization,” the report notes. “One might also consider exploit likelihood or whether exploits are publicly available. The Exploit Code Maturity vector (in Temporal Metrics) attempts to address exploit likelihood, but the default value assumes widespread exploitation, which is not realistic.”

The temporal metrics, however, are only designed to lower the base score and are rarely updated, if published at all. Asset value and exploitation are common risk factors and must be considered when prioritizing vulnerabilities.
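That design is visible in the specification's own numbers: every temporal multiplier is at most 1.0, so the temporal score can only hold or reduce the base score. A minimal sketch using the published v3.1 values:

```python
import math

# CVSS v3.1 temporal multipliers; "X" means Not Defined (treated as worst case).
E  = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}  # Exploit Code Maturity
RL = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}  # Remediation Level
RC = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}             # Report Confidence

def temporal_score(base, e="X", rl="X", rc="X"):
    """Temporal score: base score scaled by the three multipliers, rounded up."""
    return math.ceil(base * E[e] * RL[rl] * RC[rc] * 10) / 10

print(temporal_score(9.8))                          # 9.8, defaults assume widespread exploitation
print(temporal_score(9.8, e="P", rl="O", rc="C"))   # 8.8, proof-of-concept only, official fix available
```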

The Subjective Angle of the CVSS

At X-Force Red, we are hired to break into organizations to uncover risky vulnerabilities that criminal attackers may use for their gain. Our team works with a plethora of vulnerabilities on a daily basis, and we do agree that while the CVSS has its role, it should not be the only factor in determining how to prioritize vulnerabilities. Two of the factors that need a more objective metric are the potential for exploitation and the criticality of the asset.

Let’s consider the fact that CVSS scores are assigned by vendors or researchers at a point in time and rarely rescored as circumstances change. For example, the temporal metric, if considered at all, is based on exploitation that is known at the time. As a result, CVSS scores can be more subjective and may not consider critical context around the assets the vulnerability is exposing, or whether criminals are actively weaponizing the vulnerability.

When it comes to using the CVSS to prioritize response to vulnerabilities, our X-Force Red hackers have seen time and time again vulnerabilities with CVSS scores of 10 bumped to the top of the priority list even though compromising the assets they affect would have only minimal impact on the business. High scores were also assigned to vulnerabilities even when criminals were not exploiting them in the wild at the time.

Meanwhile, vulnerabilities with CVSS scores of 5 sit lower on the priority list even though they could expose high-value assets and are being actively weaponized by criminals. As a result, the vulnerability scored as 10 gets fixed first, leaving criminals ample time to exploit the more detrimental issues that scored a mere 5.

Figure 1: This graphic shows that even though the vulnerability MS17-010 has the most correlated exploits (41), it still sits lower on the priority list because it has a lower CVSS score. Meanwhile, the vulnerability at the top of the list has fewer correlated exploits yet has a CVSS score of 10.

Striking a New Balance With Vulnerability Prioritization

At X-Force Red, we believe vulnerabilities should be ranked based on the importance of the exposed asset to the organization and whether the vulnerability is being weaponized by criminals.

To help security professionals prioritize remediation, our team built a proprietary algorithm, which is part of X-Force Red’s Vulnerability Management Services (VMS). This algorithm automatically prioritizes vulnerabilities considering those contextual factors — asset value and whether the vulnerability is being weaponized — in addition to the CVSS score.
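Because that algorithm is proprietary, the sketch below is not it. It is only a minimal illustration, with hypothetical weights and field names, of the principle that a high-value asset and active weaponization should be able to outrank a higher raw CVSS score:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float        # base score, 0-10
    asset_value: int   # business value of the exposed asset, 1 (low) to 5 (critical)
    weaponized: bool   # is the vulnerability being actively exploited in the wild?

def priority(f: Finding) -> float:
    """Hypothetical blended score: CVSS severity scaled by asset value,
    with a large boost for active weaponization. Not X-Force Red's algorithm."""
    score = f.cvss * (f.asset_value / 5)
    if f.weaponized:
        score += 5
    return score

findings = [
    Finding("CVE on low-value lab server", cvss=10.0, asset_value=1, weaponized=False),
    Finding("MS17-010 on customer database host", cvss=8.1, asset_value=5, weaponized=True),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f.name}")
```

The blended number is used only for ranking; any real implementation would tune the weights against the organization's own asset inventory and threat intelligence.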

With X-Force Red Vulnerability Management Services, the chart above would look like this:


Figure 2: Vulnerability prioritization matrix when using X-Force Red’s VMS. Notice the vulnerability with the most correlated exploits — MS17-010 — is at the top even though the CVSS score is lower than others on the list.

Consider the risk equation, which varies depending on whom you ask. While the classic risk calculation is risk = likelihood x impact, some risk experts describe it as:

Risk = threat + vulnerability + asset of value.

Without a threat exploiting a vulnerability, the risk should definitely be scored lower. If the vulnerability doesn’t affect an asset of value, it should not rank highest on the prioritization list until that situation changes.

By considering the impact to the business if the exposed asset were compromised, and whether the vulnerability is being exploited, vulnerabilities can be prioritized based on the actual risk to critical assets, data or business operations. It is this sort of context we would like to bring into the vulnerability factor of the overall risk equation.
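As a rough worked example of the classic calculation applied to the scenario above (all numbers hypothetical):

```python
def risk(likelihood: float, impact: float) -> float:
    """Classic risk calculation: likelihood (0-1) times business impact (0-10)."""
    return likelihood * impact

# CVSS 10 finding, but no known exploitation and a low-value asset.
print(risk(likelihood=0.1, impact=2))   # 0.2
# CVSS 5 finding that is actively weaponized against a high-value asset.
print(risk(likelihood=0.9, impact=9))   # 8.1
```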

Learn more about X-Force Red Vulnerability Management Services
