You may recall this blog post from March 2020. It highlighted the importance of factoring in clinical, organizational, financial and regulatory impact when determining which Internet of Medical Things (IoMT) security vulnerabilities should be fixed first. Consider this post a part two. Whereas the previous post focused on the fact that IoMT devices are here to stay and that finding and prioritizing vulnerabilities based on impact cannot be overlooked, this post highlights an emerging security challenge.

Interconnected Devices Need Interconnected Risk Measurement

The healthcare system today uses various security technologies for connected devices, many of which assign a risk score to vulnerabilities. The score is meant to help hospital security teams understand and prioritize vulnerabilities that elevate risk. Those technologies, however, use different formulas to calculate the risk score. Furthermore, they are often focused on technical risk rather than the clinical impact on the hospital in terms of patient safety or disruption of a physician’s workflow.

For example, while some scanning tools provide a score based on the Common Vulnerability Scoring System (CVSS), medical device security platforms (MDSPs) monitor what devices are doing, collect data, apply machine learning, build behavioral models and calculate a risk score. Both technologies view risk through a technical lens.
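To make that mismatch concrete, here is a minimal sketch in Python of the same device scored by a scanner on CVSS's 0-10 scale and by a hypothetical MDSP on a 0-100 behavioral scale. The numbers and the simple linear normalization are assumptions for illustration, not any vendor's actual methodology; the point is only that the raw scores cannot be compared until they are put on a common range.

```python
# Illustrative only: two tools scoring the same device on different scales.
# The values and the linear normalization are assumptions, not a real
# scanner's or MDSP's methodology.

def normalize(score: float, scale_max: float) -> float:
    """Map a tool-specific score onto a common 0-1 range."""
    return score / scale_max

scanner_cvss = 7.8        # CVSS base score, reported on a 0-10 scale
mdsp_behavioral = 62.0    # hypothetical MDSP behavioral risk, 0-100 scale

print(normalize(scanner_cvss, 10.0))      # 0.78
print(normalize(mdsp_behavioral, 100.0))  # 0.62
# Side by side, the raw numbers (7.8 vs. 62) suggest the MDSP finding is far
# worse; once normalized, the scanner actually rates the device as riskier.
```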

The U.S. Food and Drug Administration (FDA) also has its own healthcare device classification formula. It focuses on a vulnerability's associated exploit, what an attacker can do with the exploit and the potential harm that could result. Again, these elements are viewed through a technical lens that does not include the clinical impact on the hospital.

Three challenges arise with these scoring technologies. First, they do not consider clinical impact. Second, while scanner scores, MDSP scores and FDA classifications are all important pieces of information for determining risk, it is difficult for hospitals to know which score to use as a blueprint for vulnerability prioritization and remediation. Third, each MDSP and scanning tool uses a different risk-calculation methodology, which is why there is no standard model for prioritizing vulnerabilities across the field.

With a system like this, prioritization is unnecessarily fragmented.

One Recipe for Calculating Risk

Hospitals need one view of risk that merges technical risk scores with the clinical impact that would result if a device were compromised. In other words, throw MDSP, scanning, FDA classification and clinical impact data into a soup pot, add seasoning (enrich the data), and voila. With that recipe, healthcare providers can see which vulnerabilities pose the highest risk to patient safety, so they know where to start with remediation.
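As a rough illustration of that recipe, the sketch below blends normalized scanner, MDSP, FDA classification and clinical impact inputs into a single 0-1 score using a weighted sum. The field names, weights and scores are hypothetical assumptions for the example, not the actual model described later in this post.

```python
# A minimal sketch of the "one recipe" idea. Weights, field names and scores
# are illustrative assumptions, not a real scoring model.

WEIGHTS = {
    "scanner": 0.25,     # normalized CVSS from a scanning tool
    "mdsp": 0.25,        # normalized MDSP behavioral score
    "fda_class": 0.15,   # severity implied by the FDA device classification
    "clinical": 0.35,    # clinical impact: patient safety, workflow disruption
}

def unified_risk(scores: dict) -> float:
    """Weighted blend of the enriched inputs, returned on a 0-1 scale."""
    return sum(weight * scores.get(name, 0.0) for name, weight in WEIGHTS.items())

# One connected infusion pump, already enriched with all four inputs.
infusion_pump = {"scanner": 0.78, "mdsp": 0.62, "fda_class": 0.90, "clinical": 0.95}
print(round(unified_risk(infusion_pump), 2))  # ~0.82 -> high on the remediation list
```

Weighting clinical impact most heavily is itself a policy choice; in practice, a hospital would tune those weights with its security and clinical engineering teams.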

Security teams can also apply this approach beyond IoMT devices. Other connected devices within the healthcare environment, such as workstations, network infrastructure and even coffee makers (anything that connects to the hospital's network), should also be scored and prioritized based on a singular recipe for calculating risk. Scanning tools and MDSPs will assign risk scores to any network-connected device. Those technical scores should be merged with a clinical impact score to determine which vulnerabilities matter most. That uniform way of scoring can help drive the remediation process because it accounts for the clinical workflow context of each endpoint detected across a hospital network.
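Continuing the same illustrative recipe, the sketch below scores a small mixed inventory (an infusion pump, a workstation and a break-room coffee maker) and sorts it for remediation. Again, every device name, weight and number here is an assumption made for the example.

```python
# Illustrative only: one recipe applied to every connected endpoint.
WEIGHTS = {"scanner": 0.25, "mdsp": 0.25, "fda_class": 0.15, "clinical": 0.35}

def unified_risk(scores: dict) -> float:
    """Weighted blend of technical and clinical inputs, on a 0-1 scale."""
    return sum(weight * scores.get(name, 0.0) for name, weight in WEIGHTS.items())

inventory = {
    "infusion_pump":           {"scanner": 0.78, "mdsp": 0.62, "fda_class": 0.90, "clinical": 0.95},
    "nurse_workstation":       {"scanner": 0.85, "mdsp": 0.40, "fda_class": 0.00, "clinical": 0.60},
    "break_room_coffee_maker": {"scanner": 0.90, "mdsp": 0.70, "fda_class": 0.00, "clinical": 0.05},
}

for device in sorted(inventory, key=lambda name: unified_risk(inventory[name]), reverse=True):
    print(f"{device}: {unified_risk(inventory[device]):.2f}")
# infusion_pump (~0.82), nurse_workstation (~0.52), break_room_coffee_maker (~0.42).
# The coffee maker has the worst technical scores, but clinical context pushes
# the infusion pump to the top of the remediation queue.
```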

Learn how X-Force Red, IBM Security's team of hackers, in partnership with The AbedGraham Group, a physician-led global security organization, is working to help hospitals overcome the siloed and incomplete risk scoring they face today. Together, they have developed a solution that merges technical risk scoring data with clinical impact data to identify the vulnerabilities that matter most.
