There are ongoing discussions about vulnerability disclosures: what is right, what is responsible and who has an interest in securing the Internet from the evils of bad coding or software design. Much of this is a good discussion, while some of it is a rehash of old arguments. The last seminal work on disclosures came from the National Infrastructure Advisory Council (NIAC), which published the “Vulnerability Disclosure Guidelines” back in 2004. I had the opportunity to contribute to those efforts. However, a lot has happened in IT over the past decade, so this discussion is worth revisiting and, frankly, overdue.

Bilateral and Responsible Vulnerability Disclosure

What have we learned over time? What is out of date? These questions are now part of the discussion. I can say that we have moved from a responsible disclosure mindset to a coordinated disclosure mindset, but we have not yet documented such an approach. The moniker “responsible” seemingly implies that only one method is responsible. If everyone else takes a different approach, does that make them irresponsible? I should say not. As we know in IT, there are many paths one can take to solve an issue, and vulnerability disclosure is not wholly different.

So what have we learned over the past decade to change that? Essentially, one must coordinate a vulnerability disclosure with others. In other words, there is no single correct path for all to follow, especially not one applied rigidly. We are all situation-dependent, which brings us to the bilateral issue. In the NIAC approach, we essentially looked at a situation as involving two parties. If one party did not know the other or there was a lack of trust, we invoked a coordinator, such as the Computer Emergency Response Team Coordination Center (CERT/CC) or a Computer Security Incident Response Team (CSIRT), to guide us through the process. Occasionally, it would involve two or three vendors and a researcher, with the intermediary acting as the trusted communicator and coordinator. It worked for the most part: the coordinator had the inside line on the ground truth and knew the parties involved.

What Changed?

First, more parties now have to fix vulnerable conditions. Second, we have built and now use more complex applications. So far, there has been no real change, so there’s no real problem, right? But then some other things came along. We have applications that run entirely on one platform and are delivered centrally; everything under that provider’s control is updated almost simultaneously. However, we also have another set of conditions. Enterprises now take major applications and add their own code on top. In other words, there are major differences in what is deployed from one organization to another, and we haven’t even touched on the international aspect. Furthermore, we now embed third-party software: all those special calls and other interfaces that allow communication across applications and platforms. We have seen quite a few large-scale issues with this problem over the past year.

This brings us back to the recent set of discussions about disclosures. We now have multiparty, multifaceted coordination needs. These are cross-industry requirements, which means we now need to consider phasing our disclosures. This requires us to let the genie out of the bottle and reconsider our approach in a more organized manner. No longer can a lone researcher jump out and save the Internet from itself; its complexity is beyond that stage. A researcher may understand the bug, but the system of systems and its interactions require a broader group effort.

By the way, “organized” does not refer to government-run and coordinated efforts, although governments certainly have a role to play, just not a central one. This is a group effort that requires thought-out discussions and consideration. I applaud the efforts of all who are part of the discussion, especially the Forum of Incident Response and Security Teams (FIRST) and the Industry Consortium for Advancement of Security on the Internet (ICASI) for lending a hand to convene it. How far we get depends on how much effort we put in. Keep in mind, however, that a great deal of business now crosses the Internet, and we need to consider that while we protect everyone.
