Major Vulnerabilities as the New Norm

It’s been such a fun year, with two major, Internet-shaking vulnerabilities: Heartbleed and Shellshock. In years past, either one alone would have been the security and software news story of the year; together, they equate to a level of vulnerability we’ve rarely seen. But the truly frightening part is that both of these vulnerabilities are something we have to get used to, since they represent the new norm going forward. In other words, we should plan on seeing several class-break vulnerabilities of this magnitude every year from now on.

Just as the Internet was not originally built with security in mind, it wasn’t built with what we currently consider good programming practices either. As Robert Graham notes, Shellshock isn’t a blip, it’s a blimp: “a huge nasty spot on the radar warning of big things to come.” His analysis of the code in the Bourne Again Shell found significant amounts of antiquated coding practices: functions that are obsolete or banned, variables in all the wrong places and any number of coding mistakes that anyone who’s received training in software development would (or at least should) never make in the 21st century. That’s not a condemnation of the people who originally wrote bash; it’s simply a sign of how much the way we code has changed in the last 25 years.
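
For readers who want to see the flaw concretely, the original Shellshock bug (CVE-2014-6271) can be demonstrated with a widely circulated test: bash would execute commands smuggled in after a function definition stored in an environment variable. Here is a minimal sketch of that check in Python; the test string is the public one, while the surrounding script is purely illustrative:

```python
import os
import subprocess

# The widely circulated test for the original Shellshock bug (CVE-2014-6271):
# an environment variable that looks like a function definition, with an extra
# command smuggled in after the closing brace. A patched bash imports the
# function and ignores the trailer; an unpatched bash executes it.
env = dict(os.environ, x="() { :;}; echo vulnerable")

result = subprocess.run(
    ["bash", "-c", "echo shellshock test"],
    env=env,
    capture_output=True,
    text=True,
)

if "vulnerable" in result.stdout:
    print("bash executed the smuggled command: VULNERABLE")
else:
    print("bash appears patched against CVE-2014-6271")
```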

Here’s the problem: much of the code that we use on a daily basis, the code we rely on to run the economic engine that is the Internet, was built during that same period, using the same methodology and written to the same standards. The same spaghetti code exists in hundreds of other projects that we’re using, not only on the surface of the Internet but also in the tiny nooks and crannies that no one ever looks in. There are billions of lines of code that need to be reviewed against modern standards and then updated to make them safe and secure to keep using.

Back to Robert Graham, who states that the “many eyes” theory of open source has been conclusively disproven. While we’ve been saying for years that open source projects have multiple people reviewing the code and are therefore as secure as possible, the truth is that only a few people have the time, skills and inclination to review most of the code that makes up modern open source projects. A lot of the old code is hard to understand, so developers are loath to make changes just for the sake of cleaning it up. They’d rather make only the changes they need to implement new features, which means that even when people review code written 25 years ago, it’s simply to make sure the new code works without breaking anything, not to make sure the old code wasn’t broken to begin with.

Why Now?

The good news is that more people are starting to look at all this old code to see where the vulnerabilities are and how they can be fixed. The bad news is that, at the same time, more people are looking at all this old code to see where the vulnerabilities are and how they can be exploited. Spending time on old code is sexier than it’s ever been. In part, that’s because it’s where all the attention from the press and management is right now. But a much larger part is that it holds a treasure trove of hidden vulnerabilities that, when found, will either garner the discoverer the attention of the press and the community or allow attackers unfettered access to vulnerable systems until the security community catches up.

We have many tools for discovering vulnerabilities that didn’t exist even a decade ago. Static code analysis has progressed significantly, automated testing tools are getting better every day, and fuzzing, conceptualized in the late ’80s, has only relatively recently become common practice. Not only have the tools become much better, the skills of the people testing, breaking and fixing software have progressed significantly. We have economies for both black hat and white hat hackers who find vulnerabilities, whether underground forums or bug bounty programs that pay for each vulnerability and exploit found. It’s possible to make a lucrative living as a bug hunter, something inconceivable 25 years ago.
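
To make the fuzzing idea concrete, here is a deliberately minimal sketch. The parse_record target is a made-up toy, not real code from bash or OpenSSL; it trusts a length field the way much old code does, and throwing random bytes at it quickly finds the inputs that break it:

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: a toy length-prefixed parser with a lurking bug."""
    length = data[0]                # declared payload length
    payload = data[1:1 + length]    # trusts the declared length...
    return payload[length - 1]      # ...and indexes past the end when it lies

def fuzz(trials: int = 10_000) -> None:
    """Throw random byte strings at the parser and report the first crash."""
    for i in range(trials):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
        try:
            parse_record(blob)
        except IndexError as exc:
            print(f"trial {i}: input {blob.hex()} crashed the parser: {exc}")
            return
    print("no crash found")

if __name__ == "__main__":
    fuzz()
```

Modern fuzzers like AFL are vastly more sophisticated, using coverage feedback to steer input generation, but the core loop is the same: generate inputs, run the target, record what breaks.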

The code that accompanies vulnerability disclosures is also improving in quality. It’s no longer enough for many people to turn up a vulnerability; it often needs to progress beyond a proof of concept into a fully functional exploit to be taken seriously. As one co-worker pointed out, this will produce exploits similar in quality to those created for the annual Pwn2Own event at CanSecWest: stable, repeatable and, in many cases, implemented as fully functional tools. Especially on the criminal side of the exploit market, this is likely to be the standard going forward, since a prepackaged exploit toolkit pays better than a mere known vulnerability.

In many ways, free and open source software (FOSS) is in a state similar to where Microsoft found itself a dozen years ago. Researchers and attackers had been beating at Microsoft’s software and finding major vulnerabilities; Code Red and Nimda were the most public examples of those vulnerabilities being exploited, and they pushed Microsoft to create its Trustworthy Computing initiative. It took Microsoft many years of hard internal work: re-educating developers on how to code securely, creating new tools and processes for developing more secure code and, fundamentally, making security one of its primary concerns.

The FOSS movement is at the start of that same cycle, and a number of cracks in the foundation of open source will likely have to appear before time and resources are dedicated to securing software as a concern of primary importance. This may even be the event that kills FOSS. The alternative is that the many companies that rely on FOSS start putting resources back into the community and supporting the software movement that makes much of the Internet possible.

We Need to Plan

In the meantime, it’s imperative that businesses and consumers understand we’re just at the beginning of a cycle of vulnerabilities being found in the software we rely on every day. We have to plan on vulnerabilities of the magnitude of Shellshock and Heartbleed becoming common events, perhaps as often as once or more a quarter for the foreseeable future. Rather than relying on the tactics of fire drills and patching, businesses need to start planning for a series of unknown and unknowable vulnerabilities in their networks, planning that goes beyond the incident response plans most businesses currently have in place.

It’s time to re-examine the links the security team has to other incident response teams within the corporation. It’s not enough to know which department is responsible for each type of system; to deal with repeated vulnerabilities, we have to be able to find the specific administrators responsible for specific systems and know how to communicate with them. We often think we have these contacts, but experience is a harsh teacher when a new vulnerability is disclosed and it takes days, or even weeks, to hunt down the people currently responsible for a system because no one has updated the administrators list in months or even years. It’s better to test these lines of communication now, while the stress is still relatively low, than to wait until communication becomes a crucial issue.

Second, the technical tools to detect compromises from an unknown vulnerability have to be in place and functioning. Each organization has its own way of monitoring for suspicious activity, whether it’s baselines on CPU usage and network traffic or a series of honeypots that alert teams when suspicious activity is happening. While few organizations, if any, have the resources to do full packet capture for more than a few days’ worth of traffic, sFlow or NetFlow monitoring can help detect suspicious traffic caused by a vulnerability that’s being exploited but might not be public knowledge yet.
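
As a sketch of the baseline idea, the following is illustrative rather than tied to any particular flow collector: compare each interval’s traffic volume against a rolling mean of recent intervals and flag large deviations. The flag_anomalies function and the sample numbers are hypothetical:

```python
import statistics

def flag_anomalies(byte_counts, window=24, threshold=3.0):
    """Flag intervals whose volume deviates sharply from a rolling baseline.

    byte_counts: per-interval byte totals, e.g. summed from NetFlow/sFlow records.
    window: number of prior intervals that form the baseline.
    threshold: deviation, in standard deviations, that counts as suspicious.
    """
    alerts = []
    for i in range(window, len(byte_counts)):
        baseline = byte_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # flat traffic: avoid divide by zero
        if abs(byte_counts[i] - mean) / stdev > threshold:
            alerts.append((i, byte_counts[i]))
    return alerts

# Twenty-four quiet intervals, then the kind of spike bulk exfiltration might cause.
traffic = [1_000] * 24 + [50_000]
print(flag_anomalies(traffic))  # -> [(24, 50000)]
```

Real deployments layer far more context on top (per-host baselines, time-of-day patterns, protocol breakdowns), but even a crude volume baseline can surface exploitation of a vulnerability no one has named yet.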

Businesses and security teams need to brace themselves for the impact of this coming torrent of new vulnerabilities. Management at all levels needs to understand that it’s not going to be business as usual for some time to come: more resources, more downtime and more disruptions are coming because of the vulnerabilities being discovered. Warning people now to expect further disruptions can reduce the stress the business as a whole experiences and smooth communication with the other teams in the enterprise who have to respond to such incidents. Knowing there’s more to come might not make it easier for them to accept your help now, but it might make them less likely to groan, “Another vulnerability?”

Security teams also need to learn to rotate incident responders out of high-stress situations. Shellshock and Heartbleed have taught us that it takes a lot of time and energy to respond to events of this magnitude. The stress these events put on individuals and teams has to be planned for and managed well; otherwise, incident responders will burn out, react less effectively to the next emergency and leave no one trained to take their place at the moment when everyone absolutely needs to be at their best. Teach more people to handle events and to take over in case the subject matter expert is on vacation, out sick or simply left the company two weeks earlier.

Conclusion

Incidents like Heartbleed and Shellshock are going to be more common in the near future, though hopefully this level of vulnerability discovery will last for only a limited time. The next vulnerability could come from a diverse range of software, whether it’s DNS, SSL, Apache, Nginx or some obscure piece of code that we use almost daily but rarely think of as a threat. The time to create plans for dealing with these vulnerabilities is now, before it’s an emergency. Decide who’s going to take charge, how communication channels will be used, how patches will be deployed and how to rotate people into and out of the fire. But also understand how to deal with situations where there may be no patch and no workaround, and the only option may be to abandon software that is terminally broken.
