A 2005 Slashdot post recently highlighted on “This Day on Slashdot” discussed a Microsoft executive who allegedly said, “Linux security is a myth.” Nick McGrath was Microsoft United Kingdom’s head of platform strategy, and he actually said something a little different. The article quotes him as saying, “There are a lot of myths out there. … Another is that there are no viruses for Linux.”
Obviously, Linux wasn’t immune to viruses and vulnerabilities then, and it is not immune to them now; neither are Windows, Android, iOS and Mac OS. All have defects, and some defects expose security vulnerabilities. The original article was hosted on the Vnunet.com website, which no longer responds, but the Internet Archive’s Wayback Machine maintains a copy.
The Source Code Dilemma
Bugs remain a fact of life in the software industry. An ongoing industry disagreement revolves around third-party review of source code. The open source camp argues that openness encourages more people to review the code, finding more bugs and the security vulnerabilities some of them expose. The proprietary camp replies that having more people looking makes little difference if most of them can’t understand the code, much less identify defects in it.
Both sides have reasonable points. Learning to read source code, and to understand the surrounding system well enough to spot defects, presents a large hurdle. In a commercial development environment, the team members who can do it always have new tasks, so they rarely have time to spend on existing, “working” code. Security issues don’t often interfere with the operation of the system until an attack, and sometimes not even then. The system appears to work, so even interested and qualified third parties or team members aren’t likely to spend much time hunting for problems that may not exist. Even when everyone can access the source code, few will unless they have some incentive, such as a bug that prevents them from using the software as they wish.
In both open and proprietary projects, bugs can live for a long time — even ones that expose vulnerabilities. The examples that follow come mostly from the open source world, mainly because everyone can see the different versions of the source and compare them. With proprietary systems, we usually have to infer a bug’s age by testing different versions of the target when they are available.
Widely publicized disclosures from just the past year supply three examples of long-lived vulnerabilities. The Ghost vulnerability (CVE-2015-0235) sat unnoticed in the code for about 13 years. Shellshock (CVE-2014-6271 and others) set the record at 25 years — that vulnerability had a silver anniversary! On the proprietary side, X-Force Research Manager Robert Freeman discovered a Windows vulnerability present since Windows 95 (CVE-2014-6332), with a longevity somewhere between Ghost’s and Shellshock’s.
Ghost is also unusual in another way. A project developer noticed the defect and fixed it about a year and a half before the vulnerability was disclosed. However, no one recognized the security implications of the bug, so the fix wasn’t widely deployed until Ghost was publicly disclosed.
A Subtle Example
The Ghost vulnerability provides a good example of how subtle these defects can be. The logic where the defect occurred allocates a buffer, does some other initialization, then starts filling the buffer. The defect stemmed from a mismatch between the size allocated and the way the code then used the buffer. The flawed statement, formatted for readability, read as follows:
size_needed = sizeof (*host_addr) + sizeof (*h_addr_ptrs) + strlen (name) + 1;
The corrected statement is:
size_needed = sizeof (*host_addr) + sizeof (*h_addr_ptrs) + sizeof (*h_alias_ptr) + strlen (name) + 1;
That is a fairly subtle difference, especially when the allocation and the use of the buffer are separated a bit. The miscalculation produced a buffer slightly too short, so in some cases the code writes past its end, overwriting memory. The overrun is sizeof (*h_alias_ptr) bytes: 4 bytes on 32-bit systems and 8 bytes on 64-bit systems. In the right circumstances, that 4-byte to 8-byte overflow lets the attacker achieve remote control of an application process. If the application runs as root, or the attacker can escalate privileges, the attacker can “own” the entire system.
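To make the pattern concrete, here is a deliberately buggy, simplified sketch of the same mistake. This is not glibc’s code; the names are invented to echo the statement above, and the three pointer-sized slots merely stand in for host_addr, h_addr_ptrs and h_alias_ptr.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *name = "ghost.example.com";

    /* Flawed calculation: counts two pointer-sized slots plus the name. */
    size_t size_needed = sizeof(char *) + sizeof(char *) + strlen(name) + 1;
    char *buffer = malloc(size_needed);
    if (buffer == NULL)
        return 1;

    /* The filling code, however, lays out THREE pointer-sized slots
       before the name, so the final copy lands past the end of the
       allocation by sizeof(char *) bytes. */
    char *p = buffer;
    p += sizeof(char *);                 /* "host_addr" slot */
    p += sizeof(char *);                 /* "h_addr_ptrs" slot */
    p += sizeof(char *);                 /* "h_alias_ptr" slot: missing from size_needed */
    memcpy(p, name, strlen(name) + 1);   /* heap buffer overflow */

    printf("allocated %zu bytes, wrote through byte %zu\n",
           size_needed, (size_t)(p - buffer) + strlen(name) + 1);
    free(buffer);
    return 0;
}

Compiled normally, the program appears to work — exactly the trap described above. Compiling with -fsanitize=address makes the overrun visible immediately.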
What Can You Do to Find Vulnerabilities?
Obviously, organizations must continue doing the things they do now to protect their systems, such as keeping up with security intelligence, deploying endpoint and network defenses, maintaining an accurate systems and services inventory and deploying patches and updates. If your organization relies on open source projects, it can influence them in a variety of ways, from donating money to donating talent or resources.
An obvious gap in the industry concerns training for the entire development staff. Until very recently, few software development or computer science curricula spent time on security, so most developers lack knowledge of the field. The security issues of the past few years have led to greater availability of specialized training and changes to college curricula. Taking a “train the trainer” approach, a small investment in formal training can quickly spread to your entire staff.
The training should cover secure development techniques as well as the Secure Software Development Life Cycle (SDLC). Start planning a Secure SDLC as soon as your key trainers are educated so they can tailor internal training to the organization’s situation. Training should also cover common exploitation techniques and the appropriate defenses. SQL injection still provides the attack vector in many of the breaches disclosed each year. After all this time, you would expect software to be impervious to SQL injection attacks; it persists partly because many developers don’t understand the risks or how the exploitation works. A secure developer needs appropriate levels of both paranoia and understanding of the threat.
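For developers new to the risk, a short example makes the defense concrete. The sketch below uses SQLite for illustration; the users table, its columns and the find_user_id helper are hypothetical, but the prepared-statement calls are the standard sqlite3 API.

#include <sqlite3.h>

int find_user_id(sqlite3 *db, const char *username)
{
    /* UNSAFE pattern (shown only as a comment): building the query by
       string concatenation lets input such as  x' OR '1'='1  rewrite
       the SQL itself:

           snprintf(query, sizeof(query),
                    "SELECT id FROM users WHERE name = '%s'", username);
    */

    /* SAFE: a prepared statement with a bound parameter keeps the
       input as data; it can never become SQL syntax. */
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    int id = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}

The design point is the one a trained developer internalizes: user input is bound as a parameter, never pasted into the query text.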
It is important to realize that even though developer training and manual reviews do not eliminate the risk, they do reduce it. Testing automation provides complementary benefits, reducing both the load on your staff and your risk.
Enhanced Quality Assurance Testing
An entire industry has evolved to relative maturity around dynamic testing automation, and plenty of proprietary and open solutions exist. Some organizations have developed quite mature testing regimens, and nearly every shop does some testing. Unfortunately, most testing focuses on the features and functions a user experiences. Testing things such as resistance to Secure Sockets Layer (SSL) man-in-the-middle (MitM) attacks doesn’t often occur to the development staff, but security testing needs greater attention in today’s world. Much security testing requires specialized knowledge, so it would surely benefit from more specialized tools, which have been largely lacking or underutilized in the past.
Will Dormann at the CERT Coordination Center (CERT/CC) produced a very useful tool to automate some of this testing. Dormann wondered how many Android apps properly validate the SSL certificates presented to them; the ones that do not are vulnerable to SSL MitM attacks. At first, he poked at them manually, using an Android virtual machine. Realizing that manually testing the huge number of apps in the Google Play Store would require a lot of time and tedium, Dormann automated the testing. In the process, he created the CERT Tapioca tool for testing Transport Layer Security MitM attacks.
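To see what the vulnerable apps get wrong, here is a minimal sketch of the client-side checks involved, assuming OpenSSL 1.1.0 or later; the function names make_verifying_ctx and require_hostname are illustrative, and error handling is abbreviated.

#include <openssl/ssl.h>

SSL_CTX *make_verifying_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL)
        return NULL;

    /* Reject any peer that does not present a valid certificate chain.
       Vulnerable apps typically pass SSL_VERIFY_NONE here or install a
       verify callback that accepts everything. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    /* Trust the platform's default CA store. */
    if (SSL_CTX_set_default_verify_paths(ctx) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ctx;
}

int require_hostname(SSL *ssl, const char *hostname)
{
    /* Chain validation alone is not enough: the certificate must also
       name the host the client intended to reach. Returns 1 on success. */
    return SSL_set1_host(ssl, hostname);
}

A tool like Tapioca sits between the client and the server with a forged certificate; a client built on checks like these fails the handshake, while one that skips them connects happily and leaks its traffic.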
More to the point, though, Dormann’s efforts illustrate the leverage that dynamic testing automation can provide. In just a few months, his tools processed just over 1 million of the Android apps in the Google Play Store and identified a bit more than 23,000 as vulnerable to MitM attacks because they do not properly validate SSL certificates. A Secure SDLC turns these tests on their head, integrating such tools to cover a range of situations for one target rather than one situation across a range of targets. Instead of testing a million apps for the same behavior, the life cycle uses the tools to push a million use cases through your apps.
Static Software Analysis
An InfoWorld article from 2004 summed things up nicely when it said, “Before computers were pervasively interconnected, a buffer overflow was an inconvenience but not necessarily a disaster. Now, such errors are routinely exploited by attackers.”
Automated static analysis tools have a long history, going back to the original lint tool for Unix and before. Static source code analysis tools gained an early reputation for reporting too many false positives, which held back their adoption. However, sustained effort has improved them considerably: they find real bugs, and their false positive rates are much lower than in the past. Additionally, tools specifically addressing security testing have become more common in the past few years.
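A deliberately buggy toy example shows the kind of defect these tools catch from the source alone:

#include <stdio.h>

int main(void)
{
    int squares[8];

    /* Off-by-one: "<=" writes squares[8], one element past the end of
       the array -- the same family of mistake as Ghost's undersized
       buffer. */
    for (int i = 0; i <= 8; i++)
        squares[i] = i * i;

    printf("%d\n", squares[3]);
    return 0;
}

Running a checker such as cppcheck over this file reports the out-of-bounds write without ever executing the program — exactly the leverage that integrating analysis into the build process exploits.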
Some proprietary and open projects even integrate these tools directly into their build or release processes. A number of these tools exist — some open and some proprietary — for a variety of implementation languages. Some of each type are very good, and some organizations end up using more than one, since each has its own strengths. Many currently focus on Web applications due to their ubiquity, high profile and high exposure. The Open Web Application Security Project website maintains a current list of Web application-scanning tools.
IBM offers a strong competitor in this market, IBM Security AppScan. The AppScan product line offers static analysis of your software’s code and dynamic analysis of its behavior. The tools audit the source code and the running system both for generic code quality and for specific issues that create security vulnerabilities. Unfortunately, time constraints for this article prevented developing a broad-based set of comparison data using AppScan, so I turned instead to a publicly available data set of analyzed open source projects.
Coverity produces a suite of applications that perform static and dynamic security analysis of a program’s source code. It also provides the Coverity Scan static analysis service free to open source projects. For the past several years, Coverity has published public reports detailing the trends it finds with its tools. The Coverity Scan reports from 2011, 2012 and 2013 showed open source and proprietary projects of similar size having very similar defect densities. The proprietary projects don’t seem to gain much from having more experienced “eyes,” and the open projects don’t seem to gain much from having more “eyes”; other differences “level” the outcomes. However, the reports do indicate that projects integrating such analysis directly into their development process show measurable gains in defect density fairly quickly.
Summing Up
We all live somewhat at the mercy of the developers of our infrastructure software. For open projects, users can positively influence the project through participation and donations; Coverity’s provision of its scan service free of charge to approved open source projects is an excellent example. Organizations that develop software themselves have more opportunity to control their own destiny. Improving the security awareness and training of the development staff has become a requirement for organizations producing software that interacts with or over the Internet, and events show that improved security testing has become one as well.
Research Technologist, IBM Security X-Force