March 21, 2016 By Koen Van Impe 5 min read

Introduction

The Google Security Blog published a post in mid-February 2016 on a critical issue found in glibc getaddrinfo (CVE-2015-7547). The mention of a vulnerability in glibc should ring a bell: One year ago, everyone put on their patching gloves when a critical problem was found in gethostbyname, also part of the glibc library. This one was dubbed GHOST (CVE-2015-0235).

Is this new GHOST 2.0 the same as the older GHOST bug or something different? Do you need to care if you previously patched the old GHOST problem? You can find answers below.

How Different Are the Problems?

Both vulnerabilities are located in the same library — glibc — and are found in a similar function. GHOST was found in the gethostbyname() function, whereas the new GHOST 2.0 is found in the getaddrinfo() function.

Why are these functions even there? A Linux system has a client-side resolver that can be used by other software to perform DNS lookups. Because of this, software packages do not need to include their own resolver code; they can use the one that is provided by the base system.

The Problem With Getaddrinfo

One of the DNS functions that can be used is the getaddrinfo() library function. It will return information on a particular hostname, such as its IP address.
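
On Linux, Python's socket.getaddrinfo() is a thin wrapper around this glibc call, so it is a convenient way to see what the function returns. A minimal sketch (the hostname and port are arbitrary examples):

```python
import socket

# socket.getaddrinfo() wraps the C library's getaddrinfo() on Linux.
# It returns one tuple per (family, type, protocol) combination, each
# ending with the resolved socket address.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "localhost", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)   # e.g. AF_INET ('127.0.0.1', 80)
```

Any program that resolves hostnames this way ends up inside the system resolver, which is why a flaw here has such a wide reach.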

The vulnerability, a stack-based buffer overflow, is located in this getaddrinfo() function. This means the software writes data beyond the end of the buffer that was originally allocated for it. Because this data is supplied by an untrusted source, it can potentially crash the machine or execute attacker-supplied code that was sent to the buffer.

In this case, the vulnerability for CVE-2015-7547 is caused by this sequence of events:

  1. Glibc reserves a 2,048-byte buffer for DNS answers.
  2. A first query is sent, and the attacker's first response fills the entire buffer so that exactly 0 bytes are left.
  3. A new, larger buffer (64 KB) is allocated, but due to a bug, the old buffer is reused instead of the new one.
  4. The second response is deliberately malformed so that it triggers another query.
  5. The third response holds the exploit code, which can be up to 64 KB in total. Note that the first 2 bytes of this response need to form a valid DNS response.

The full details are available via the glibc mailing list notice.
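
The sequence above can be modeled in a few lines. This is a deliberately simplified sketch of the buffer mismanagement, not glibc's actual code; only the buffer sizes are taken from the advisory:

```python
STACK_BUF = 2048        # bytes glibc reserves on the stack for DNS answers
HEAP_BUF = 64 * 1024    # the larger buffer allocated when answers don't fit

def overflow_bytes(first_len, exploit_len):
    """Bytes the final response writes past the stack buffer.

    Models the flaw: after the first response consumes the stack buffer,
    a 64 KB heap buffer is allocated, but the buggy code keeps writing
    into the old, smaller buffer.
    """
    remaining = STACK_BUF - first_len   # 0 when the buffer is filled exactly
    heap_buffer = bytearray(HEAP_BUF)   # allocated, but never used: the bug
    return max(0, exploit_len - remaining)

# A first response of exactly 2,048 bytes leaves no room, so all
# 1,024 bytes of a follow-up response land outside the buffer.
print(overflow_bytes(2048, 1024))   # → 1024
```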

For exploitation, the attacker needs to send three responses, and only the third can contain the exploit code. Having 64 KB of space for exploit code is a lot of room, but do take into account that the attacker also has to bypass other protection mechanisms such as address space layout randomization (ASLR).

GHOST Gethostbyname

The GHOST vulnerability in gethostbyname() was also related to DNS lookups and buffer overflows. This vulnerability, however, was caused by a feature in gethostbyname() that helps avoid unnecessary DNS lookups if the provided value to the function was already an IPv4 or IPv6 address.
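
That shortcut can be observed from Python as well: socket.gethostbyname() goes through the underlying C resolver, and when it is handed something that is already an IPv4 address, the value comes straight back without any DNS query being performed. A sketch (the loopback address is an arbitrary example):

```python
import socket

# When the argument is already an IPv4 address, the resolver takes a
# shortcut and returns it directly -- no DNS query is performed.
# It was the parsing inside this shortcut that GHOST (CVE-2015-0235) abused.
print(socket.gethostbyname("127.0.0.1"))   # → 127.0.0.1
```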

Who Is Affected by the Vulnerability?

DNS Is Everywhere!

You might think this isn’t such a big deal if you don’t do DNS queries; you wouldn’t be affected, right?

This is not entirely true. It's fairly easy to force a system into doing a DNS query. For example, queries may be triggered by:

  • A user surfing a website that includes a resource (e.g., an image or a script file) on another hosted system;
  • A corporate central proxy that verifies content (including URLs) that users want to visit;
  • Web frameworks that use external resources;
  • System administrators using SSH where reverse DNS lookups are performed on each login;
  • A corporate antispam service that verifies the source of an email; and
  • Everything that uses DNS requests.
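
To illustrate the SSH case above: a reverse lookup maps an address back to a name through the same resolver library. A minimal sketch using Python's socket.getnameinfo() (the loopback address and port 22 are arbitrary examples):

```python
import socket

# getnameinfo() performs a reverse lookup: address -> hostname.
# An SSH daemon doing this on every login is one more code path that
# ends up inside the system resolver.
host, service = socket.getnameinfo(("127.0.0.1", 22), 0)
print(host, service)
```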

Systems

Basically, all systems, both client and server, that use glibc (i.e., most Linux systems) are affected. Note that some software packages bundle their own copy of the library, but this is not best practice because it defeats the advantage of having one centrally patched library.

No system credentials are needed to exploit this vulnerability; exploitation happens via a local or remote network connection.

Vulnerable Glibc

The issue affects all versions of glibc since 2.9. If you are running an even older version, you should update anyway.

You can verify the glibc version with the command:

ldd --version

When updating, you should only rely on the packages provided by your Linux vendor.
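
A quick way to script the version check is to compare the reported number against the affected range. This is a sketch only; the upstream fix landed in glibc 2.23, but distributions backport fixes without bumping the version number, so your vendor's security advisory, not the number alone, is authoritative:

```python
import platform

def glibc_in_affected_range(version, first=(2, 9), fixed=(2, 23)):
    """True if a glibc version string falls in [2.9, 2.23).

    Version alone is not conclusive: vendors backport the fix into
    older version numbers, so always check your distribution's advisory.
    """
    parts = tuple(int(p) for p in version.split(".")[:2])
    return first <= parts < fixed

# platform.libc_ver() reports the glibc version on most Linux systems.
libc, version = platform.libc_ver()
if libc == "glibc":
    print(version, "in affected range:", glibc_in_affected_range(version))
```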

Recommendation

There is only one valid recommendation: Patch your systems! The best way to accomplish this is by having good patch management procedures and an up-to-date view of your infrastructure. Ideally, after patching, you also restart the services that rely on the vulnerable glibc code so that they pick up the fixed library.

It is a good practice to make sure that all your systems use a specific central resolver and that you block all other outgoing DNS traffic that is not passing through this resolver. Doing so has two extra advantages:

  • You have a supplemental set of information giving you insight on what is going on in your network. You can use this information to set up alerts when DNS queries match information that you gathered previously in your threat sharing solution.
  • A central DNS server allows you to implement DNS blacklists.

Dnsmasq can also be used to limit the response size accepted by local DNS servers.
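
For example, dnsmasq's edns-packet-max option caps the advertised UDP response size. A sketch of a dnsmasq.conf fragment; the value 1024 is the limit commonly suggested at the time (well under the 2,048-byte stack buffer), but note this reduces rather than eliminates exposure, so it is a stopgap, not a substitute for patching:

```
# /etc/dnsmasq.conf -- limit the advertised EDNS0 UDP packet size so
# oversized answers are truncated before they reach local clients
edns-packet-max=1024
```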

Patch Management

If you only manage a couple of systems, patching can be a swift process. However, if you manage a large set of machines, then you have to rely on decent patch management procedures. Patch management without knowing exactly what services and servers you are running makes no sense. You will also have to rely on a good inventory of your infrastructure.

Patch management and inventory management go hand in hand. If you have no clear view on how your infrastructure is organized, then it becomes very difficult to apply the patches to all your machines. An attacker only needs one entry to your environment. Overlooking that single system that no one knows about could be just the access point an attacker needs to wreak havoc on your systems.

The Process

Patching starts with setting priorities. Obviously, you can’t patch everything at once. Define what is important in your environment first. This is not only a technical decision, but also a business decision.

A good patch management strategy also incorporates test environments where patches are validated before they are applied in production. Your customers would be very unhappy if you had to explain to them that a service is down because you applied an untested patch.

This is especially true for patching glibc. It’s an essential, central piece of a Linux system. A failed glibc patch on a production system will be difficult to recover from.

After applying the patch, it is necessary to verify that the update process succeeded. If you assume that a system is patched and the process failed in some way, the situation gets worse. You assume that everything is fine, whereas, from an attacker’s point of view, the game is still on.
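
One way to verify this on Linux is to look for processes that still have a deleted libc mapped: they were started before the update and never restarted, so they are still running the old code. A sketch (it reads /proc, so it only sees processes you have permission to inspect):

```python
import glob
import re

def procs_using_deleted_libc():
    """PIDs that still map an old (deleted) libc after an update."""
    stale = []
    for maps_path in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(maps_path) as f:
                contents = f.read()
        except OSError:
            continue   # process exited, or permission denied
        # A mapping like ".../libc.so.6 (deleted)" means the process
        # still runs the pre-update library from memory.
        if re.search(r"libc[-.].*\(deleted\)", contents):
            stale.append(maps_path.split("/")[2])
    return stale

print(procs_using_deleted_libc())   # PIDs that still need a restart
```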

While You Patch

Applying the patches is also a good opportunity to check that the systems still have the correct (and centrally registered) configuration.

If you inventoried your systems previously for patching the GHOST vulnerability, you can now reuse your notes to go through the same process again. Patching your systems for CVE-2015-7547 requires you to review the same systems that you handled one year ago. Things to look for include:

  • What systems use glibc?
  • Is there software in use that includes its own version of glibc?

When you deployed the patches last year, you probably encountered one or more issues. If you updated your documentation and adapted the procedures accordingly, then the patching process will now be a lot easier.

This time around, take good notes and do a recap of the lessons learned afterward. It is likely that similar vulnerabilities will be found in glibc in the future, requiring you to go through the whole update process again.

You could also use a brute-force approach by scanning the network and then testing every system with exploit code. For example, for GHOST, there is a Metasploit module that you can use to test if a system is exploitable. This is a dangerous approach and certainly not advisable on production networks since an exploit can sometimes fail and make your system unavailable.

Conclusion

This vulnerability probably won't be the last one found in glibc. Because of the widespread use of the library, quickly patching every system can be a daunting task. You can ease that task both now and in the future and reduce the required effort by having good, tested patching procedures.

This particular vulnerability is serious, but exploitation is not that easy. Patching your systems should take place as soon as possible, but do make sure that you include this step in a tested patch management procedure.
