October 20, 2016 By Larry Loeb

Address space layout randomization (ASLR) is a widely used technique that hinders memory corruption attacks by arranging the address space positions of key data areas at random. It can be found in the security designs of many applications and well-known operating systems.
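As a rough illustration (a simulation, not real loader code, with hypothetical symbol names), conventional ASLR can be thought of as adding one random slide to a module's base address at load time, while every symbol keeps its fixed offset from that base:

```python
import random

def load_module(symbol_offsets, seed=None):
    """Simulate an ASLR-style load: pick a random page-aligned base
    (the slide), then place every symbol at base + fixed offset."""
    rng = random.Random(seed)
    base = rng.randrange(0, 1 << 47) & ~0xFFF  # hypothetical 47-bit user space
    return {name: base + off for name, off in symbol_offsets.items()}

# Fixed link-time offsets within the binary (hypothetical symbols).
offsets = {"main": 0x1130, "helper": 0x1280}

boot1 = load_module(offsets, seed=1)
boot2 = load_module(offsets, seed=2)

# Absolute addresses differ between "boots"...
assert boot1["main"] != boot2["main"]
# ...but the relative layout is identical, so leaking the slide
# reveals every other symbol's location.
assert boot1["helper"] - boot1["main"] == boot2["helper"] - boot2["main"]
```

This single-slide property is exactly what makes the attack described below so damaging: recovering one randomized address recovers them all.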

But according to new research, it seems there is a new ASLR bypass out there.

A Map to an Unknown Destination

Attackers have bypassed ASLR in the past, but research teams from the State University of New York at Binghamton and the University of California, Riverside teamed up to devise a new ASLR bypass method based on what’s going on in the hardware of the central processing unit (CPU) that runs the code.

In their paper, “Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR,” the researchers described their novel side-channel attack, which focuses on the branch target buffer (BTB) of the CPU. The BTB caches the predicted target addresses of recently executed branch instructions, which keeps the CPU’s pipeline full and boosts throughput.

The paper explained how an attacker could create BTB collisions between two user-level processes (or between a kernel and a user process) in a “controlled and robust manner.” The researchers found that “identifying the BTB collisions allows the attacker to determine the exact locations of known branch instructions in the code segment of the kernel or of the victim process, thus disclosing the ASLR offset.”

The whole point is to learn the random offset generated by ASLR. It’s like having a map to an unknown destination that points out the way.

According to SecurityWeek, the researchers tested the attack on a Haswell CPU running a current version of Linux. They seemed confident that the attack would also work on Windows and Android. Not only that, but they felt it would work on a Kernel-based Virtual Machine (KVM), which would open up a new attack vector targeting virtualized computers.

Blocking ASLR Bypass

Some software changes might help here. One option is to switch to finer-grained ASLR that randomizes at the function level rather than only once at startup. This would at least make the attack harder to perform.
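To see why finer granularity helps, compare against the single-slide model: if each function gets its own random displacement, one leaked address no longer reveals the others. A simulation sketch, again with hypothetical symbols:

```python
import random

def load_fine_grained(symbol_offsets, seed=None):
    """Simulate function-level ASLR: each symbol gets its own slide,
    so the link-time distances between functions are destroyed."""
    rng = random.Random(seed)
    return {
        name: (rng.randrange(0, 1 << 47) & ~0xFFF) + off
        for name, off in symbol_offsets.items()
    }

offsets = {"main": 0x1130, "helper": 0x1280}

layout_a = load_fine_grained(offsets, seed=7)
layout_b = load_fine_grained(offsets, seed=8)

# The distance between the two functions changes from load to load,
# so leaking main's address no longer pins down helper's address.
delta_a = layout_a["helper"] - layout_a["main"]
delta_b = layout_b["helper"] - layout_b["main"]
assert delta_a != delta_b
```

An attacker who recovers one branch location would still have to repeat the whole side-channel procedure for every other function of interest.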

The primary mitigation, however, requires hardware changes. It may be necessary, for example, to change the affected addressing mechanism to prevent these collisions in the BTB. It could also help to use separate indexing functions for user- and kernel-level code. Whether the hardware manufacturers consider this serious enough to retool a CPU’s hardware remains to be seen.
