August 12, 2021 By David Bisson 2 min read

There’s something spooky going on. New research from the Ubiquitous System Security Lab, Zhejiang University Security and Privacy Research Group and the University of Michigan found that ‘poltergeist’ (PG) attacks can fool autonomous vehicles in a way that hasn’t been seen before. Here’s what the researchers found about how this attack works.

Vehicles with a self-driving feature rely on computer-based object detection, which classifies what the cameras see and decides what is an obstacle and what is a normal road condition. Autonomous vehicles then act on those decisions on their own. Poltergeist attackers tamper with the classification results.

Bombarding Self-Driving Cars With Acoustic Signals

To be specific, the poltergeist attack affects the stabilization of images detected by a vehicle. In their paper, the researchers noted this isn’t the same as past studies in which people showed the security risks of self-driving cars by targeting the main image sensors, such as complementary metal-oxide semiconductor (CMOS) sensors. Instead, they singled out inertial sensors. These provide an image stabilizer with motion feedback that it can use to reduce blur.

The researchers designed their PG attack to target those inertial sensors with resonant acoustic signals. In doing so, they found that someone could gain control of the stabilizer. From there, the attacker could perform one of three types of attacks:

  • Hiding Attacks: A threat actor could make a detected object, such as the rear of a car, disappear.
  • Creating Attacks: Someone could fool the computer detection systems into detecting an object that isn’t really there.
  • Altering Attacks: An attacker could cause the computer detection systems to classify one object as another.

In testing those attacks, the researchers saw a 100% success rate for hiding attacks against people, cars, trucks, buses, traffic lights and stop signs. The other two attack scenarios varied in success depending on which objects were involved and the extent to which they were targeted.
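The feedback loop the attack abuses can be sketched in a few lines: a stabilizer shifts each frame to cancel the motion its inertial sensor reports, so spoofed oscillating readings make it "correct" a stationary scene and smear it across the exposure. The toy model below (all names, shapes and numbers are illustrative, not taken from the paper) shows how that induced blur erodes the edge signal an object detector relies on:

```python
import numpy as np

def stabilize(frame, gyro_reading):
    """Shift the frame sideways to counteract the motion the gyro reports."""
    return np.roll(frame, int(round(gyro_reading)), axis=1)

def expose(frame, readings):
    """Average the mis-stabilized frames, mimicking blur built up over one exposure."""
    return np.mean([stabilize(frame, r) for r in readings], axis=0)

def edge_energy(img):
    """Toy stand-in for a detector's input signal: total horizontal gradient."""
    return np.abs(np.diff(img, axis=1)).sum()

# Sharp synthetic scene: one bright square (the "object") on a dark background.
scene = np.zeros((32, 32))
scene[12:20, 12:20] = 1.0

# The camera is actually stationary, but acoustic resonance injects
# oscillating false gyro readings, so the stabilizer applies shifts
# that no real motion called for.
spoofed = 6 * np.sin(np.linspace(0, 2 * np.pi, 16))
blurred = expose(scene, spoofed)

# The smeared scene has weaker, lower-contrast edges than the sharp one.
print(edge_energy(scene), edge_energy(blurred))
```

In a real camera the stabilizer moves the lens or sensor rather than shifting pixels, but the effect is the same: a detector tuned to sharp edges sees a washed-out smear, which is the starting point for the hiding, creating and altering attacks described above.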

Researchers Leading Vehicle Hacking

Fooling object detection systems is just one of the types of attacks threat actors could use to prey upon self-driving vehicles. Others include using beams of light and adversarial machine learning to tamper with the vehicles’ decisions and/or performance.

Back in 2018, for instance, a hacker found that a threat actor could embed a custom piece of hardware into a self-driving vehicle. Then, they could use it to control almost any component of the car, including the brakes and speed.

In February 2020, another group of hackers made one type of autonomous vehicle speed up to 85 mph in a 35 mph zone.

Toward Better Cybersecurity in Autonomous Vehicles

The researchers working on the PG problem also offered some solutions. Vehicle makers that offer a self-driving feature should add safeguards, such as a microphone that detects acoustic injection attacks. They can also build adversarial training into their object detection algorithms.

In addition, autonomous vehicle manufacturers should ensure that third-party providers and others along their supply chains follow security best practices. This could keep malicious actors out of the supplier’s network, removing the chance for follow-up attacks.

Self-driving cars may seem like a sign of the future, but keeping threat actors from taking control of them is a problem researchers have been working on for years. This new type of attack is just one example of that.
