August 12, 2021 By David Bisson 2 min read

There’s something spooky going on. New research from the Ubiquitous System Security Lab at Zhejiang University, the Zhejiang University Security and Privacy Research Group and the University of Michigan found that ‘poltergeist’ (PG) attacks can fool autonomous vehicles in a way that hasn’t been seen before. Take a look at what the researchers found about how this attack works.

Vehicles with a self-driving feature rely on computer-enabled, object-based detection. This classifies objects, deciding what is an obstacle and what is a normal road condition. Using those decisions, autonomous vehicles make moves on their own. Poltergeist attackers tamper with those classification results.

Bombarding Self-Driving Cars With Acoustic Signals

To be specific, the poltergeist attack affects the stabilization of images detected by a vehicle. In their paper, the researchers noted this isn’t the same as past studies in which people showed the security risks of self-driving cars by targeting the main image sensors, such as the complementary metal-oxide semiconductor. Instead, they singled out inertial sensors. These provide an image stabilizer with motion feedback that it can use to reduce blur.

The researchers designed their PG attack to target those inertial sensors with resonant acoustic signals. In doing so, they found that someone could gain control of the stabilizer. From there, the attacker could then perform one of the following three types of attacks:

  • Hiding Attacks: A threat actor could make a detected object, such as the rear of a car, disappear.
  • Creating Attacks: Someone could fool the computer detection systems into detecting an object that isn’t really there.
  • Altering Attacks: An attacker could cause the computer detection systems to classify one object as another.
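To give a rough sense of the mechanism described above, here is a minimal sketch of how a resonant acoustic tone could corrupt a MEMS gyroscope and mislead an image stabilizer. The resonant frequency, sample rate and focal length below are illustrative assumptions, not values from the paper:

```python
import math

# Hedged sketch: an acoustic tone at a gyro's resonant frequency adds a
# false angular-rate signal. After sampling, that signal aliases to a low
# frequency; the image stabilizer integrates it and shifts the frame to
# "compensate" for motion that never happened, blurring the image.

RESONANT_FREQ_HZ = 27_150     # assumed gyro resonant frequency (illustrative)
SAMPLE_RATE_HZ = 1_000        # assumed gyro sampling rate (illustrative)
PIXELS_PER_RADIAN = 1500.0    # assumed camera focal length in pixels

def spoofed_rate(t, amplitude_rad_s=0.5):
    """False angular rate seen by the gyro (aliased after sampling)."""
    alias_hz = RESONANT_FREQ_HZ % SAMPLE_RATE_HZ
    return amplitude_rad_s * math.sin(2 * math.pi * alias_hz * t)

def stabilizer_shift(duration_s=0.033):
    """Integrate the false rate over one frame to get the spurious pixel shift."""
    dt = 1.0 / SAMPLE_RATE_HZ
    angle = 0.0
    for i in range(int(duration_s * SAMPLE_RATE_HZ)):
        angle += spoofed_rate(i * dt) * dt
    return angle * PIXELS_PER_RADIAN

print(f"Spurious shift over one frame: {stabilizer_shift():.3f} px")
```

Even a sub-pixel or few-pixel shift, applied where no real motion occurred, is enough to blur or distort the frames that the object detector relies on.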

In testing those attacks, the researchers saw a 100% success rate for hiding attacks against people, cars, trucks, buses, traffic lights and stop signs. The other two attack scenarios varied in success depending on which objects were involved and the extent to which they were targeted.

Researchers Leading Vehicle Hacking

Fooling object detection systems is just one of the types of attacks threat actors could use to prey upon self-driving vehicles. Others include using beams of light and adversarial machine learning to tamper with the vehicles’ decisions and/or performance.

Back in 2018, for instance, a hacker found that a threat actor could embed a custom piece of hardware into a self-driving vehicle. Then, they could use it to control almost any component of the car, including the brakes and speed.

In February 2020, another group of hackers made one type of autonomous vehicle speed up to 85 mph in a 35 mph zone.

Toward Better Cybersecurity in Autonomous Vehicles

The researchers working on the PG problem also offered some solutions. Makers of vehicles with a self-driving feature should include safeguards, such as using a microphone to detect acoustic injection attacks. They can also add adversarial training to their object detection algorithms.
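One way to picture the microphone safeguard is a detector that watches for narrowband ultrasonic energy near the inertial sensor’s known resonant band. This is a hedged sketch, not the researchers’ actual countermeasure; the sample rate, resonant band and threshold are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of acoustic-injection detection: flag audio frames whose
# energy in the gyro's assumed resonant band dwarfs the broadband average.

MIC_SAMPLE_RATE = 96_000          # Hz, assumed microphone sample rate
RESONANT_BAND = (26_000, 28_000)  # Hz, assumed gyro resonant band
THRESHOLD_RATIO = 10.0            # band energy vs. broadband average

def injection_suspected(samples: np.ndarray) -> bool:
    """Return True if the resonant band is suspiciously loud."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / MIC_SAMPLE_RATE)
    band = (freqs >= RESONANT_BAND[0]) & (freqs <= RESONANT_BAND[1])
    band_power = spectrum[band].mean()
    broadband_power = spectrum.mean() + 1e-12  # avoid divide-by-zero
    return band_power / broadband_power > THRESHOLD_RATIO

# Quick check: a 27 kHz tone buried in noise should trip the detector,
# while noise alone should not.
t = np.arange(4096) / MIC_SAMPLE_RATE
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.1, t.size)
tone = np.sin(2 * np.pi * 27_000 * t)
print(injection_suspected(noise + tone), injection_suspected(noise))  # True False
```

A production system would need to account for the sensor’s exact resonant profile, microphone placement and ambient ultrasonic sources, but the basic idea — listening for the injection signal itself — is straightforward.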

In addition, autonomous vehicle manufacturers should ensure that third-party providers and others along their supply chains follow security best practices. This could keep malicious actors out of suppliers’ networks, removing the chance for follow-up attacks.

Self-driving cars may seem like a sign of the future, but keeping threat actors from taking control of them is a problem researchers have been working on for years. This new type of attack is just one example of that.
