There’s something spooky going on. New research from the Ubiquitous System Security Lab (Zhejiang University’s security and privacy research group) and the University of Michigan found that ‘poltergeist’ (PG) attacks can fool autonomous vehicles in a way that hasn’t been seen before. Take a look at what the researchers found about how this attack works.

Vehicles with a self-driving feature rely on computer-enabled, camera-based object detection. This classifies what the vehicle sees, deciding what is an obstacle and what is a normal road condition. Based on those decisions, autonomous vehicles steer, brake and accelerate on their own. Poltergeist attackers tamper with those classification results.

Bombarding Self-Driving Cars With Acoustic Signals

To be specific, the poltergeist attack targets the stabilization of images captured by a vehicle’s camera. In their paper, the researchers noted this isn’t the same as past studies in which people showed the security risks of self-driving cars by targeting the main image sensors, such as the complementary metal-oxide semiconductor (CMOS) sensor. Instead, they singled out inertial sensors. These provide an image stabilizer with motion feedback that it can use to reduce blur.

The researchers designed their PG attack to target those inertial sensors with resonant acoustic signals. In doing so, they found that someone could gain control of the stabilizer. From there, the attacker could then perform one of the following three types of attacks:

  • Hiding Attacks: A threat actor could make a detected object, such as the rear of a car, disappear.
  • Creating Attacks: Someone could fool the computer detection systems into detecting an object that isn’t really there.
  • Altering Attacks: An attacker could cause the computer detection systems to classify one object as another.
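The mechanism behind all three variants can be sketched in a toy simulation (every number, name and function below is a hypothetical illustration, not taken from the paper): the image stabilizer trusts the gyroscope’s angular-rate readings and integrates them into a compensating pixel shift, so a resonant acoustic tone that biases those readings makes the stabilizer move the image even though the camera is perfectly still.

```python
import math

def gyro_reading(true_rate, t, attack_freq=None, attack_gain=0.0):
    """Gyroscope sample: the true angular rate plus, optionally, a
    resonant acoustic injection modeled as a sine wave at the sensor's
    resonant frequency. All values are illustrative placeholders."""
    reading = true_rate
    if attack_freq is not None:
        reading += attack_gain * math.sin(2 * math.pi * attack_freq * t)
    return reading

def stabilizer_shift(readings, dt, gain=1.0):
    """A naive image stabilizer: integrate angular rate into a
    compensating pixel shift. A spurious rate therefore produces a
    spurious shift, which smears or displaces the image content."""
    shift = 0.0
    for r in readings:
        shift += gain * r * dt
    return shift

dt = 0.001  # 1 kHz sampling, chosen arbitrarily for the sketch
ts = [i * dt for i in range(100)]

# Benign case: the camera is stationary, so the gyro reports zero rate.
benign = [gyro_reading(0.0, t) for t in ts]

# Attack case: an acoustic tone drives the gyro at a (hypothetical)
# resonant frequency, so it reports motion that never happened.
attacked = [gyro_reading(0.0, t, attack_freq=27.0, attack_gain=5.0) for t in ts]

print(stabilizer_shift(benign, dt))    # ~0: no spurious motion
print(stabilizer_shift(attacked, dt))  # nonzero: stabilizer moves a still image
```

In this toy model, the spurious shift is what the attacker tunes: enough controlled blur can erase an object from the detector’s view (hiding), conjure one (creating), or morph one class into another (altering).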

In testing those attacks, the researchers saw hiding attacks achieve a 100% success rate against people, cars, trucks, buses, traffic lights and stop signs. The other two attack scenarios varied in success depending on which objects were involved and the extent to which they were targeted.

Researchers Leading Vehicle Hacking

Fooling object detection systems is just one of the types of attacks threat actors could use to prey upon self-driving vehicles. Others include using beams of light and adversarial machine learning to tamper with the vehicles’ decisions and/or performance.

Back in 2018, for instance, a hacker found that a threat actor could embed a custom piece of hardware into a self-driving vehicle. Then, they could use it to control almost any component of the car, including the brakes and speed.

In February 2020, another group of hackers made one type of autonomous vehicle speed up to 85 mph in a 35 mph zone.

Toward Better Cybersecurity in Autonomous Vehicles

The researchers working on the PG problem also offered some solutions. Vehicle makers who include a self-driving feature should include safeguards, such as using a microphone to detect acoustic injection attacks. They can also add adversarial training into their object detection algorithms.
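One of those suggested safeguards, using a microphone to spot acoustic injection, could amount to checking the cabin audio for a strong tone near the inertial sensor’s resonant frequency. A minimal sketch of that idea follows, using a Goertzel-style single-bin energy check; the sample rate, resonant frequency and threshold are all assumed placeholder values, not figures from the paper.

```python
import math

def tone_energy(samples, sample_rate, target_freq):
    """Goertzel-style energy of `samples` at `target_freq`: a cheap way
    to evaluate a single frequency bin without computing a full FFT."""
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

SAMPLE_RATE = 48_000
RESONANT_FREQ = 19_000   # hypothetical gyro resonant frequency
THRESHOLD = 1e4          # hypothetical; would need tuning per vehicle

def injection_suspected(mic_samples):
    """Flag a possible acoustic injection if the energy at the sensor's
    resonant frequency exceeds the threshold."""
    return tone_energy(mic_samples, SAMPLE_RATE, RESONANT_FREQ) > THRESHOLD

# Quiet cabin hum (440 Hz, low amplitude) vs. a loud resonant tone.
n = 4800
quiet = [0.01 * math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(n)]
attack = [0.8 * math.sin(2 * math.pi * RESONANT_FREQ * i / SAMPLE_RATE) for i in range(n)]
print(injection_suspected(quiet))   # False
print(injection_suspected(attack))  # True
```

A production system would of course need calibrated thresholds and robustness against ordinary loud noises, but the core signal, energy concentrated at the sensor’s resonant frequency, is the telltale this safeguard looks for.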

In addition, autonomous vehicle manufacturers should ensure that third-party providers and others along their supply chains follow security best practices. This could keep malicious actors out of the supplier’s network, removing the chance for follow-up attacks.

Self-driving cars may seem like a sign of the future, but keeping threat actors from taking control of them is a problem researchers have been working on for years. This new type of attack is just one example of that.