Security researchers have demonstrated how stickers can trick the computer vision systems in autonomous vehicles into misidentifying road signs.

Researchers from the University of Washington and other schools recently published a paper describing a new attack algorithm known as Robust Physical Perturbation (RP2). The paper, “Robust Physical-World Attacks on Machine Learning Models,” detailed how the algorithm enables malicious individuals to alter standard road signs in ways that create havoc for self-driving car systems.

How Does the Attack Work?

The algorithm works in combination with printed images attached to road signs. These images, which could in theory be created by anyone with access to a color printer, confuse the cameras in autonomous vehicles.

The attack relies on undermining the computer vision systems of autonomous vehicles that have been taught to recognize items on or alongside roads using cameras. Computer vision systems in self-driving cars usually rely on an object detector, which identifies pedestrians, signs and vehicles, and a classifier, which works out the nature of the objects and the meaning of the signs.
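The two-stage pipeline described above can be sketched as follows. The detector and classifier here are hypothetical stubs invented for illustration, not any vendor's actual system; the point is only the data flow from camera frame to labeled detections.

```python
# Minimal sketch of a detector -> classifier perception pipeline.
# Both stages are hypothetical stand-ins, not a real system.

def detect_objects(frame):
    """Stand-in object detector: returns bounding boxes with raw crops."""
    # A real detector would localize candidate objects in the camera
    # frame; here we fake a single detection for illustration.
    return [{"bbox": (120, 40, 60, 60), "crop": frame}]

def classify_sign(crop):
    """Stand-in classifier: maps a cropped detection to a label."""
    # A real classifier would run a neural network over the crop;
    # here we return a fixed label to show the data flow only.
    return "stop_sign"

def perceive(frame):
    """Full pipeline: detect objects, then classify each detection."""
    return [(det["bbox"], classify_sign(det["crop"]))
            for det in detect_objects(frame)]

print(perceive("camera_frame"))
```

An attack on the classifier stage corrupts the label while the detector still finds the sign, which is why the vehicle confidently acts on the wrong reading.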

Such systems can be sensitive to small alterations to their inputs, known as perturbations, which can cause vehicles to behave in unexpected ways, Car and Driver reported. An attacker would need access to the classifier and would then use the RP2 algorithm to generate a new, customized image of the existing road sign.
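The core white-box idea can be illustrated with a much simpler gradient-sign step than RP2 itself uses (RP2 additionally optimizes for printability and physical robustness). In this sketch the tiny linear classifier and all values are invented for illustration; the gradient of the model's score with respect to the input tells the attacker exactly how to nudge each pixel toward a wrong label.

```python
import numpy as np

# Hedged sketch: a gradient-sign perturbation against a toy linear
# classifier, standing in for the much more constrained RP2 optimization.
rng = np.random.default_rng(0)
w = rng.normal(size=16)      # classifier weights (white-box access)
b = 0.0
x = rng.normal(size=16)      # the "image" the attacker starts from

def score(x):
    """Logit for the correct class (e.g., 'stop sign')."""
    return x @ w + b

# For a linear model, the gradient of the logit w.r.t. the input is w.
# Stepping against it lowers the correct class's score:
#   x_adv = x - eps * sign(grad)
eps = 0.5
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # adversarial score is strictly lower
```

Each coordinate moves by only eps, so the change can be visually small while the classifier's confidence in the true label drops sharply.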

How the Computer Vision Systems Were Tricked

In one of the attacks, the researchers used the RP2 algorithm to create and print a full-size image that was placed over an existing stop sign. The result looked merely faded to human eyes but was consistently read as a Speed Limit 45 sign by the computer vision system.

A second technique relied on placing small black-and-white stickers on a stop sign; once again, the computer vision system misread it as a Speed Limit 45 sign.
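The sticker variant can be thought of as a perturbation restricted to a small mask: only the pixels under the stickers change, while the rest of the sign is untouched. The shapes and values below are invented for illustration, not drawn from the paper.

```python
import numpy as np

# Hedged sketch: a masked ("sticker") perturbation. Only pixels inside
# the mask may change; everything outside it stays identical.
rng = np.random.default_rng(1)
sign = rng.uniform(size=(32, 32))    # grayscale "stop sign" image

mask = np.zeros_like(sign)           # sticker placement
mask[4:10, 4:10] = 1.0               # one small rectangular patch

delta = rng.uniform(-1, 1, size=sign.shape)  # candidate perturbation
stickered = np.clip(sign + mask * delta, 0.0, 1.0)

# Pixels outside the mask are untouched; only the patch differs.
unchanged_outside = np.allclose(stickered[mask == 0], sign[mask == 0])
print(unchanged_outside)
```

In RP2 the values inside the mask are optimized, not random, but the constraint is the same: the attack must survive printing and placement as a physical patch, which is what makes it practical to deploy on a real sign.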

The researchers reported the attacks were effective at a range of distances and angles. In the conclusion to their paper, they stated that they plan to test their algorithm further by altering other conditions that were not included this time around, such as sign occlusion and alterations to other warning signs.

The Implications for Autonomous Vehicle Design

Security fears over autonomous vehicle technology are nothing new. Experts have long directed attention toward the risk of hacks to in-car systems. Earlier this month, in fact, reports centered on a vulnerability in the Controller Area Network (CAN) Bus standard that could impact the security of connected automobiles.

However, this work demonstrated that computer vision systems can also be put at risk. The potential dangers are clear, particularly for vehicles that already use automatic sign recognition. An attacker with access to both the algorithm and the classifier in the in-car system could trick vehicles into responding incorrectly to signs.

While autonomous vehicle development is still at an early stage, self-driving car designers and in-car system manufacturers should take note of the potential dangers. Tarek El-Gaaly, senior research scientist at Voyage, told Car and Driver that such attacks were cause for concern and could become easier to replicate in the future.

While the risk is limited today, the research highlights how autonomous vehicle systems could become targets for malicious actors as the technology matures.
