September 11, 2017 By Shane Schick 2 min read

Consumers love the convenience of virtual assistants such as Siri, Alexa and Cortana, but a group of researchers has discovered an easy way to compromise the software behind them by using ultrasonic signals that are inaudible to the human ear.

DolphinAttack Experiment Breaches Speech Recognition Software

Six scientists from Zhejiang University in China posted a video that showed how these inaudible voice commands can occur. The researchers dubbed their experiment the “DolphinAttack” because dolphins communicate at frequencies too high for the human ear to detect. Using simple off-the-shelf hardware that costs only $3, they were able to breach speech recognition software from Apple, Google, Amazon and others. Modulating voice commands onto ultrasonic frequencies allowed them to take over smartphones, speakers and even a smart car model from Audi.
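The core trick, as the researchers described it, is amplitude-modulating an ordinary voice command onto an ultrasonic carrier: the transmitted signal sits above roughly 20 kHz, so humans hear nothing, but nonlinearities in a device’s microphone demodulate the command back into the audible band. As a rough illustration only (not the researchers’ actual tooling, and with a stand-in tone in place of a real voice command), the modulation step might look like this:

```python
import numpy as np

def dolphin_modulate(voice, fs=192_000, carrier_hz=25_000):
    """Amplitude-modulate a baseband 'voice' signal onto an
    ultrasonic carrier. The carrier is above ~20 kHz, so the
    output is inaudible to humans, but a microphone's nonlinear
    response can shift the voice back into the audible band."""
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: the DC offset keeps the envelope non-negative.
    return (1.0 + voice) * carrier

# Example: a 400 Hz tone standing in for a voice command,
# sampled at 192 kHz so the 25 kHz carrier is below Nyquist.
fs = 192_000
t = np.arange(fs) / fs                      # one second of samples
voice = 0.5 * np.cos(2 * np.pi * 400 * t)
ultrasound = dolphin_modulate(voice, fs=fs)
```

After modulation, all of the signal’s energy sits around the 25 kHz carrier and its sidebands; nothing remains in the audible range, which is exactly why a bystander notices nothing while the targeted device “hears” a command.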

Although most of what the researchers did was fairly innocuous, such as launching music or video calling applications, malicious actors could put the DolphinAttack technique to far more nefarious purposes, Bleeping Computer pointed out. Speech recognition software could be abused to spy on device users, steer their browsers toward malware-laden URLs or even control where a smart car moves.

As security expert Tavish Vaidya told New Scientist, the stakes are high because voice assistants now do far more than set alarms or play music. The DolphinAttack technique has emerged at a time when speech recognition software is built into a wide variety of applications designed with convenience in mind. Beyond looking up information online, for example, many people now use tools such as Google Now or Siri to manage digital accounts for payments and other transactions.

Attack Limitations and Remaining Threats

Fortunately, there are some limitations to a DolphinAttack. Would-be threat actors would need to be within a few feet of a device, and the attack might not work in a noisy environment, The Verge reported. While the audio equipment used to break into the speech recognition software was cheap, it might need to be customized for a specific device based on the frequencies a particular microphone picks up best. Savvy consumers could also notice the attacks, and many devices require a user to confirm a command or unlock the screen before anything sensitive can happen.

Still, the researchers demonstrated that a recording of a potential victim’s voice could defeat controls in speech recognition software, such as Siri, that is tailored to a specific user. The Hacker News suggested the best way to prevent a DolphinAttack is to turn off voice commands or wait for vendors to ensure ultrasound can’t be turned against their customers.
