Consumers love the convenience of virtual assistants such as Siri, Alexa and Cortana, but a group of researchers has discovered an easy way to compromise the software behind them by using ultrasounds that are inaudible to the human ear.

DolphinAttack Experiment Breaches Speech Recognition Software

Six researchers from Zhejiang University in China posted a video demonstrating how these inaudible voice commands work. They dubbed the technique the “DolphinAttack” because, like dolphin calls, it relies on high-frequency sound that humans cannot hear. Using simple off-the-shelf hardware that costs only $3, they were able to breach speech recognition software from Apple, Google, Amazon and others. By shifting voice commands into ultrasonic frequencies, they took over smartphones, speakers and even a smart car model from Audi.
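The core trick described above is amplitude-modulating a recorded voice command onto a carrier above the range of human hearing; a microphone's nonlinearity then demodulates it back into the audible band. The sketch below, a simplified illustration rather than the researchers' actual tooling (the 192 kHz sample rate and 25 kHz carrier are assumed values for demonstration), shows how such a signal could be generated:

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000  # carrier above the ~20 kHz limit of human hearing

def modulate_ultrasonic(voice: np.ndarray, fs: int = FS, fc: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    Hypothetical sketch: a nonlinear microphone can demodulate this signal
    back into the audible band, which is the effect a DolphinAttack exploits.
    """
    # Normalize the voice signal to [-1, 1] so the output stays in range
    peak = np.max(np.abs(voice))
    if peak > 0:
        voice = voice / peak
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * fc * t)
    # Classic AM with modulation depth 1: inaudible to people, not to mics
    return 0.5 * (1.0 + voice) * carrier

# Example: a 1 kHz tone standing in for a recorded voice command
t = np.arange(int(0.1 * FS)) / FS
tone = np.sin(2 * np.pi * 1000 * t)
signal = modulate_ultrasonic(tone)
```

All of the energy in `signal` sits near 25 kHz (the carrier plus 24 kHz and 26 kHz sidebands), so a human listener hears nothing while a vulnerable microphone recovers the original 1 kHz content.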

Although most of what the researchers did was fairly innocuous, such as launching music or video calling applications, malicious actors could turn to DolphinAttacks for far more nefarious purposes, Bleeping Computer pointed out. Speech recognition software could be used to spy on device users, force their browser toward malware-laden URLs or even control where a smart car moves.

As security expert Tavish Vaidya told New Scientist, security is a pressing concern because voice assistants can now do far more than set an alarm or play music. The DolphinAttack technique has emerged at a time when speech recognition software appears in a wide variety of applications designed for convenience. Beyond looking up information online, for example, many people now use tools such as Google Now or Siri to manage digital accounts for payments and other transactions.

Attack Limitations and Remaining Threats

Fortunately, a DolphinAttack has some limitations. Would-be threat actors must be within a few feet of a device, and the attack might not work in a very loud environment, The Verge reported. While the audio equipment used to break into the speech recognition software was cheap, it might need to be tuned to a specific device, since each microphone responds best to different ultrasonic frequencies. Of course, savvy consumers could also notice the attacks, and some actions require the user to confirm a command or unlock the device before anything bad can happen.

Still, the researchers were even able to demonstrate how recording a potential victim’s voice could be used to override controls on speech recognition software, such as Siri, that is tailored to a specific user. The Hacker News suggested the best way to prevent a DolphinAttack is to turn off voice commands or wait for vendors to ensure ultrasounds can’t be turned against their customers.
