October 22, 2019 By Shane Schick 2 min read

Backends that let developers create custom apps are also exposing Alexa and Google Home devices to eavesdropping and phishing attacks, security researchers discovered.

Details of the vulnerabilities were first disclosed in a blog post by the team at SRLabs. The findings suggest that the backends could let hackers take advantage of how smart devices take in and reply to commands.

By inserting a special character to induce silence from an app, for instance, cybercriminals could dupe users into thinking the device has failed, prompting them to hand over their Alexa or Google Home access credentials.

‘Please Tell Me Your Password’

By using the “U+DB01, dot, space” sequence — which also looks like a question mark inside a black diamond shape — an Alexa or Google Home device might pause unexpectedly. In a video where the researchers demonstrated an attack, however, a blue status light on an Alexa speaker shows it is still active. Bad actors could then dupe users into believing their device was running an update and asking for their passwords.

The same character sequence could be used to continue listening and record what users say, even after they’ve finished giving a command. Whatever was recorded during the eavesdropping session could then be sent to a third-party command and control (C&C) server.

Fears about phishing and eavesdropping attacks via Alexa and Google devices have been raised several times by other researchers over the past year. While vendors typically verify the security of custom apps when they first become part of a platform, the issue is whether the security is vetted again when they are updated later on.

Google and Amazon responded to the SRLabs report in emails to ZDNet, saying they had been made aware of the findings and would “put additional mechanisms in place to prevent these issues from occurring in the future.”

Professional and Personal Risks of Smart Devices

Even though they tend to be considered personal technologies, Alexa and Google Home products can wind up extending the boundaries of work. Experts suggest this may require thinking about how to ensure enterprise mobility management (EMM) tools mitigate the risks from emerging device categories.

For everyday people, meanwhile, using smart speakers and related internet of things (IoT) devices is still a relatively recent phenomenon and may require a little self-training to learn common security risks. Just as banks won’t ask for account credentials via an email or text message, for example, vendors won’t ask for usernames and passwords through their device. If your device is asking for that kind of personal data, talk to an expert for help.

More from

Stress-testing multimodal AI applications is a new frontier for red teams

5 min read - Human communication is multimodal. We receive information in many different ways, allowing our brains to see the world from various angles and turn these different "modes" of information into a consolidated picture of reality.We’ve now reached the point where artificial intelligence (AI) can do the same, at least to a degree. Much like our brains, multimodal AI applications process different types — or modalities — of data. For example, OpenAI’s ChatGPT 4.0 can reason across text, vision and audio, granting…

Cybersecurity awareness: Apple’s cloud-based AI security system

3 min read - The rising influence of artificial intelligence (AI) has many organizations scrambling to address the new cybersecurity and data privacy concerns created by the technology, especially as AI is used in cloud systems. Apple addresses AI’s security and privacy issues head-on with its Private Cloud Compute (PCC) system.Apple seems to have solved the problem of offering cloud services without undermining user privacy or adding additional layers of insecurity. It had to do so, as Apple needed to create a cloud infrastructure…

How AI-driven SOC co-pilots will change security center operations

4 min read - Have you ever wished you had an assistant at your security operations centers (SOCs) — especially one who never calls in sick, has a bad day or takes a long lunch? Your wish may come true soon. Not surprisingly, AI-driven SOC “co-pilots” are topping the lists for cybersecurity predictions in 2025, which often describe these tools as game-changers.“AI-driven SOC co-pilots will make a significant impact in 2025, helping security teams prioritize threats and turn overwhelming amounts of data into actionable…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today