January 14, 2019 By Shane Schick

University of Maryland researchers warn that with limited resources, threat actors could launch a successful cyberattack on Google’s bot-detecting reCaptcha service.

In an academic paper detailing their findings, the researchers discuss how they created a tool called unCaptcha, which uses audio files in conjunction with artificial intelligence (AI) technologies such as speech-to-text software to bypass the Google security mechanism.

Over more than 450 tests, the unCaptcha tool defeated reCaptcha with 85 percent accuracy in 5.42 seconds, on average. The study demonstrated that threat actors could potentially break into web-based services, automate account creation and more.

How Researchers Got Around reCaptcha

Online users will recognize reCaptcha as a small box that appears on many websites when users sign up or log in to digital services. Website visitors are typically asked to solve a challenge to prove they're human, whether it's typing in letters next to a distorted rendering of those letters, answering a question or clicking on images.

In this case, the University of Maryland researchers took advantage of the fact that Google’s system offers an audio version of its challenges for those who may be visually impaired. The attack method involved navigating to Google’s reCaptcha demo site, finding the audio challenge and downloading it, then putting it through a speech-to-text engine. After an answer had been parsed, it could be typed in and submitted.
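The paper describes feeding the downloaded audio to multiple speech-to-text engines and combining their answers. A minimal sketch of that combining step might look like the following, where the homophone table and the simple majority vote are illustrative assumptions rather than the researchers' exact implementation:

```python
from collections import Counter

# Hypothetical homophone map: words that speech-to-text engines commonly
# return for spoken digits (the real tool uses a broader phonetic table).
HOMOPHONES = {
    "one": "1", "won": "1", "two": "2", "to": "2", "too": "2",
    "three": "3", "four": "4", "for": "4", "five": "5",
    "six": "6", "seven": "7", "eight": "8", "ate": "8",
    "nine": "9", "zero": "0", "oh": "0",
}

def normalize(transcript: str) -> str:
    """Map each transcribed word to a digit, dropping unrecognized words."""
    return "".join(
        HOMOPHONES.get(word, word if word.isdigit() else "")
        for word in transcript.lower().split()
    )

def vote(transcripts: list[str]) -> str:
    """Return the digit string that the most engine transcripts agree on."""
    counts = Counter(normalize(t) for t in transcripts)
    return counts.most_common(1)[0][0]

# Three engines disagree on wording but agree after normalization.
answer = vote(["one two three", "won to three", "1 2 3"])  # "123"
```

Combining several imperfect engines this way is what pushes the overall success rate well above what any single speech-to-text service achieves on its own.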

While Google initially responded by releasing a new version of reCaptcha, the researchers updated unCaptcha to match and were even more successful. In an interview with BleepingComputer, one of the researchers said the tool's success rate against the new version was around 91 percent after more than 600 attempts.

Securing the Web Without CAPTCHAs

The research paper recommends a number of possible countermeasures to a tool such as unCaptcha, including broadening the sound bites used in reCaptcha audio challenges and adding distortion. CAPTCHAs are far from the only option available to protect digital services, however.

IBM Security experts, for example, discussed the promise of managed identity and access management (IAM), which allows organizations to not only protect online services with additional layers of security, but also have a third party handle operational chores such as patching and responding to incidents. If a group of academics can automate attacks on CAPTCHA systems this successfully, it may be time for security leaders and their teams to look for something more sophisticated.

