September 28, 2018 By David Bisson 2 min read

A large tech support scam operation called Partnerstroka recently targeted unsuspecting users with a new browser locking technique.

Security researchers at Malwarebytes Labs regularly monitor threat actors who use malvertising and other techniques to expose users to tech support scams. The latest campaign stood out for its Google Chrome-specific browser lock, which hijacked the user's cursor: it replaced the real pointer with an invisible square box and displayed only a low-resolution image of a cursor, according to the researchers. Because the visible cursor no longer matched the true click position, mouse clicks landed elsewhere on the page without the user's knowledge, preventing victims from closing the scam page.
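This "evil cursor" style of browser lock can be achieved with a custom CSS cursor whose visible pointer graphic is drawn far from the image's real click hotspot. The sketch below is a hedged illustration of the general technique, not Partnerstroka's actual code; the function name and the oversized-image details are assumptions for demonstration.

```typescript
// Minimal sketch of an "evil cursor" browser lock (illustrative only,
// not the campaign's actual code). The trick: set a large custom cursor
// image (e.g. 128x128) whose visible pointer graphic is drawn near one
// corner, while the real click hotspot stays at (0, 0). The user then
// clicks roughly 100px away from where the cursor appears to be, so
// attempts to hit the tab's close button miss.
function evilCursorCss(cursorImageUrl: string): string {
  // Hotspot coordinates "0 0" follow the url(); the pointer graphic is
  // assumed to be drawn at the opposite corner of the oversized image.
  return `* { cursor: url(${cursorImageUrl}) 0 0, auto !important; }`;
}

// In a browser context, the rule would be injected page-wide:
// const style = document.createElement("style");
// style.textContent = evilCursorCss("data:image/png;base64,...");
// document.head.appendChild(style);
```

Modern browsers mitigate this by capping custom cursor sizes and ignoring custom cursors near the viewport edge, which is one reason such tricks target specific browser versions.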

The infrastructure of the campaign relied on dozens of Gmail accounts, each of which was tied to anywhere from a few to several thousand .club domains that abused the GoDaddy registrar/hosting platform. In total, the researchers detected more than 16,000 malicious domains associated with the campaign, but the actual number could be much higher.

How Much Can a Tech Support Scam Cost?

These findings come amid a rise in tech support scams around the world. In 2017, Microsoft received 153,000 reports from customers who fell victim to a tech support scam, a 24 percent increase from the previous year. Of those victims, 15 percent lost between $200 and $400, and the technology giant received one report of a victim losing more than $100,000 to a tech support scammer in December 2017.

Furthermore, the Better Business Bureau tracked 41,435 scam complaints received by the Federal Bureau of Investigation (FBI) and Federal Trade Commission (FTC) last year. Those complaints related to more than $21 million lost to tech support scams in just the first nine months of 2017, and that’s only counting reported crimes.

Combat Scams Through Education and Awareness

The IBM X-Force Exchange threat alert associated with this scam advised security teams to keep operating systems and antivirus tools up to date. Organizations should also scan their environments for the specific indicators of compromise (IoCs) uncovered by Malwarebytes Labs.

When it comes to tech support scams specifically, security experts recommend regularly educating users about cyberthreats and training employees to be skeptical about any unsolicited communications, whether online or over the phone.

Sources: Malwarebytes Labs, Microsoft, Better Business Bureau
