September 26, 2018 | By Shane Schick | 2 min read

It’s not unusual to see phishing cases on the rise during tax time, but cybercriminals are getting an early start by promising U.K. computer users a sizable refund in an attempt to steal personal data.

Recipients of the email scam, which appeared to come from Her Majesty’s Revenue and Customs (HMRC), the U.K. government department responsible for collecting taxes, were told to visit a gateway portal to receive a tax refund of around 542 pounds, according to Malwarebytes Labs.

Unlike other phishing cases, the timeline was particularly tight: The cybercriminals instructed potential victims to act on the same day they received the email.

How the Threat Actors Make the Scam Look Legitimate

Before victims reached the phony gateway portal, the threat actors directed them to a replica Microsoft Outlook login page, which allowed the attackers to harvest usernames and passwords. Once at the bogus HMRC site, victims were asked to fill out a comprehensive form that ended with fields for their credit card details. Researchers noted that, much like legitimate government forms, the site validated what people entered — including phone numbers and dates of birth — to ensure victims were submitting accurate information.
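To illustrate the kind of field validation the researchers described, here is a minimal sketch of regex- and date-based checks a web form might run on a U.K. phone number and a date of birth. The patterns and function names are illustrative assumptions for this article, not code taken from the phishing kit or the Malwarebytes report.

```python
import re
from datetime import datetime

# Illustrative pattern only -- not taken from the actual phishing kit.
# Accepts numbers such as "+44 7911 123456" or "07911123456".
UK_PHONE_RE = re.compile(r"^(?:\+44|0)\d{9,10}$")

def looks_like_uk_phone(value: str) -> bool:
    """Return True if the value roughly matches a U.K. phone number format."""
    return bool(UK_PHONE_RE.match(value.replace(" ", "")))

def looks_like_date_of_birth(value: str) -> bool:
    """Return True if the value parses as a DD/MM/YYYY date in a plausible range."""
    try:
        dob = datetime.strptime(value, "%d/%m/%Y")
    except ValueError:
        return False
    return 1900 <= dob.year <= datetime.now().year

# Rejecting obviously bogus input makes the harvested data look "accurate."
print(looks_like_uk_phone("+44 7911 123456"))   # True
print(looks_like_uk_phone("12345"))             # False
print(looks_like_date_of_birth("14/03/1985"))   # True
print(looks_like_date_of_birth("99/99/9999"))   # False
```

Checks like these are routine on genuine government portals; the point here is that the attackers mimicked them to make the fake form more convincing and the stolen data more usable.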

Tax refunds are of obvious interest to consumers, but plenty of people check their personal email accounts at work, meaning phishing cases like these could threaten an entire organization. The challenge is to understand what's going on at the moment an attack occurs.

Why You Should Make It Easy to Report Phishing Cases

To ward off this type of attack, IBM experts recommend conducting regular internal phishing assessments and making use of open source intelligence. Companies should also make it easy for users to report phishing cases — and that doesn’t mean simply telling employees to contact IT. Instead, instructions should be as specific as possible within company policies.

Effective strategies include giving staff a hotline to call or a chatbot to text and providing contact details for a specific employee who specializes in IT security issues. When the instructions are granular and there's no fear of repercussions, employees are more likely to come forward when something happens, and security teams can respond to threats more quickly.

Source: Malwarebytes Labs
