March 5, 2019 By Shane Schick 2 min read

Security researchers say a Turkish-speaking group of cybercriminals is using an Instagram hack to dupe social media influencers into handing over money and even nude photographs as part of a digital extortion campaign.

According to Trend Micro, the attack begins with a simple phishing email inviting users who have a large following on the Facebook-owned photo-sharing service to apply for a verification badge for their account profile. A “verified” badge is designed to distinguish a well-known person’s account from potential fakes or other users with similar names.

How the Instagram Hack Works

The phishing message prompts users to enter their login credentials, email and date of birth, among other information. After submitting the form, victims are shown a verification badge for a few seconds and then directed back to Instagram. Behind the scenes, the researchers observed the attackers switching the names of profiles, defacing profile pictures and flooding inboxes with security alerts.

In some cases, the attackers added fake followers, and some possibly legitimate ones, to a stolen account and then removed them. Some victims were told to produce nude photos and videos, as well as monetary payment, in exchange for regaining access to their accounts. If they failed to comply, the attackers threatened to hold the accounts hostage permanently or delete them entirely.

An investigation into the attack discovered the words “account” and “eternal” written in Turkish on one of the victim’s profiles. This led to an online forum where other cybercriminals were discussing ways to steal accounts and prevent them from being recovered.

The Big Picture on Social Media Security

Users should be aware that Instagram wouldn’t ask for their login credentials as part of the process of receiving a “verified” badge, but it’s still easy to fall for phishing schemes when the domain names or landing pages look like the real thing. IBM experts suggest using ahead-of-threat detection to identify malicious URLs, scan images for hidden code and more before the actual threat becomes visible.
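One practical defense against lookalike domains is simple string-similarity screening. The sketch below is a minimal, hypothetical illustration (not part of any product mentioned above): it flags domains that closely resemble a trusted domain without matching it exactly, using only the Python standard library. The trusted domain and similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch: flag near-miss lookalikes of a trusted domain,
# a common trait of the phishing landing pages described above.
# Standard library only; the 0.8 threshold is illustrative.
from difflib import SequenceMatcher

TRUSTED = "instagram.com"

def looks_like_phish(domain: str, trusted: str = TRUSTED,
                     threshold: float = 0.8) -> bool:
    """Return True for domains that nearly, but not exactly, match."""
    domain = domain.lower().strip(".")
    # Exact matches and legitimate subdomains are not suspicious.
    if domain == trusted or domain.endswith("." + trusted):
        return False
    similarity = SequenceMatcher(None, domain, trusted).ratio()
    return similarity >= threshold

# Example screening of a few candidate domains:
for d in ["instagram.com", "help.instagram.com", "instagrarn.com"]:
    print(d, looks_like_phish(d))
```

A real deployment would combine this kind of heuristic with reputation feeds and the URL-scanning services the article references, since similarity alone misses phishing domains that share only a keyword with the brand.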
