Cybersecurity professionals are already losing sleep over data breaches and how to best protect their employers from attacks. Now they have another nightmare to stress over — how to spot a deepfake.

Deepfakes are different because attackers can easily weaponize data and images themselves. And those using deepfake technology can come from inside the organization as well as outside it.

How attackers use deepfake attacks

Earlier in 2021, the FBI released a warning about the rising threat of synthetic content, which includes deepfakes, describing it as “the broad spectrum of generated or manipulated digital content, which includes images, video, audio and text.” People can create the simplest types of synthetic content with software like Photoshop. Deepfake attackers are becoming more sophisticated, however, using technologies like artificial intelligence (AI) and machine learning (ML) to create realistic images and videos.

Remember, attackers are in the cyber theft business to make money. Ransomware tends to be successful, so it was a logical move for them to adopt deepfakes as a new ransomware tool. In the traditional approach to delivering ransomware, attackers launch a phishing attack with malware embedded in an enticing deepfake video. There is also a newer way to leverage deepfakes: attackers can depict people or businesses in all sorts of illicit (but fake) behaviors that would damage their reputations if the images went public. Pay the ransom, and the videos stay private.

Besides ransomware, synthetic content is used in other ways. Threat actors might weaponize data and images to spread lies and scam employees, clients and others, or to extort them.

Attackers might use all three of these attack styles together or on their own. Remember, scams have been around for a long time, and phishing attacks already dupe users ruthlessly. However, defenders aren’t paying enough attention to the rise of AI/ML-driven misinformation and extortion tactics. Today, attackers can even use apps designed to create pornographic images from real photographs and videos.

Preventing deepfake attacks

Users are already duped by ordinary phishing attacks, so deepfake-enhanced phishing attempts will be even more difficult for the average user to detect. Cybersecurity awareness training is a must in any good security program. Make sure it covers how to tell a fake from the real deal.

This is easier than you might expect. The tech behind these attacks is good, but it isn’t perfect. In a webinar, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, senior director of security strategy at Cato Networks, explained that facial features are very difficult to perfect, especially the eyes. If the eyes look unnatural, or the movement of facial features seems off, chances are good that it is an altered image.

Best practices apply here, too

Another way to separate a deepfake from the real thing is to apply cybersecurity best practices and a zero trust philosophy. Verify whatever you see. Double- and triple-check the source of the message. Do a reverse image search to find the original, if possible.

When it comes to your own images, use a digital fingerprint or watermark that makes it more difficult for someone to create synthetic content from them.
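To make the fingerprinting idea concrete, here is a minimal sketch of a perceptual “average hash”: a toy fingerprint that stays stable under mild re-encoding but shifts when image content is altered. This is an illustration only, not a production watermarking or provenance scheme, and the 4x4 pixel grid and threshold logic are simplified assumptions.

```python
# Illustrative sketch of a perceptual "average hash" fingerprint.
# Real deployments use robust watermarking or content-provenance
# standards; this toy version works on a grayscale pixel grid
# represented as a list of rows of 0-255 integers.

def average_hash(pixels):
    """Fingerprint a grayscale image.

    Each bit records whether a pixel is brighter than the image's
    mean brightness, so the hash tolerates mild re-encoding but
    changes when the underlying content is manipulated.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200, 30, 30],
    [200, 200, 30, 30],
    [30, 30, 200, 200],
    [30, 30, 200, 200],
]

# Simulated manipulation: brighten one dark quadrant.
altered = [row[:] for row in original]
altered[2][0] = altered[2][1] = 255
altered[3][0] = altered[3][1] = 255

distance = hamming(average_hash(original), average_hash(altered))
print(distance)  # a large distance flags a likely altered image
```

Comparing a stored fingerprint of your published image against the fingerprint of a circulating copy gives a quick, automatable first check; a small Hamming distance suggests the copy is intact, while a large one warrants closer inspection.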

Overall, the defense systems already in place will work to prevent deepfake phishing and social engineering attacks. Deepfakes are still in the earliest stages as an attack vector, so cybersecurity teams have the advantage of preparing defenses as the tools improve. It really should be one less thing to lose sleep over.
