Cybersecurity professionals are already losing sleep over data breaches and how to best protect their employers from attacks. Now they have another nightmare to stress over — how to spot a deepfake.
Deepfakes are different because they turn data and images themselves into weapons. And the people wielding deepfake technology can come from inside the organization as well as outside it.
How attackers use deepfake attacks
Earlier in 2021, the FBI released a warning about the rising threat of synthetic content, which includes deepfakes, describing it as “the broad spectrum of generated or manipulated digital content, which includes images, video, audio and text.” People can create the simplest types of synthetic content with software like Photoshop. Deepfake attackers, however, are becoming more sophisticated, using technologies like artificial intelligence (AI) and machine learning (ML) to create highly realistic images and videos.
Remember, attackers are in the cyber theft business to make money. Ransomware tends to be successful, so it was a logical move for them to adopt deepfakes as a new ransomware tool. In the traditional approach, attackers launch a phishing attack with malware embedded in an enticing deepfake video. There is also a newer way to leverage deepfakes: attackers can depict people or businesses engaging in all sorts of illicit (but fake) behavior that would damage their reputation if the images went public. Pay the ransom, and the videos stay private.
Besides ransomware, synthetic content is used in other ways. Threat actors might weaponize data and images to spread lies and scam employees, clients and others, or to extort them.
Attackers might use all three of these attack styles together or on their own. Remember, scams have been around for a long time, and phishing attacks already prey on users ruthlessly. However, defenders aren’t paying enough attention to the rise of AI/ML as a vehicle for misinformation and extortion. Today, attackers can even use apps designed to create pornographic images from real photographs and videos.
Preventing deepfake attacks
Users are already duped by ordinary phishing attacks, so deepfake-based phishing attempts will be even harder for the average user to detect. Cybersecurity awareness training is a must in any good security program. Make sure it includes how to tell a fake from the real deal.
This is easier than you might expect. The technology behind these attacks is good, but it isn’t perfect. In a webinar, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, senior director of security strategy at Cato Networks, explained that facial features are very difficult to perfect, especially the eyes. If the eyes look unnatural or the movement of facial features seems off, chances are good that the image has been altered.
Best practices apply here, too
Another way to distinguish a deepfake from the real thing is to apply cybersecurity best practices and a zero trust philosophy. Verify whatever you see. Double- and triple-check the source of the message. Do a reverse image search to find the original, if possible.
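To illustrate that verification step, a perceptual fingerprint can flag when a circulating image no longer matches a trusted original. The sketch below is a toy difference hash in plain Python; the `dhash` and `hamming` helpers are hypothetical names, the pixel grids are made-up stand-ins for real image data, and real tools (such as reverse image search engines) work on full image files with far more robust algorithms.

```python
# Toy "difference hash": compare each grayscale pixel to its right-hand
# neighbor. The resulting bit string tends to survive re-encoding but
# changes when the image content itself is edited.

def dhash(pixels):
    """pixels: 2D list of grayscale values (rows x columns)."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count the bit positions where two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30],
            [30, 20, 10]]   # trusted source image (stand-in data)
tampered = [[10, 20, 30],
            [10, 20, 30]]   # altered copy: second row was replaced

h_orig = dhash(original)
h_tamp = dhash(tampered)

print(hamming(h_orig, h_orig))  # 0: identical fingerprints
print(hamming(h_orig, h_tamp))  # nonzero: content was changed
```

A small Hamming distance suggests routine re-compression; a large distance relative to the hash length suggests the content itself was manipulated.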
When it comes to your own images, use a digital fingerprint or watermark that makes it more difficult for someone to create synthetic content from them.
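As a toy illustration of that idea, the sketch below hides a fingerprint in the least significant bits of pixel values. The `embed` and `extract` helpers are hypothetical names invented for this example, and production watermarking schemes are far more robust against compression, cropping, and re-encoding; this only shows the basic concept of marking your own media.

```python
# Least-significant-bit (LSB) watermark sketch: store one fingerprint
# bit in the lowest bit of each pixel. The visual change is invisible
# (each value shifts by at most 1), but the mark can be read back out.

def embed(pixels, mark_bits):
    """Set the LSB of each leading pixel to the matching fingerprint bit."""
    marked = [(p & ~1) | b for p, b in zip(pixels, mark_bits)]
    return marked + pixels[len(mark_bits):]

def extract(pixels, n):
    """Read the first n fingerprint bits back out of the pixel data."""
    return [p & 1 for p in pixels[:n]]

mark = [1, 0, 1, 1]                 # your fingerprint (stand-in data)
image = [200, 201, 198, 197, 150]   # flat list of 8-bit grayscale values

marked_image = embed(image, mark)
assert extract(marked_image, len(mark)) == mark  # fingerprint survives
```

If synthetic content is later generated from a marked image, checking for the fingerprint (or its absence) gives you one more signal about provenance.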
Overall, the defense systems already in place will work to prevent deepfake phishing and social engineering attacks. Deepfakes are still in the earliest stages as an attack vector, so cybersecurity teams have the advantage of preparing defenses as the tools improve. It really should be one less thing to lose sleep over.