March 6, 2019 By Mike Elgan 5 min read

In 2017, an anonymous Reddit user under the pseudonym “deepfakes” posted links to pornographic videos that appeared to feature famous mainstream celebrities. The videos were fake. And the user created them using off-the-shelf artificial intelligence (AI) tools.

Two months later, Reddit banned the deepfakes account and related subreddit. But the ensuing scandal revealed a range of university, corporate and government research projects under way to perfect both the creation and detection of deepfake videos.

Where Deepfakes Come From (and Where They’re Going)

Deepfakes are created using AI technology called generative adversarial networks (GANs), which can be used broadly to create fake data that can pass as real data. To oversimplify how GANs work, two machine learning (ML) algorithms are pitted against each other. One creates fake data and the other judges the quality of that fake data against a set of real data. They continue this contest at massive scale, continually getting better at making fake data and judging it. When both algorithms become extremely good at their respective tasks, the product is a set of high-quality fake data.
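To make the adversarial loop concrete, here is a deliberately tiny sketch. Real deepfake systems use deep convolutional networks on images; this toy version, with a one-parameter generator learning to mimic a 1-D Gaussian "real" distribution by fooling a logistic-regression discriminator, is illustrative only, and every name and hyperparameter below is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to imitate them.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = w*z + b, with noise z ~ N(0, 1).
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(a*x + c), estimating P(x is real).
a, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    x_real = real_batch(batch)

    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: maximize log D(fake), i.e. fool the discriminator.
    d_fake = sigmoid(a * x_fake + c)
    gx = -(1 - d_fake) * a          # dL_G/dx_fake for L_G = -log D(x_fake)
    w -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float(np.mean(w * rng.normal(0.0, 1.0, 10000) + b))
print(f"mean of generated samples: {fake_mean:.2f} (real mean is 4.0)")
```

The same contest, scaled up to millions of parameters and image data, is what lets deepfake generators produce faces the discriminator can no longer distinguish from photographs.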

In the case of deepfakes, the authentic data set consists of hundreds or thousands of still photographs of a person's face. This gives the algorithm a wide selection of images, showing the face from different angles and with different expressions, to draw on and judge against as it experimentally composites the face into the video during the learning phase.

Carnegie Mellon University scientists even figured out how to impose the style of one video onto another using a technique called Recycle-GAN. Instead of convincingly replacing someone's face with another's, the Recycle-GAN process enables the target to be used like a puppet, imitating every head movement, facial expression and mouth movement exactly as they appear in the source video. This process is also more automated than previous methods.

Most of these videos today are either pornography featuring celebrities, satire videos created for entertainment or research projects showing rapidly advancing techniques. But deepfakes are likely to become a major security concern in the future. Today's security systems rely heavily on surveillance video and image-based biometric security. And since the majority of breaches occur because of social engineering-based phishing attacks, it's all but certain that criminals will turn to deepfakes for the same purpose.

Deepfake Videos Are Getting Really Good, Really Fast

The earliest publicly demonstrated deepfake videos tended to show talking heads, with the subjects seated. Now, full-body deepfakes developed in separate research projects at Heidelberg University and the University of California, Berkeley are able to transfer the movements of one person to another. Since one form of biometric authentication involves gait analysis, these full-body deepfakes suggest that the gait of an authorized person could be transferred in video to an unauthorized one.

Here’s another example: Many cryptocurrency exchanges authenticate users by making them photograph themselves holding up their passport or some other form of identification as well as a piece of paper with something like the current date written on it. This can be easily foiled with Photoshop. Some exchanges, such as Binance, found many attempts by criminals to access accounts using doctored photos, so they and others moved to video instead of photos. Security analysts worry that it’s only a matter of time before deepfakes will become so good that neither photos nor videos like these will be reliable.

The biggest immediate threat for deepfakes and security, however, is in the realm of social engineering. Imagine a video call or message that appears to be your work supervisor or IT administrator, instructing you to divulge a password or send a sensitive file. That’s a scary future.

What’s Being Done About It?

Increasingly realistic deepfakes have enormous implications for fake news, propaganda, social disruption, reputational damage, evidence tampering, evidence fabrication, blackmail and election meddling. Another concern is that the perfection and mainstreaming of deepfakes will cause the public to doubt the authenticity of all videos.

Security specialists, of course, will need to have such doubts as a basic job requirement. Deepfakes are a major concern for digital security specifically, but also for society at large. So what can be done?

University Research

Some researchers say that analyzing the way a person in a video blinks, or how often they blink, is one way to detect a deepfake. In general, deepfakes show insufficient or even nonexistent blinking, and the blinking that does occur often appears unnatural. Breathing is another movement usually absent from deepfakes, and hair is a further giveaway: it often looks blurry or painted on.
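One common way to quantify blinking in research on this problem is the eye aspect ratio (EAR), computed from six eye-contour landmarks; the ratio collapses toward zero when the eye closes. The sketch below is a simplified illustration of that heuristic, not the specific method any detection team uses, and the landmark coordinates are synthetic stand-ins for what a facial-landmark detector would produce.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR from six eye-contour landmarks: p1/p4 are the horizontal
    corners; (p2, p6) and (p3, p5) are the upper/lower vertical pairs."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    """Count closed-then-reopened transitions in a per-frame EAR sequence."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold:
            closed = True
        elif closed:          # eye reopened: one completed blink
            blinks += 1
            closed = False
    return blinks

# Synthetic landmarks: an open eye and a nearly closed one.
open_eye   = [(0, 0), (1, 1.0), (3, 1.0), (4, 0), (3, -1.0), (1, -1.0)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]

ear_open = eye_aspect_ratio(*open_eye)      # 0.5
ear_closed = eye_aspect_ratio(*closed_eye)  # 0.05
frames = [ear_open] * 3 + [ear_closed] * 2 + [ear_open] * 3
print(count_blinks(frames))  # one blink in this sequence
```

A video of normal length with zero or very few such transitions, or transitions at an unnatural rate, would be flagged for closer inspection.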

Researchers from the State University of New York (SUNY) at Albany developed a deepfake detection method that uses AI technology to look for natural blinking, breathing and even a pulse. It’s only a matter of time, however, before deepfakes make these characteristics look truly “natural.”

Government Action

The U.S. government is also taking precautions: Congress could consider a bill in the coming months to criminalize both the creation and distribution of deepfakes. Such a law would likely be challenged in court as a violation of the First Amendment, and would be difficult to enforce without automated technology for identifying deepfakes.

The government is working on the technology problem, too. The National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA) and Intelligence Advanced Research Projects Agency (IARPA) are looking for technology to automate the identification of deepfakes. DARPA alone has reportedly spent $68 million on a media forensics capability to spot deepfakes, according to CBC.

Private Technology

Private companies are also getting in on the action. A new cryptographic authentication tool called Amber Authenticate can run in the background while a device records video. As reported by Wired, the tool generates hashes — "scrambled representations" — of the data at user-determined intervals, which are then recorded on a public blockchain. If the video is manipulated in any way, the hashes change, alerting the viewer to the probability that the video has been tampered with. A dedicated player feature shows a green frame for portions of video that are faithful to the original, and a red frame around segments that have been altered. The system has been proposed for police body cams and surveillance video.
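The core idea, stripped of the blockchain machinery, can be sketched in a few lines: hash fixed-size segments of the recording, store the digests somewhere tamper-evident, and later recompute and compare. This is a minimal illustration of the concept as Wired describes it, not Amber's actual implementation; the segment size, function names and the plain list standing in for a public ledger are all assumptions for the example.

```python
import hashlib

SEGMENT = 1024  # bytes per interval; user-configurable in a real system

def fingerprint(video_bytes, segment=SEGMENT):
    """Hash each fixed-size segment of the recording. In a deployed
    system these digests would be written to a public blockchain;
    here a plain list stands in for that ledger."""
    return [hashlib.sha256(video_bytes[i:i + segment]).hexdigest()
            for i in range(0, len(video_bytes), segment)]

def verify(video_bytes, ledger, segment=SEGMENT):
    """Return the indices of segments whose hashes no longer match."""
    current = fingerprint(video_bytes, segment)
    return [i for i, (new, old) in enumerate(zip(current, ledger)) if new != old]

original = bytes(range(256)) * 16   # 4096 bytes of stand-in "video"
ledger = fingerprint(original)

tampered = bytearray(original)
tampered[2000] ^= 0xFF              # flip one byte inside segment 1
print(verify(bytes(tampered), ledger))  # [1]
```

A player built on this scheme would draw the green frame for segments where `verify` reports a match and the red frame for the indices it returns.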

A similar approach was taken by a company called Factom, whose blockchain technology is being tested for border video by the Department of Homeland Security (DHS), according to Wired.

Security Teams Should Prepare for Anything and Everything

The solution to deepfakes may lie in some combination of education, technology and legislation, but none of it will work without the technology part: when deepfakes get really good, as they inevitably will, only machines will be able to tell the real videos from the fake ones. That level of quality is coming, though nobody knows when. We should also assume that an arms race will arise, with malicious deepfake actors inventing new methods to overcome the latest detection systems.

Security professionals need to consider the coming deepfake wars when analyzing future security systems. If they’re video or image based — everything from facial recognition to gait analysis — additional scrutiny is warranted.

In addition, you should add video to the long list of media you cannot trust. Just as training programs and digital policies make clear that email may not come from who it appears to come from, video will need to be met with similar skepticism, no matter how convincing the footage. Deepfake technology will also inevitably be deployed for blackmail, including to extract sensitive information from companies and individuals.

The bottom line is that deepfake videos that are indistinguishable from authentic videos are coming, and we can scarcely imagine what they’ll be used for. We should start preparing for the worst.
