We’ve written about deepfakes before, but one overlooked side effect deserves more attention: As the technology improves and becomes more commonplace, what’s stopping anyone from claiming that something they actually said was the result of a deepfake?

While watching a recent episode of The New York Times’ “The Weekly” about deepfake technology, what stood out to me more than the technology itself was the troubling potential for collateral damage. For example, what if an enterprise were victimized by a major data breach? What if a C-suite executive is at first honest about the attack and then decides to claim the recording of that admission was a deepfake? Which story would customers believe?

This concept has been discussed in legal circles and is referred to as the “liar’s dividend.” If anyone can claim that what they said is the result of a deepfake, how do we distinguish the truth anymore? The ramifications in the political world are significant, but that’s another discussion. We must probe this issue from the perspective of enterprise cybersecurity, because there’s a lot to chew on.

Deepfakes Are Cutting Even Deeper

Robert Chesney, associate dean for academic affairs at the University of Texas School of Law, is a leading voice on law and national security. Chesney’s concern about deepfakes prompted him to co-author a paper with his colleague Danielle Citron titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” While the paper focuses on national security, Chesney told me this troubling turn in technology carries implications that are just as critical for enterprise security.

So what exactly is the liar’s dividend?

“To the extent education efforts succeed in persuading the public that video and audio can be faked in such a convincing and credible way, it won’t just help inoculate against deepfakes,” Chesney explained. “Unfortunately, this will result in a general disposition to be more skeptical about evidence, to the great advantage of those who would lie to escape accountability for things they really did say or do. The liar’s dividend reflects that unwanted consequence.”

Before discussing the complications this could bring, we cannot neglect the deepfake threat to the enterprise, which is imminent and very real. Chesney recently spoke before the National Association of Corporate Directors and warned them about the inherent risks.

The Core Conundrum of Deepfake Attacks

Organizations, whether for-profit or nonprofit, face exposure to sabotage attempts, personally or commercially motivated, from rivals or individuals who simply don’t like what they’re doing, Chesney told his audience.

“The sabotage risks of deepfakes are just as serious for organizations as they are for individuals,” he noted.

One cybersecurity risk that needs more attention has to do with targeted phishing attacks. Far too often, people are induced to cut a check or initiate a wire transfer because they have fallen for a written communication that persuaded them it was from the boss or a relevant decision-maker.

Deepfakes can come into play here if the attacker goes a step further. The victim might say, “I wouldn’t do that without a phone call from Jim.” But what if you get a voicemail from Jim that sounds exactly like him? Chesney said fraudulent recorded audio is possible with current technology, and he refers to this type of attack as deep fraud or deep phishing.

“While this is a threat to look out for today, we are probably a long way from getting a real-time, interactive and plausible audio fake,” he added. “But you could definitely get a convincing voicemail.” To safeguard against this emerging threat, organizations will need to monitor user behavior for risk.
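The practical takeaway is that a voice alone, however familiar, shouldn’t be enough to move money. One way to operationalize that is to encode the rule directly in the payment workflow. The Python sketch below is a minimal, hypothetical illustration: the `PaymentRequest` fields, the $10,000 threshold and the channel names are assumptions made up for this example, not any real product’s API.

```python
from dataclasses import dataclass, field

# Channels an approval can arrive through. Voicemail, email and chat are
# one-way and spoofable; a live callback to a number on file is not.
SPOOFABLE_CHANNELS = {"email", "voicemail", "chat"}

@dataclass
class PaymentRequest:
    amount_usd: float
    payee: str
    known_payee: bool                 # payee already in the vendor master file?
    approval_channels: set = field(default_factory=set)

def requires_out_of_band_check(req: PaymentRequest,
                               threshold_usd: float = 10_000.0) -> bool:
    """Flag requests that must be confirmed via a live callback.

    Illustrative policy: any large transfer, any new payee, or any
    request authorized only through spoofable one-way channels is held
    until a human verifies it out of band.
    """
    only_spoofable = req.approval_channels <= SPOOFABLE_CHANNELS
    return (
        req.amount_usd >= threshold_usd
        or not req.known_payee
        or only_spoofable
    )

# Example: a "voicemail from Jim" authorizing a wire to a new payee.
req = PaymentRequest(
    amount_usd=48_500.0,
    payee="Acme Holdings Ltd.",
    known_payee=False,
    approval_channels={"voicemail"},
)
if requires_out_of_band_check(req):
    print("HOLD: confirm via callback to a number on file before paying.")
```

Note that none of this detects a deepfake; it simply makes a convincing voicemail insufficient on its own, which is the point.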

Will the Real Victim Please Stand Up

The liar’s dividend may be easier to spot in the political arena, but the consequences in a business setting are anything but simple. If someone from an organization claims to be the victim of a deepfake, which department is responsible for investigating? Is it a management issue? An HR issue? Should the employee be punished? Fired?

“Companies will be faced with interesting types of legal and HR questions,” said Chesney. “Where is the burden of proof? Does it land on the corporation or the employee?”

The biggest obstacle Chesney foresees is that the employee will need to produce forensic evidence or a plausible alibi, a potentially expensive and taxing way to settle a dispute that puts truly sabotaged employees in an impossible bind.

“They probably can’t afford to put up a defense from being fraudulently attacked,” he said. “That seems wrong. Hopefully, HR systems are designed at a certain level to deal with this.”

Chesney’s concern is that, should these cases end up in court, evidentiary issues will grow steadily more complicated. Any audio or video recording entered as evidence by either side could be genuine or fraudulent. It’s enough to make anyone question what to believe and whom to trust.

Strategy and Education Can Limit the Impact on the Enterprise

I realize we may be getting ahead of ourselves, and much of this is conjecture at this point. But deepfake technology, while relatively new to the threat landscape, is already on many cybersecurity radars.

For the enterprise, security education and awareness can only take us so far. Even so, they remain crucial to any defensive effort.

“Education helps a little, but we have to be realistic about how effective this can be,” said Chesney. “Just look at the compliance errors that routinely happen by mistake. Still, it’s especially important for entity leaders to be made mindful of the risk of being duped by some bespoke fake intended to generate a money transfer, a commercial decision or a hiring decision.”

Deepfakes will likely trouble us for the foreseeable future, but if you’re concerned about whether we’ll reach a point where nobody knows what to believe anymore, there is hope. Remember that there are also technological solutions being developed to combat deepfakes.
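That hope is less abstract than it sounds. Much of the detection work follows a common pattern: extract acoustic features from a clip, then train a classifier on labeled real and synthetic speech. The sketch below is purely illustrative and makes loud assumptions: the random noise stands in for a real labeled corpus, and MFCC features with logistic regression are a deliberately simple stand-in for the far richer models actual detectors use.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SR = 16_000  # sample rate in Hz

def clip_features(audio: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs,
    a common and deliberately simple acoustic fingerprint."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus: random noise standing in for labeled clips.
# In practice you would load real speech (label 0) and synthetic
# speech (label 1) from an actual dataset.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(SR * 2).astype(np.float32) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)

X = np.stack([clip_features(c) for c in clips])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
# On random placeholder data, accuracy hovers near chance; the point
# here is the pipeline's shape, not the number.
```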

Experts like Chesney believe that, while this issue may escalate in the near future, someday a disruptive solution will emerge. Until then, full-scale awareness around the existence of these threats may be our best shot.
