We’ve written about deepfakes before, but there’s one overlooked side effect that deserves our attention: As the technology improves and becomes more commonplace, what’s stopping anyone from claiming that something they actually said or did was the result of a deepfake?

While watching a recent episode of The New York Times’ show “The Weekly,” which covered deepfake technology, what stood out to me more than the technology itself was the troubling potential for collateral damage. For example, what if an enterprise is victimized by a major data breach? What if one of its C-suite executives is at first honest about the attack and then decides to claim they were the victim of a deepfake? Which story would customers believe?

This concept has been discussed in legal circles and is referred to as the “liar’s dividend.” If anyone can claim that what they said is the result of a deepfake, how do we distinguish the truth anymore? The ramifications in the political world are significant, but that’s another discussion. We must probe this issue from the perspective of enterprise cybersecurity, because there’s a lot to chew on.

Deepfakes Are Cutting Even Deeper

Robert Chesney, associate dean for academic affairs at the University of Texas School of Law, is a leading authority on the intersection of law and national security. Chesney’s concern about deepfakes prompted him to co-author a paper with his colleague Danielle Citron titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” While the paper focuses on national security, Chesney told me that this troubling turn in technology also carries critical implications for enterprise security.

So what exactly is the liar’s dividend?

“To the extent education efforts succeed in persuading the public that video and audio can be faked in such a convincing and credible way, it won’t just help inoculate against deepfakes,” Chesney explained. “Unfortunately, this will result in a general disposition to be more skeptical about evidence, to the great advantage of those who would lie to escape accountability for things they really did say or do. The liar’s dividend reflects that unwanted consequence.”

Before discussing the complications this could bring, we cannot neglect the imminent threat that deepfakes pose to the enterprise, which is very real. Recently, Chesney spoke before the National Association of Corporate Directors and warned its members about the inherent risks.

The Core Conundrum of Deepfake Attacks

Organizations, whether for-profit or nonprofit, face exposure to sabotage attempts that could be personally or commercially motivated, whether by rivals or by individuals who simply don’t like what they’re doing, Chesney told his audience.

“The sabotage risk of deepfakes is just as serious for organizations as it is for individuals,” he noted.

One cybersecurity risk that needs more attention has to do with targeted phishing attacks. Far too often, people are induced to cut a check or initiate a wire transfer because they have fallen for a written communication that persuaded them it was from the boss or a relevant decision-maker.

Deepfakes can come into play here if the attacker goes a step further. The victim might say, “I wouldn’t do that without a phone call from Jim.” But what if you get a voicemail from Jim that sounds exactly like him? Chesney said fraudulent recorded audio is possible with current technology, and he refers to this type of attack as “deep fraud” or “deep phishing.”

“While this is a threat to look out for today, we are probably a long way from getting a real-time, interactive and plausible audio fake,” he added. “But you could definitely get a convincing voicemail.” To safeguard against this emerging threat, monitoring user behavior for risk will be critical.
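To make that safeguard concrete, here is a minimal, hypothetical sketch of one such control: treating payment requests that arrive over easily spoofed channels, such as email or voicemail, as unverified until they are confirmed over a separate, interactive channel. The class, channel list and dollar threshold below are illustrative assumptions for this article, not any specific product’s API or policy.

# Hypothetical sketch: flag payment requests that need out-of-band verification
# before money moves. Field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str           # who appears to be asking (e.g., "Jim")
    channel: str             # "email", "voicemail", "in_person", "video_call"
    amount: float            # requested transfer amount in USD
    beneficiary_known: bool  # has this payee been paid before?

# Channels that are easy to fake with current spoofing and deepfake techniques.
SPOOFABLE_CHANNELS = {"email", "voicemail"}
AMOUNT_THRESHOLD = 10_000  # illustrative policy threshold

def needs_live_verification(req: PaymentRequest) -> bool:
    """Return True if the request should be confirmed interactively,
    e.g., by a call placed to a number already on file."""
    risky_channel = req.channel in SPOOFABLE_CHANNELS
    risky_amount = req.amount >= AMOUNT_THRESHOLD
    return risky_channel and (risky_amount or not req.beneficiary_known)

# Example: a convincing voicemail "from Jim" asking to wire $50,000
# to a new vendor would be held for a real-time callback.
request = PaymentRequest("Jim", "voicemail", 50_000, beneficiary_known=False)
print(needs_live_verification(request))  # True

The point of the sketch is the design choice, not the code: a convincing voicemail can satisfy a human gatekeeper, so the verification step has to happen on a channel the attacker does not control.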

Will the Real Victim Please Stand Up

The liar’s dividend may be more recognizable in the political arena, but the consequences aren’t as simple in a business setting. If someone from an organization claims to be the victim of a deepfake, where does the responsibility to investigate fall? Is it a management issue? An HR issue? Should the employee be punished? Fired?

“Companies will be faced with interesting types of legal and HR questions,” said Chesney. “Where is the burden of proof? Does it land on the corporation or the employee?”

The biggest obstacle Chesney foresees is that the employee will need to produce a forensic justification or a plausible alibi, a potentially expensive and taxing way to settle a dispute that puts truly sabotaged employees in an impossible bind.

“They probably can’t afford to put up a defense from being fraudulently attacked,” he said. “That seems wrong. Hopefully, HR systems are designed at a certain level to deal with this.”

Chesney’s concern is that, should these cases end up in court, evidentiary issues will only become more complicated. Any audio or video recording used as evidence by either side could be genuine or fraudulent. It’s enough to make anyone question what to believe and whom to trust.

Strategy and Education Can Limit the Impact on the Enterprise

I realize we may be getting ahead of ourselves, and some of this is still conjecture, but deepfake technology, while relatively new to the threat landscape, is already on many cybersecurity radars.

For the enterprise, security education and awareness can only take us so far. Even so, they are still crucial to any defensive efforts.

“Education helps a little, but we have to be realistic about how effective this can be,” said Chesney. “Just look at the compliance errors that routinely happen by mistake. Still, it’s especially important for entity leaders to be made mindful of the risk of being duped by some bespoke fake intended to generate a money transfer, a commercial decision or a hiring decision.”

Deepfakes will likely trouble us for the foreseeable future, but if you’re concerned about whether we’ll reach a point where nobody knows what to believe anymore, there is hope. Remember that there are also technological solutions being developed to combat deepfakes.

Experts like Chesney believe that, while this issue may escalate in the near future, someday a disruptive solution will emerge. Until then, full-scale awareness around the existence of these threats may be our best shot.
