We’ve written about deepfakes before, but there’s one overlooked side effect that deserves our attention: As the technology improves and becomes more commonplace, what’s stopping anyone from claiming that something they actually said was the result of a deepfake?

While watching a recent episode of The New York Times’ “The Weekly” about deepfake technology, what stood out to me more than the technology itself was the troubling potential for collateral damage. For example, what if an enterprise were victimized by a major data breach? What if one of its C-suite executives was at first honest about the attack and then decided to claim to be the victim of a deepfake? Which story would customers believe?

This concept has been discussed in legal circles and is referred to as the “liar’s dividend.” If anyone can claim that what they said is the result of a deepfake, how do we distinguish the truth anymore? The ramifications in the political world are significant, but that’s another discussion. We must probe this issue from the perspective of enterprise cybersecurity, because there’s a lot to chew on.

Deepfakes Are Cutting Even Deeper

Robert Chesney, associate dean for academic affairs at the University of Texas School of Law, is a leading authority on law and national security. Chesney’s concern about deepfakes prompted him to co-author a paper with his colleague Danielle Citron titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security.” While the focus of the paper is national security, Chesney told me that this troubling turn in technology carries critical implications for enterprise security as well.

So what exactly is the liar’s dividend?

“To the extent education efforts succeed in persuading the public that video and audio can be faked in such a convincing and credible way, it won’t just help inoculate against deepfakes,” Chesney explained. “Unfortunately, this will result in a general disposition to be more skeptical about evidence, to the great advantage of those who would lie to escape accountability for things they really did say or do. The liar’s dividend reflects that unwanted consequence.”

Before discussing the complications this could bring, we cannot neglect the imminent threat deepfakes pose to the enterprise, which is very real. Recently, Chesney spoke before the National Association of Corporate Directors and warned them about the inherent risks.

The Core Conundrum of Deepfake Attacks

Organizations, whether for profit or nonprofit, face exposure to sabotage attempts that could be personally or commercially motivated by rivals or individuals who just don’t like what they’re doing, Chesney told his audience.

“The sabotage risks of deepfakes are just as serious for organizations as they are for individuals,” he noted.

One cybersecurity risk that needs more attention has to do with targeted phishing attacks. Far too often, people are induced to cut a check or initiate a wire transfer because they have fallen for a written communication that persuaded them it was from the boss or a relevant decision-maker.

Deepfakes can come into play here if the attacker goes a step further. The victim might say, “I wouldn’t do that without a phone call from Jim.” But what if the victim gets a voicemail from Jim that sounds exactly like him? Chesney said fraudulent recorded audio is possible with current technology, and he refers to this type of attack as deep fraud or deep phishing.

“While this is a threat to look out for today, we are probably a long way from getting a real-time, interactive and plausible audio fake,” he added. “But you could definitely get a convincing voicemail.” To safeguard against this emerging threat, monitoring user behavior for risk will be critical.
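What might that monitoring look like in practice? Below is a minimal sketch in Python of a behavior-based safeguard for payment requests. It is an illustration only: the PaymentRequest fields, the channel weights and the 0.5 threshold are all hypothetical, not drawn from Chesney or any specific product. The idea is simply that a transfer authorized only through a recorded or written message gets flagged for confirmation over a second channel.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float          # requested transfer amount
    payee_is_new: bool     # payee not seen in prior transactions
    marked_urgent: bool    # message pressures immediate action
    channel: str           # "in_person", "live_call", "voicemail" or "email"

# Hypothetical weights: recorded or written channels score higher because
# they can be faked offline, unlike an interactive conversation.
CHANNEL_RISK = {"in_person": 0.0, "live_call": 0.1, "voicemail": 0.5, "email": 0.4}

def risk_score(req: PaymentRequest, typical_amount: float) -> float:
    """Combine simple behavioral signals into a rough risk score."""
    score = CHANNEL_RISK.get(req.channel, 0.5)  # unknown channels score high
    if req.payee_is_new:
        score += 0.2
    if req.marked_urgent:
        score += 0.2
    if req.amount > 2 * typical_amount:
        score += 0.2
    return score

def requires_out_of_band_check(req: PaymentRequest, typical_amount: float) -> bool:
    # Anything above the (hypothetical) threshold needs confirmation over a
    # second channel, e.g., calling "Jim" back on a number already on file.
    return risk_score(req, typical_amount) >= 0.5

req = PaymentRequest(amount=95_000, payee_is_new=True,
                     marked_urgent=True, channel="voicemail")
print(requires_out_of_band_check(req, typical_amount=20_000))  # True
```

The design point is that a convincing voicemail alone never satisfies the control; confirmation must travel over a channel the attacker does not control.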

Will the Real Victim Please Stand Up

The liar’s dividend may be easier to spot in the political arena, but the consequences aren’t as simple in a business setting. If someone from an organization claims to be the victim of a deepfake, where does the responsibility to investigate fall? Is it a management issue? An HR issue? Should the employee be punished? Fired?

“Companies will be faced with interesting types of legal and HR questions,” said Chesney. “Where is the burden of proof? Does it land on the corporation or the employee?”

The biggest obstacle Chesney foresees is that the employee will need to produce forensic evidence or a plausible alibi, a potentially expensive and taxing way to settle a dispute that puts truly sabotaged employees in an impossible bind.

“They probably can’t afford to put up a defense against being fraudulently attacked,” he said. “That seems wrong. Hopefully, HR systems are designed at a certain level to deal with this.”

Chesney’s concern is that, should these cases end up in court, evidence issues are going to gradually become more complicated. Any audio and video recordings used as evidence on either side could be accurate or fraudulent. It’s enough to make anyone question what to believe and whom to trust.

Strategy and Education Can Limit the Impact on the Enterprise

I realize that we may be getting ahead of ourselves with what is still conjecture at this point, but deepfake technology, while relatively new to the threat landscape, is already on many cybersecurity radars.

For the enterprise, security education and awareness can only take us so far. Even so, they are still crucial to any defensive efforts.

“Education helps a little, but we have to be realistic about how effective this can be,” said Chesney. “Just look at the compliance errors that routinely happen by mistake. Still, it’s especially important for entity leaders to be made mindful of the risk of being duped by some bespoke fake intended to generate a money transfer, a commercial decision or a hiring decision.”

Deepfakes will likely trouble us for the foreseeable future, but if you’re concerned about whether we’ll reach a point where nobody knows what to believe anymore, there is hope. Remember that there are also technological solutions being developed to combat deepfakes.
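One class of countermeasures under development is cryptographic provenance: signing media at the point of capture so that any later tampering invalidates the signature. The sketch below is a minimal illustration of that idea in Python, using an HMAC over a recording’s bytes. The key handling and sample data are hypothetical, and real provenance schemes, such as public-key signatures embedded in media metadata, are considerably more involved.

```python
import hashlib
import hmac

# Hypothetical shared secret; a real scheme would use public-key signatures
# so that verifiers never hold the signing key.
SIGNING_KEY = b"example-key-material"

def sign_media(media_bytes: bytes) -> str:
    """Produce an authentication tag at capture time."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    expected = hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...recorded audio bytes..."
tag = sign_media(original)

print(verify_media(original, tag))            # True: untouched recording
print(verify_media(original + b"edit", tag))  # False: tampered recording
```

The appeal of this approach for the liar’s dividend is that authenticity is established at recording time, so a claim that genuine footage is fake (or vice versa) can be checked against a signature rather than argued over.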

Experts like Chesney believe that, while this issue may escalate in the near future, a disruptive solution will someday emerge. Until then, broad awareness of these threats may be our best defense.
