Just when you thought you had enough to keep you up at night, there’s another threat to add to the list of enterprise security nightmares lurking under the bed. The deepfake, once a threat only to celebrities, has now crossed over into a genuine risk to the organization.

According to Axios, deepfake audio technology has already begun wreaking havoc on the business world, as threat actors use the tech to impersonate CEOs. Symantec has reported three successful audio attacks on private companies that involved a call from the “CEO” to a senior financial officer requesting an urgent money transfer. Just imagine how an incident like this would affect your company.

Make no mistake: The threat is real, especially because we don’t yet have tools reliable enough to distinguish deepfake audio from the genuine article. So what can the enterprise do? Are there any steps we can take to mitigate the risk?

Taking Social Engineering to the Next Level

Independent cybersecurity expert Rod Soto views deepfakes as the next level of social engineering attacks.

“Deepfakes, either in video or audio form, go far beyond the simple email link, well-crafted SMS/text, or a phone call that many criminals use to abuse people’s trust and mislead them into harmful actions,” Soto said. “They can indeed extend the way social engineering techniques are employed.”

Simulated “leaked” audio may arrive sooner rather than later, possibly featuring cloned recordings of executives with entire conversations fabricated for malicious purposes. Such audio could easily affect investments and create situations in which a company’s competitors attempt to inflict reputational damage.

Soto’s primary concern when he first read about these attacks was that we are not prepared for this type of threat, and that it is only a matter of time until we start seeing significant consequences.

“Further on, as the technologies to create these audios and videos become more prevalent and easy to use, the attacks will become more widespread, affecting more than just executives, VIPs or government officials,” he said.

Soto is aware of deepfake technology that can already emulate or clone people’s voices. Even without a perfect clone, attackers can layer in other artifacts, such as airport background chatter or road noise from a moving car. Obfuscating the voice in these ways, Soto noted, can undermine a potential victim’s ability to identify the cloned voice and make the message more believable.

The Silver Linings

Unlike with zero-day attacks, one thing we have going for us is time. As deepfake audio technology stands today, threat actors need sophisticated tools to pull one over on unsuspecting victims. Moreover, the barrier to entry is higher than for the average attack kit available to anyone with cash to spend on the darknet.

Another positive is that training a very convincing deepfake audio model costs thousands of dollars in computing resources, according to CPO Magazine. However, if there’s a threat group with lots of money behind it, isn’t that cause for concern?

“There is certainly a computational cost and technology that is likely not available for the common criminal or script kiddie-type of threat actor,” said Soto. “But higher levels of organized crime or professional criminals can absolutely do it. As long as they have resources, it is possible to perform these types of attacks.”

Ultimately, the technology is still in development and, at this point, social engineering attacks can’t rely on deepfake technology alone, as trained eyes and ears can still detect the fakes. However, as Soto warned, “this may not be the case in the near future.”

How to Fend Off Deepfake Audio Attacks

Even if the audio is convincing enough to dupe most employees, all hope is not lost.

“For this type of attack to be successful, it needs to be supported by other social engineering means, such as emails or texts,” Soto explained. “As these technologies advance and become more difficult to detect, it will become necessary to create anti-deepfake protocols, which will probably involve multiple checks and verifications.”

As with similar attacks, you can train employees not to execute or follow instructions based only on audio or email messages. Organizations can strengthen enterprise security by making sure employees learn the lingo and understand cutting-edge social engineering methods. And awareness isn’t the enterprise’s only prevention strategy.

“While awareness always works, when facing these types of threats, it is necessary to develop anti-deepfake protocols that can provide users and employees with tools to detect or mitigate these types of attacks,” he said.
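To make the idea of layered checks concrete, here is a minimal sketch of what an anti-deepfake verification protocol for high-risk requests might look like. The WireRequest type, the dollar threshold and the approval rules are illustrative assumptions rather than an established standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WireRequest:
    requester: str                      # who the caller or emailer claims to be
    amount_usd: float
    channel: str                        # "voice", "email" or "sms"
    callback_verified: bool = False     # confirmed via a number already on file
    second_approver: Optional[str] = None

HIGH_RISK_THRESHOLD_USD = 10_000        # assumed policy threshold

def approve(request: WireRequest) -> bool:
    """Layered checks: no single channel is ever trusted on its own."""
    if request.amount_usd < HIGH_RISK_THRESHOLD_USD:
        return request.callback_verified
    # High-risk transfers require an out-of-band callback *and* a second human.
    return request.callback_verified and request.second_approver is not None

# An urgent "CEO" voice call, on its own, never clears the bar.
urgent_call = WireRequest(requester="CEO", amount_usd=250_000.0, channel="voice")
assert approve(urgent_call) is False
```

The key design choice is that the request’s own channel (the voice on the phone, the text of the email) carries no weight by itself; verification always routes through contact details the company already holds.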

In addition to deepfake protocols, Soto sees the need for multifactor authentication (MFA) across the corporate environment, because most attacks are combined with other social engineering techniques that can be prevented — or, at least, mitigated — with solid identity and access management (IAM) solutions.
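As a toy illustration of the MFA piece, here is a self-contained time-based one-time password (TOTP) check in the spirit of RFC 6238. A real deployment would lean on an existing IAM platform rather than hand-rolled code, and the demo secret below is a stand-in for a per-user credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# An impersonator on the phone cannot produce this code, but the real
# executive's enrolled authenticator app can.
shared_secret = "JBSWY3DPEHPK3PXP"  # demo secret; provisioned per user in practice
print(totp(shared_secret))
```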

“This will force all of us to implement new verification protocols, in addition to simply listening to a voice mail, or reading an email or text message,” he said. “Regulation will likely be needed as well to address the widespread use of these technologies that can be weaponized and, potentially, cause harm.”

While I’m not trying to paint a picture of doom and gloom here, recent deepfake audio and video trends should serve as serious warnings to the enterprise. The deepfake threat is real, but with airtight security awareness training, carefully developed protocols and advanced security tools, organizations can greatly increase their chances of defeating any deepfake-based attacks.
