Just when you thought you had enough to keep you up at night, there’s another threat to add to the list of enterprise security nightmares lurking under the bed. The deepfake, once a threat only to celebrities, has now crossed over into the realm of enterprise risk.

According to Axios, deepfake audio technology has already begun wreaking havoc on the business world, as threat actors use the tech to impersonate CEOs. Symantec has reported three successful audio attacks on private companies that involved a call from the “CEO” to a senior financial officer requesting an urgent money transfer. Just imagine how an incident like this would affect your company.

Make no mistake: the threat is real, especially because we don’t yet have tools reliable enough to distinguish deepfake audio from the genuine article. So what can the enterprise do? Are there steps we can take to mitigate the risk?

Taking Social Engineering to the Next Level

Independent cybersecurity expert Rod Soto views deepfakes as the next level of social engineering attacks.

“Deepfakes, either in video or audio form, go far beyond the simple email link, well-crafted SMS/text, or a phone call that many criminals use to abuse people’s trust and mislead them into harmful actions,” Soto said. “They can indeed extend the way social engineering techniques are employed.”

Simulated “leaked” audio may arrive sooner rather than later, possibly featuring cloned recordings of executives whose conversations have been altered for malicious purposes. A fabricated recording could easily move investments or hand a company’s competitors a weapon for inflicting reputational damage.

Soto’s primary concern when he first read about these attacks was that we are not prepared for this type of threat, and that it is only a matter of time until we start seeing significant consequences.

“Further on, as the technologies to create these audios and videos become more prevalent and easy to use, the attacks will become more widespread, affecting more than just executives, VIPs or government officials,” he said.

Soto is also aware of deepfake technology that can convincingly emulate or clone people’s voices. Even without perfect technology, hackers can layer other artifacts onto a cloned voice, such as airport background noise or the sounds of a moving car. Obfuscating the voice in these ways, Soto noted, makes it harder for a potential victim to recognize the voice as cloned and more likely that they will believe the message.
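To see why that tactic works, consider a minimal, hypothetical sketch (not from Soto or the Axios report) of mixing background noise into a voice track at a chosen signal-to-noise ratio. Synthetic NumPy arrays stand in for real recordings here, and lower SNR values bury more of a clone’s telltale flaws:

```python
# Illustrative only: how added background noise can mask imperfections in a
# cloned voice. Synthetic arrays stand in for real audio files.
import numpy as np

def mix_noise(voice: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Blend `noise` into `voice` so the result has roughly `snr_db` dB SNR."""
    noise = np.resize(noise, voice.shape)        # loop/trim noise to match length
    voice_power = np.mean(voice ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale noise so voice_power / scaled_noise_power == 10^(snr_db / 10)
    scale = np.sqrt(voice_power / (noise_power * 10 ** (snr_db / 10)))
    mixed = voice + scale * noise
    return mixed / np.max(np.abs(mixed))         # normalize to avoid clipping

rng = np.random.default_rng(0)
voice = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16_000))  # stand-in "voice"
noise = rng.normal(size=16_000)                              # stand-in "airport din"
masked = mix_noise(voice, noise, snr_db=10.0)                # noisy but intelligible
```

The point is not the math but the effect: at modest SNR levels, small synthesis glitches that might give a clone away sit beneath the noise floor, which is exactly the cover Soto describes.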

The Silver Linings

Unlike with zero-day attacks, one thing we have going for us is time. As deepfake audio technology stands today, threat actors need sophisticated tools to pull one over on unsuspecting victims, and the barrier to entry is far higher than for the average attack kit available to anyone with cash to spend on the darknet.

Another positive is that training a very convincing deepfake audio model costs thousands of dollars in computing resources, according to CPO Magazine. However, if there’s a threat group with lots of money behind it, isn’t that cause for concern?

“There is certainly a computational cost and technology that is likely not available for the common criminal or script kiddie-type of threat actor,” said Soto. “But higher levels of organized crime or professional criminals can absolutely do it. As long as they have resources, it is possible to perform these types of attacks.”

Ultimately, the technology is still in development and, at this point, social engineering attacks can’t rely on deepfakes alone, as trained eyes and ears can still detect them. However, as Soto warned, “this may not be the case in the near future.”

How to Fend Off Deepfake Audio Attacks

Even if the audio is convincing enough to dupe most employees, not all hope is lost.

“For this type of attack to be successful, it needs to be supported by other social engineering means, such as emails or texts,” Soto explained. “As these technologies advance and become more difficult to detect, it will become necessary to create anti-deepfake protocols, which will probably involve multiple checks and verifications.”
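What might those “multiple checks and verifications” look like in practice? Here is a minimal, hypothetical sketch of such a protocol; the check names, threshold and workflow are invented for illustration, not drawn from Soto:

```python
# Hypothetical sketch of a multi-check "anti-deepfake protocol": a voice or
# email request alone is never sufficient to move money. All names and
# thresholds below are invented for illustration.
from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000.00
REQUIRED_CHECKS = {"callback_on_known_number", "second_approver", "ticket_in_erp"}

@dataclass
class TransferRequest:
    requester: str            # who the caller claims to be
    amount: float
    channel: str              # "voice", "email", "sms", ...
    checks_passed: set = field(default_factory=set)

def record_check(req: TransferRequest, check: str) -> None:
    """Mark an independent, out-of-band verification step as completed."""
    if check in REQUIRED_CHECKS:
        req.checks_passed.add(check)

def may_execute(req: TransferRequest) -> bool:
    """Low-value requests need one check; high-value requests need all of them."""
    if req.amount >= HIGH_RISK_THRESHOLD:
        return req.checks_passed == REQUIRED_CHECKS
    return len(req.checks_passed) >= 1

req = TransferRequest(requester="CEO", amount=250_000.00, channel="voice")
record_check(req, "callback_on_known_number")   # call a number on file, not caller ID
assert not may_execute(req)                     # still blocked: two checks missing
```

The design choice that matters is that no single channel, including a convincing voice on the phone, can authorize a high-risk action on its own.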

As with similar attacks, you can train employees not to execute or follow instructions based solely on an audio or email message. It is crucial for organizations to strengthen enterprise security by ensuring that employees learn the lingo and understand cutting-edge social engineering methods. And awareness isn’t the enterprise’s only prevention strategy.

“While awareness always works, when facing these types of threats, it is necessary to develop anti-deepfake protocols that can provide users and employees with tools to detect or mitigate these types of attacks,” he said.

In addition to deepfake protocols, Soto sees the need for multifactor authentication (MFA) across the corporate environment, because most attacks are combined with other social engineering techniques that can be prevented — or, at least, mitigated — with solid identity and access management (IAM) solutions.

“This will force all of us to implement new verification protocols, in addition to simply listening to a voice mail, or reading an email or text message,” he said. “Regulation will likely be needed as well to address the widespread use of these technologies that can be weaponized and, potentially, cause harm.”
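For context on the MFA layer Soto mentions, here is a compact sketch of the time-based one-time password (TOTP, RFC 6238) check behind many authenticator apps, using only the Python standard library. The secret is a throwaway example value, and a production deployment should use a vetted library rather than hand-rolled crypto code:

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute the one-time code for the given moment (default: now)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32, submitted):
    """Accept the current 30-second window plus one on either side for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + drift * 30), submitted)
               for drift in (-1, 0, 1))

SECRET = "JBSWY3DPEHPK3PXP"                      # example base32 secret, not a real one
print(verify(SECRET, totp(SECRET)))              # -> True
```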

While I’m not trying to paint a picture of doom and gloom here, recent deepfake audio and video trends should serve as serious warnings to the enterprise. The deepfake threat is real, but with airtight security awareness training, carefully developed protocols and advanced security tools, organizations can greatly increase their chances of defeating any deepfake-based attacks.
