
Grant Gross
Senior Writer

Deepfakes: Coming soon to a company near you

Feature
14 Jun 2024 | 8 mins
Cybercrime | Phishing

AI-powered deepfake technology is rapidly advancing, and it’s only a matter of time before cybercriminals find a business model they can use, some security experts say.


Deepfakes, the bane of celebrities and the fear of politicians, are poised to take off in the corporate world, as cybercriminals see them as a new way to make easy money, some security experts say.

CIOs, CISOs, and other corporate leaders need to be ready for AI-assisted attacks that use realistic, but faked, voice calls, video clips, and live videoconferencing calls, says Michael Hasse, a longtime cybersecurity and IT consultant.

Deepfakes involving voice calls are nothing new. Hasse recalls giving a presentation on the topic to asset management firms back in 2015, after some companies in the industry had fallen victim to voice-based scams.

Since 2015, however, the AI-based technologies behind deepfakes have not only improved by orders of magnitude, but they have also become widely available, he notes. The main factor holding back widespread use of deepfakes by cybercriminals is the absence of a packaged, easy-to-use tool for creating faked audio and video, Hasse says.

But such a deepfakes package is coming soon, Hasse predicts, and it will likely start circulating in the criminal underground before the US elections in November, with political campaigns as the first targets.

“Every single piece that’s needed is there,” Hasse says. “The only thing that has kept us from seeing it just flooding everything is that it takes time for the bad guys to incorporate stuff like this.”

Deepfakes as credit risks

It’s not just cybersecurity experts who are warning of the corporate risk from deepfakes. In May, credit ratings firm Moody’s issued a warning about deepfakes, saying they create new credit risks. The Moody’s report details a handful of attempted deepfake scams, including faked video calls, that have targeted the financial sector in the past two years.

“Financial losses attributed to deepfake frauds are rapidly emerging as a prominent threat from this advancing technology,” the report says. “Deepfakes can be used to create fraudulent videos of bank officials, company executives, or government functionaries to direct financial transactions or carry out payment frauds.”

Deepfake scams are already happening, but the size of the problem is difficult to estimate, says Jake Williams, a faculty member at IANS Research, a cybersecurity research and advisory firm. In some cases, the scams go unreported to save the victim’s reputation, and in other cases, victims of other types of scams may blame deepfakes as a convenient cover for their actions, he says.

At the same time, any technological defenses against deepfakes will be cumbersome — imagine a deepfakes detection tool listening in on every phone call made by employees — and they may have a limited shelf life, with AI technologies rapidly advancing.

“It’s hard to measure because we don’t have effective detection tools, nor will we,” says Williams, a former hacker at the US National Security Agency. “It’s going to be difficult for us to keep track of over time.”

While some hackers may not yet have access to high-quality deepfake technology, faking voices or images on low-bandwidth video calls has become trivial, Williams adds. Unless a Zoom meeting is running at HD quality or better, a face swap may be good enough to fool most people.

You’re not my admin assistant

Kevin Surace, chairman of multifactor authentication vendor Token, can provide firsthand testimony to the potential of voice-based deepfakes. He recently received an email from the administrative assistant of one of Token’s investors, but he immediately identified the email as an obvious phishing scam.

Surace called the administrative assistant to warn her that phishing emails were being sent from her account, and the voice on the other end sounded exactly like the employee, he says. But when the voice started responding oddly during the conversation, he asked about her coworkers, and it didn’t recognize their names.

It turns out that the phone number included in the phishing email was one digit off from the administrative assistant’s real number. The fake phone number stopped working a couple of hours after Surace detected the problem.

Criminals who want to fake a voice now need only a few seconds of a recording, and technology to create realistic live video deepfakes is getting better and better, says Surace, known as the father of the virtual assistant for his work on Portico at General Magic in the 1990s.

“People are going to say, ‘Oh, this can’t be happening,’” he says. “It has now happened to a few people, and if it happened to three people, it’s going to be 300, it’s going to be 3,000, and so on.”

So far, deepfakes targeting the corporate world have focused on tricking employees into transferring money to the criminals. But Surace can see deepfakes used for blackmail schemes or stock manipulation as well. If the blackmail amount is low enough, CEOs or other targeted people may decide to pay the fee instead of trying to explain that the person on the compromising video isn’t really them.

Like Hasse, Surace sees a deepfakes wave coming soon. He suspects that many scams like the one targeting him are already being attempted.

“People don’t want to tell anyone it’s happening,” he says. “You pay 10 grand, and you just write it off and say, ‘It’s the last thing I want to tell the press about.’”

Widespread use of deepfakes may be close, but there are a few impediments remaining, beyond the lack of an easy-to-use deepfakes package, Hasse says. Convincing deepfakes can require a level of computing power that some cybercriminals do not have.

In addition, deepfake scams tend to work as targeted attacks, such as whale phishing, and it takes some time to research the quarry.

Potential victims, however, are helping cybercriminals by providing a wealth of information about their lives on social media. “The bad guys really don’t have a super-streamlined way to collect victim data and generate the deepfakes in a sufficiently automated fashion yet, but it’s coming,” Hasse says.

No easy fixes

With more deepfake scams likely coming to the corporate world, the question is how to deal with this growing threat. Because deepfake technology keeps getting better, easy answers don’t exist.

Hasse believes awareness and employee training will be important. Employees and executives need to be aware of potential deepfake scams, he says, and when a company insider asks them to do something suspicious, even on a video call, they should check back with that person to verify the request. Making another phone call or confirming the request face to face is an old-school form of multifactor authentication, but it works, he says.

When the asset management industry first began to fall victim to voice scams nearly a decade ago, advisors took their know-your-customer approaches to new heights. Calls with clients began with conversation about their families, their hobbies, and other personal details to help verify their identities.

Another defense for company executives and other critical employees may be to intentionally lie on social media to throw off deepfake attacks. “My guess is at some point there will be certain roles within companies where that is actually required,” he says. “If you’re in a sufficiently sensitive role in a sufficiently large corporation, there may be some kind of a level of scrutiny on the social media where a social media czar watches all the accounts.”

CIOs, CISOs, and other company executives need to be aware of the threat and realize they could be targeted, Surace adds.

His company sells a wearable multifactor authentication device based on fingerprints, and he believes next-generation MFA products can help defend against deepfake scams. Next-gen MFA needs to verify identities quickly and securely, such as every time an employee logs in to a Zoom meeting, he says.

IANS’ Williams isn’t sure new technologies or employee training will be effective fixes. Some people will resist using a new authentication device, and cybersecurity training has been around for a long time, with limited success, he notes.

Instead, companies need to put processes in place, such as requiring a secure application when employees transfer large sums of money, he says. Using email or a voice call to request a huge money transfer isn’t secure, but some organizations still allow it.

For centuries, people have used voices and images to authenticate each other, but that time is ending, he says.

“The reality is that using somebody’s voice or image likeness to authenticate that person has always been, if you look at it through a security perspective, inadequate,” Williams adds. “Technology is catching up with our substandard or ineffective processes.”