April 21, 2022

Deepfakes, Digital Twins, and the Authentication Challenge

Deepfakes are essentially unauthorized digital twins created by malicious actors. The AI behind the two phenomena has gotten so good that the human eye can't tell the difference between them. So how will we separate the legitimate wheat from the evil chaff in the metaverse?

One of the technologists exploring the relationship between deepfakes and digital twins is Neil Sahota, the chief innovation officer at the University of California Irvine School of Law and the CEO of ASCILabs. Sahota was recently a guest on Bernard Marr's podcast, where Marr discussed his own digital twin, which he has trained to answer emails and interact with people online.

“If he’s not available, you can still interact with his digital twin, which to some degree would mimic and say and share what he would normally do,” Sahota says. “He says his digital twin really upped his bandwidth.”

There is clearly an upside to digital twins, especially for famous folks like Marr and singer Taylor Swift. “She’s really big about engaging with her fans and tries to be active with them,” Sahota says. “It’s obviously tough for her, but if she would invest in a digital twin, she could increase her bandwidth in terms of her fan engagement.”

There is plenty of footage of Swift on the Internet, which unfortunately opens her up to the dark side of digital twins: deepfakes.

A deepfake of Tom Cruise (Image courtesy DeepTomCruise)

Deep Faking

The age of deepfakes began around 2017, when researchers at the University of Washington released a video of former President Barack Obama. By training a deep neural net on existing video of Obama speaking, the researchers created an AI model that allowed them to generate new videos in which Obama said whatever they wanted him to say.

Since then, use of the open-source technology has proliferated, and people have created all sorts of deepfakes. There are TikTok videos that purport to show Tom Cruise doing regular-person things out in the world: playing rock-paper-scissors on Sunset Boulevard, swinging a golf club, or strumming a guitar. These deepfakes are relatively harmless gags, and even TikTok says the DeepTomCruise account doesn't violate its terms and conditions.

But deepfakes are also becoming popular among criminal entities and foreign governments looking to sway public opinion by any means necessary. The technology has been co-opted for so-called "revenge porn," in which individuals release fabricated videos that appear to feature their former partners. And in March, a deepfake video of Ukrainian President Volodymyr Zelensky asking his people to "lay down your weapons and go back to your families" bore all the earmarks of a Russian military disinformation campaign.

What's to stop a malicious user from creating an unauthorized digital twin, a deepfake, and passing it off as the real deal? Not much, Sahota says.

“This is a problem we have to jump out in front of,” Sahota says. “The last thing you want is you’re in this metaverse and you’re wondering ‘Is the person I’m dealing with, is that really the person, or is this a deepfake?’”

Deepfake Detection

According to Sahota, humans increasingly can’t tell the difference between deepfakes and reality.

“That’s the big problem with deepfakes is they’ve gotten so good,” Sahota says. “AI’s gotten so good at understanding not just how someone speaks, but their body language and motions. It’s hard to tell sometimes, is that really the person or is that an AI deepfake?”

Tech companies have tried to tackle the problem in several ways. In September 2020, Microsoft launched a video authenticator tool that can analyze a photo or a video to determine whether it has been artificially manipulated. That tool, which was trained on the public FaceForensics++ dataset and tested on the DeepFake Detection Challenge Dataset, works by "detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye," the company said in a blog post.
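Microsoft has not published the authenticator's internals, but the general recipe for this kind of detector is well established: train a binary classifier on face crops from labeled real and manipulated video. Below is a hypothetical minimal sketch of that recipe in PyTorch, not Microsoft's tool; the backbone, preprocessing, and function names are illustrative assumptions.

```python
# Hypothetical sketch of frame-level deepfake classification, not
# Microsoft's Video Authenticator: fine-tune a pretrained CNN to emit
# a real-vs-manipulated confidence score for each face crop.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Start from an ImageNet-pretrained backbone and swap the classifier
# head for a single logit (manipulated vs. real).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(face_crop) -> float:
    """Return P(manipulated) for a single PIL face crop."""
    model.eval()
    with torch.no_grad():
        x = preprocess(face_crop).unsqueeze(0)   # shape [1, 3, 224, 224]
        return torch.sigmoid(model(x)).item()
```

In practice, such a model would be fine-tuned on labeled real/fake pairs like those in FaceForensics++, with per-frame scores aggregated into a per-video confidence, which is roughly the kind of score Microsoft describes its tool reporting.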

Is this Bernard Marr or his digital twin? (Image courtesy Bernard Marr)

A poorly constructed deepfake, such as the Zelensky video, is still relatively easy to spot. But spotting more advanced deepfakes requires something more powerful, such as another AI program, Sahota says.

“Unfortunately, it’s an arms race,” he says. “As deepfakes get better, we have to create better systems to detect deepfakes. As good as deepfakes have gotten, there’s probably some subtleties there that we as humans can’t pick up, but a machine could. And as we do that, they’re going to improve their deepfakes and we’ll improve our detection. It becomes a never-ending cycle unfortunately.”

Last year, Facebook announced a partnership with Michigan State University to help detect deepfakes using a reverse-engineering method that relies on "uncovering the unique patterns behind the AI model used to generate a single deepfake image," the researchers wrote. The US Army has also backed a University of Southern California group that is using a Successive Subspace Learning (SSL) technique to improve signal transformation.
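The fingerprinting idea is easier to grasp with a toy example. Generative models tend to leave consistent high-frequency residue in their outputs, so averaging residuals over many images from one generator yields a "fingerprint" that a new image can be correlated against. The sketch below is an illustrative simplification, not the Facebook/MSU code; all function names are assumptions.

```python
# Toy illustration of generator "fingerprinting" (not the Facebook/MSU
# method's actual code). Images here are grayscale 2-D float arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def residual(img: np.ndarray) -> np.ndarray:
    """High-pass residual: the image minus a 3x3 box blur of itself."""
    return img - uniform_filter(img, size=3)

def fingerprint(images: list[np.ndarray]) -> np.ndarray:
    """Average the residuals of many outputs from the same generator."""
    return np.mean([residual(im) for im in images], axis=0)

def attribute(query: np.ndarray, known: dict[str, np.ndarray]) -> str:
    """Attribute a query image to the best-correlated known fingerprint."""
    r = residual(query).ravel()
    scores = {name: np.corrcoef(r, f.ravel())[0, 1]
              for name, f in known.items()}
    return max(scores, key=scores.get)
```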

However, these days, even the good-guy AI can't reliably detect the deepfakes created by bad-guy AI. "That's the real issue now," Sahota says. "Some of these things look so realistic that these subtleties that we would normally pick up, you can't find them anymore."

Mitigating the Fake

There's a lot of research being done and a lot of ideas being tossed around to solve this problem, Sahota says. Much of it hinges on better authentication mechanisms for validating legitimate content. Anything without the stamp of approval would be deemed suspect.

For example, some folks are looking to leverage the blockchain to prove the validity of a given digital twin or piece of content. While the idea sounds promising, Sahota says it probably won't work at this point in time.

The rise of deepfakes and the advent of synthetic data used to train neural nets are closely intertwined

"In theory we can" use the blockchain, Sahota says. "In practicality, blockchain isn't quite mature enough as a technology yet. It still doesn't scale that well, and it's still got some security issues of its own. It's great for simple transactions, but more complex stuff? It needs a bit more maturity."
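The idea itself is straightforward to sketch: record a cryptographic hash of a piece of content on an append-only ledger at publication time, then check later copies against that record. In the illustrative sketch below, a plain Python list stands in for a real distributed chain; nothing here reflects any specific blockchain platform.

```python
# Illustrative sketch of blockchain-style content anchoring. The
# "ledger" is an in-memory list standing in for a real, distributed,
# append-only chain.
import hashlib
import time

ledger: list[dict] = []   # stand-in for an actual blockchain

def anchor(content: bytes, publisher: str) -> str:
    """Record the content's hash on the ledger at publication time."""
    digest = hashlib.sha256(content).hexdigest()
    ledger.append({"hash": digest, "publisher": publisher,
                   "timestamp": time.time()})
    return digest

def verify(content: bytes) -> dict | None:
    """Return the anchoring record if this exact content was anchored."""
    digest = hashlib.sha256(content).hexdigest()
    return next((rec for rec in ledger if rec["hash"] == digest), None)
```

Note that a match only proves a file is bit-identical to what was anchored; anchoring every piece of content at metaverse scale runs straight into the throughput limits Sahota describes.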

Back in 2020, Microsoft launched a new feature in Azure that allows content producers to add digital hashes and certificates to a piece of content, which then travel with the content as metadata. Microsoft also debuted a browser-based reader that checks the certificates and matches the hashes to let a user know if the content is legitimate.
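Microsoft has not published that pipeline's internals either, but the hash-and-certificate pattern it describes is standard public-key signing. Here is a minimal sketch using the Python cryptography package, as one hypothetical way the producer and reader sides could fit together:

```python
# Minimal sketch of the sign-then-verify pattern behind content
# provenance metadata (illustrative; not Microsoft's Azure feature).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Producer side: hash the content, then sign the hash. In a real
# system the public key would be distributed via a certificate.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"...video bytes..."
digest = hashlib.sha256(content).digest()
signature = private_key.sign(digest)   # travels with the content as metadata

# Reader side: recompute the hash and check the signature against the
# producer's certified public key.
def is_authentic(content: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False
```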

In the future, people in the metaverse may have a "ticket" that contains some special encoding, much as today's mobile tickets have constantly changing barcodes or other features that are tough to replicate. Advanced encryption is essentially uncrackable by hackers today, but it may not be practical for day-to-day interactions in the metaverse.
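Those rotating barcodes are typically built on time-based one-time passwords (TOTP, RFC 6238), which derive a short-lived code from a shared secret and the current time. A minimal sketch of that mechanism, as one way such a metaverse "ticket" could rotate:

```python
# Minimal TOTP (RFC 6238) sketch: the rotating-code mechanism behind
# mobile tickets, shown as one way a metaverse "ticket" might work.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short-lived code from a shared secret and the clock."""
    counter = int(time.time() // interval)           # changes every 30s
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"twin-owner-and-verifier-secret"   # illustrative secret
print(totp(shared_secret))   # e.g. "492039"; both sides compute the same
```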

“The question is how big does that string have to be to make it hard to hack into and replicate, and are people going to be good about actually taking these extra steps?” Sahota says. “It’s going to be a big change maybe psychologically for most of us, that every time now we interact with someone or something, we have to authenticate with each other.”

For now, the best approach for organizations fighting deepfakes is to detect them and deal with them as fast as possible. Government agencies and large corporations are building war rooms to quickly counter deepfakes when they pop up in the wild.

“You need a crack team, and you have AI bots monitoring the news channels and newsfeeds to see if something comes out, so at least you get alerted quickly,” he says.
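As a toy illustration of that monitoring loop, the sketch below polls an RSS feed for watchwords; the feed URL, keyword list, and alerting are illustrative assumptions, not anyone's production war-room stack.

```python
# Toy newsfeed monitor in the spirit of Sahota's "war room" bots:
# poll an RSS feed and alert on watchwords. The feed URL and the
# keyword list are hypothetical.
import time
import feedparser  # pip install feedparser

WATCHWORDS = {"deepfake", "fake video", "impersonation"}
FEED_URL = "https://example.com/news/rss"   # hypothetical feed

def scan_once(seen: set[str]) -> None:
    """Check the feed once, alerting on unseen entries that match."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link in seen:
            continue
        seen.add(entry.link)
        text = (entry.title + " " + entry.get("summary", "")).lower()
        if any(word in text for word in WATCHWORDS):
            print(f"ALERT: {entry.title} -> {entry.link}")

if __name__ == "__main__":
    seen: set[str] = set()
    while True:
        scan_once(seen)
        time.sleep(300)   # poll every five minutes
```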

Related Items:

U.S. Army Employs Machine Learning for Deepfake Detection

New AI Model From Facebook, Michigan State Detects & Attributes Deepfakes

Faking It: Dealing with Counterfeits in the Age of AI