
Deepfakes are stealing the show on ‘America’s Got Talent.’ Will they soon steal a lot more too?

September 6, 2022, 5:19 PM UTC
Photo of Metaphysic's deepfake performance on "America's Got Talent"
Startup Metaphysic has made it to the finals of "America's Got Talent" with live deepfakes of Simon Cowell and the other celebrity judges. But some fear the technology will supercharge fraud.
Chris Haston/NBC—Getty Images

Deepfakes are getting scarily good. If there were any doubt about it, this season’s America’s Got Talent should serve as a wake-up call. A startup called Metaphysic has managed to advance to the talent competition’s final round, which will air next week, by producing remarkable deepfakes of Simon Cowell and the other contest judges in real time. The judges have been blown away by seeing performers who bear only the vaguest resemblance to them—a somewhat similar face and body shape—suddenly transform into their digital doppelgangers, right before their eyes.

Welcome to the world of live deepfakes. Two years ago, most deepfake software couldn’t create a convincing likeness of someone without a lot of images of the deepfake target—which is why celebrities were often used for deepfakes, since plenty of photos of them from a variety of angles are readily available. What’s more, getting the details right—particularly around the mouth, eyes, and jawline—so that the deepfake was truly convincing took a fair bit of post-production work. Finally, the A.I. models that created the deepfake couldn’t be run fast enough to produce it reliably in real time over a broadcast video. Today, none of those things are true. Believable deepfakes can be deployed on a live video transmission.
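To put that real-time constraint in concrete terms: at a broadcast-typical 30 frames per second, a live deepfake system has roughly 33 milliseconds to produce each frame. Here is a minimal, hypothetical Python sketch of that arithmetic—the dummy_face_swap function below is a trivial stand-in, not any real system’s model:

```python
import time
import numpy as np

FPS = 30
FRAME_BUDGET_S = 1.0 / FPS  # ~33 ms per frame at 30 fps

def dummy_face_swap(frame: np.ndarray) -> np.ndarray:
    """Trivial stand-in for a neural face-swap model (real models are far heavier)."""
    return np.clip(frame.astype(np.float32) * 1.05, 0, 255).astype(np.uint8)

# Simulate two seconds of 720p video with random frames.
frames = [np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8) for _ in range(60)]

start = time.perf_counter()
for frame in frames:
    dummy_face_swap(frame)
per_frame = (time.perf_counter() - start) / len(frames)

print(f"avg {per_frame * 1000:.1f} ms/frame vs. budget {FRAME_BUDGET_S * 1000:.1f} ms")
print("fast enough for live video" if per_frame <= FRAME_BUDGET_S else "too slow for live video")
```

If the model overruns that budget, frames get dropped or delayed and the illusion collapses—which is why faster rendering, as much as better image quality, is what made live deepfakes feasible.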

Now, that’s not to say all live deepfakes are as good as the Cowell deepfake on America’s Got Talent. That’s because the creative genius behind Metaphysic is none other than Chris Ume, one of the world’s top deepfake artists (he produced those viral Tom Cruise deepfakes that broke the internet 18 months ago). Ume prides himself on creating deepfakes that are as flawless as possible and sweats every wrinkle and micro-gesture he is trying to replicate. But software freely available on the internet allows someone with almost no technical skill to produce a not-half-bad live deepfake. And the technology is rapidly improving.

In fact, it’s getting better so quickly that even those who have been monitoring the field closely have been surprised by its rapid advance. “I remember someone asking me about live deepfakes about two years ago, and I said this is going to take about five years. I was wrong,” Hany Farid, a computer scientist at the University of California at Berkeley who is one of the world’s foremost experts on digital image authentication, tells me.

Tom Graham, the Australian lawyer-turned-cryptocurrency-investor who is Ume’s business partner in Metaphysic, tells me that there has been, in his view, a 20-fold increase in the quality of the deepfakes his company can create in just the past twelve months. Graham says the improvement has come partly from a method Metaphysic uses in which the deepfakes are created not from a single A.I. model but from a composite, with different A.I. models handling different parts of a person’s face (for instance, one model might work just on perfecting the eyebrow movements, another on the lips, and so on). But he also says that what makes live deepfakes like the ones Metaphysic is showcasing on America’s Got Talent possible are big advances in getting A.I. systems to render the deepfake images fast enough to work well over real-time video.
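To give a sense of how a composite of specialist models might fit together, here is a deliberately simplified Python sketch. The eyebrow_model and mouth_model functions are hypothetical placeholders—not Metaphysic’s actual models—and a real pipeline would use learned, feathered masks rather than the hard horizontal bands below:

```python
import numpy as np

# Hypothetical specialists: in a real composite system, each would be a
# separately trained neural network that excels at one facial region.
def eyebrow_model(face: np.ndarray) -> np.ndarray:
    return face  # placeholder: returns its input unchanged

def mouth_model(face: np.ndarray) -> np.ndarray:
    return face  # placeholder: returns its input unchanged

def band_mask(shape, top_frac, bottom_frac):
    """Mask selecting a horizontal band of the face (1 inside, 0 outside)."""
    h = shape[0]
    mask = np.zeros(shape[:2], dtype=np.float32)
    mask[int(top_frac * h):int(bottom_frac * h), :] = 1.0
    return mask[..., None]  # extra axis so the mask broadcasts over RGB channels

def composite_face(face: np.ndarray) -> np.ndarray:
    base = face.astype(np.float32)
    brow_mask = band_mask(face.shape, 0.20, 0.35)   # rough eyebrow region
    mouth_mask = band_mask(face.shape, 0.65, 0.85)  # rough mouth region
    # Each specialist's output replaces only its own region of the frame.
    out = (base * (1 - brow_mask - mouth_mask)
           + eyebrow_model(face).astype(np.float32) * brow_mask
           + mouth_model(face).astype(np.float32) * mouth_mask)
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
assert composite_face(frame).shape == frame.shape
```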

Graham says that part of the reason Metaphysic wanted to go on the talent show was to raise public awareness about how good deepfakes are getting and to prod people to start abandoning the old idea that “seeing is believing.”

Metaphysic is making money right now by creating deepfakes for Hollywood and the advertising industry. But it also has started a side project called Every Anyone that allows a person to upload a simple selfie and generate a realistic deepfake of themselves that is also linked to a non-fungible token (NFT). The idea, according to Graham, is that this will give people ownership over their own digital likenesses and provide them a means to control how they are used. (Because the deepfake—Graham and Ume prefer the term “hyper-real digital avatar”— is cryptographically written to an unalterable digital ledger, it will in theory provide a way for people to know they are interacting with you—or at least the authorized fake version of you—as opposed to some impostor pretending to be you.)
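The verification idea rests on standard cryptography: hash the avatar’s data, record the hash on the ledger, and later recompute the hash to check you are looking at the authorized version. Here is a minimal sketch, assuming a simple SHA-256 content hash—the actual on-chain write is omitted, and Every Anyone’s real scheme may differ:

```python
import hashlib
import json

def avatar_record(avatar_bytes: bytes, owner_id: str) -> dict:
    """Build the record an NFT's metadata could point at.

    Changing even one byte of the avatar changes the hash, so anyone with
    the on-chain record can check whether a given avatar is the authorized one.
    """
    return {"owner": owner_id, "sha256": hashlib.sha256(avatar_bytes).hexdigest()}

avatar = b"...avatar image bytes..."  # stand-in for the real image file
record = avatar_record(avatar, owner_id="jane@example.com")
print(json.dumps(record, indent=2))

# Later verification: recompute the hash and compare it to the stored record.
assert hashlib.sha256(avatar).hexdigest() == record["sha256"]
```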

Graham thinks this technology will be a key to making the metaverse a reality. He also thinks this kind of metaverse, where everyone mints their own realistic deepfake of themselves, is far better than allowing giant social media companies to create avatars for us—and own all of our biometric data in the process. “Every Anyone is a vehicle to empower individuals to become real actors inside Web3 economies,” he tells me. “In Web2, we are users and consumers and products. But in Web3, we need to be partners, not products.”

That all sounds great—but what about the way in which live deepfakes are likely to supercharge fraud? Already there are reports of people using live deepfakes on Zoom calls for both financial crime and political disinformation. Last month, a top executive of the cryptocurrency exchange Binance said that fraudsters had used a sophisticated deepfake “hologram” of him to scam several cryptocurrency projects. (Those claims have not been verified by independent experts.) In July, the FBI warned that people could use deepfakes in job interviews conducted over video conferencing software. A month earlier, several European mayors said they were initially fooled by a deepfake video call purporting to be with Ukrainian President Volodymyr Zelensky.

Graham insists that Metaphysic is dedicated to ensuring its own technology is never used for such purposes. And the company has helped create a forum called Synthetic Futures for all those involved in using deepfakes for entertainment or other legitimate purposes to discuss standards to prevent abuse of the A.I. technology.

But Berkeley’s Farid says he’s worried such standards may come too late and be too weak to prevent widespread abuse. “This is like phishing scams on steroids,” he says of live deepfakes. (For now, there are some simple methods someone can use on a video call to give themselves some reasonable assurance they are not conversing with a deepfake. I talked about some of those methods last week in this story.)

Meanwhile, actors are already worried that the technology will put them out of work—or that young actors will be pressured into selling entertainment companies the rights to use their biometric data in perpetuity. Graham admits that all these issues need to be addressed and that “there is a role for regulation” as well as standard-setting. “Everyone in this industry needs to work hard to build in safeguards,” he says.

Whether those safeguards will arrive in time to save us all from serious harm remains to be seen. Here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Walmart faces a class-action lawsuit for allegedly violating Illinois's biometric privacy law by using surveillance cameras and Clearview AI's software. The retail giant is being sued for allegedly deploying surveillance cameras and Clearview's controversial facial recognition technology in its Illinois stores in violation of the state's strict Biometric Information Privacy Act, according to Insider. The company is also named in a separate lawsuit against multiple companies, including Home Depot, Best Buy, Kohl's, and AT&T, for allegedly using Clearview AI's facial recognition illegally in Illinois. That suit alleges Clearview has violated a settlement the company reached in May with the American Civil Liberties Union, in which it said it would no longer sell its facial database to U.S. businesses. Clearview AI did not immediately respond to a request for comment. A Walmart spokesperson told Insider that the company is "not a Clearview client."

U.S. orders Nvidia and other A.I. chip makers to stop exporting to China. Washington has banned chipmakers Nvidia and AMD from exporting to China the graphics processing units (GPUs) frequently used for A.I. applications, an escalation of Washington's technological cold war with Beijing. An analysis from Reuters notes that the chip export ban is likely to be a major blow to China's A.I. research efforts. Meanwhile, The Financial Times reports that the ban is likely to be a boon to China's domestic semiconductor companies, which have been trying, without much success, to match the kind of chips for A.I. that Nvidia and others produce.

A.I.-generated art won a competition in Colorado, infuriating many human artists. The Washington Post was among the news outlets covering the controversy that ensued when Jason Allen won the Colorado State Fair's art competition with a work called “Théâtre D’opéra Spatial” that had been created by the A.I. software Midjourney. Midjourney and competing systems, such as DALL-E and Stable Diffusion, are trained on billions of images from the internet, including many historical and contemporary works of art, and can then generate images in response to text descriptions of what the image should look like. Allen was criticized for essentially cheating—and for contributing to "the death of artistry." Allen himself says he's not a trained artist, but he sees A.I. as simply another tool for making art, no different from a paintbrush or camera. 

Tesla reportedly on a hiring spree for engineers to work on its Optimus humanoid robot. The Daily Mail picked up on the story after noticing a LinkedIn post from Konstantinos Laskaris, Tesla's principal motor designer. In the post, Laskaris said the company was hiring at least nine different engineers to work on aspects of the robot in both Palo Alto, California, and Athens, Greece. Tesla CEO Elon Musk has hinted that further details of an Optimus prototype will be revealed at the company's annual "A.I. Day" at the end of this month. But Musk has a reputation for raising expectations and hype around advances in A.I. and then disappointing (when he first offered to give people a sense of what the Tesla robot would be like, he brought out a human dancer in spandex).

EYE ON A.I. TALENT

Secure messaging app Signal has hired Meredith Whittaker, a former Google manager who has become a leading critic of the company and of Big Tech generally, to be the company's president. In the role, Whittaker will help "guide strategy, communications and policy," according to The Washington Post.

U.K. insurance broker LifeSearch has hired Sam Stafford to be its chief data officer, according to trade publication Finextra. Stafford was previously the head of digital data science at Admiral Insurance.

EYE ON A.I. RESEARCH

Using A.I. to read your mind (sort of). Researchers at Meta have made a breakthrough in using A.I. to interpret the brain activity recorded through non-invasive electroencephalography (EEG) and magnetoencephalography (MEG) when people listen to speech. Based on just three seconds of recorded brain activity, the system can estimate what the study participants were hearing with up to 72.5% accuracy, according to Meta. This was much better than what had previously been possible with non-invasive methods. In the past, most of these brain-activity models had to be trained specifically for each person, but the Meta researchers were able to combine data from across the study participants to train an A.I. system that works reasonably well for any person. In the future, such research could lead to more sophisticated systems that would allow people who are unable to speak to type with just their thoughts. You can read Meta's blog on the research here.
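Broadly, a decoder like this matches an embedding of the brain activity against embeddings of candidate speech segments, and accuracy then measures how often the correct segment ranks near the top. Here is a toy Python sketch of that retrieval setup, with random stand-in embeddings in place of trained neural encoders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments, dim = 1000, 128

# Stand-in embeddings: one vector per candidate speech segment, plus one
# "decoded" vector per 3-second window of brain activity. In the real work
# both come from trained neural encoders; here the brain vectors are the
# speech vectors plus noise, so retrieval works but isn't perfect.
speech_emb = rng.normal(size=(n_segments, dim))
brain_emb = speech_emb + rng.normal(scale=2.0, size=(n_segments, dim))

def normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between every brain window and every candidate segment.
sims = normalize(brain_emb) @ normalize(speech_emb).T

# Top-10 accuracy: how often the true segment is among the 10 best matches.
ranking = (-sims).argsort(axis=1)
hits = (ranking[:, :10] == np.arange(n_segments)[:, None]).any(axis=1)
print(f"top-10 accuracy: {hits.mean():.1%}")
```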

FORTUNE ON A.I.

Amazon drivers rebel against unrealistic A.I. delivery routes that don’t account for rivers, train tracks or narrow roads—by Chloe Taylor

Artificial intelligence is being used to accurately predict women’s childbirth risks—by Tristan Bove

Commentary: Amazon and Walmart want the FAA to let them use part of your property. Here’s how drone delivery companies are coming for your airspace—by Troy Rule

U.S. ban on Nvidia and AMD A.I. chip sales will restrict China’s and Russia’s militaries—and may curb tech advances in other fields too—by Grady McGregor

Kids opening up to their robot toys may help detect mental health issues before parents can spot them—by Sophie Mellor

BRAINFOOD

How will the kids learn about the world of work? Jeff Bezos's first job was famously flipping burgers at McDonald's as a teenager. A lot of other people got their initial introduction to the world of work under the golden arches too. But increasingly, those burger-flipping jobs are being automated. This past week, The Financial Times wrote about Flippy, a burger-flipping robotic arm made by California-based Miso Robotics that is being trialed at White Castle burger joints in the U.S. and at KFC branches elsewhere in the world. Flippy can, according to its maker, work twice as fast as a human can, resulting in 30% more burger flips in a given time period. The FT story suggests that teen jobs in fast food restaurants may soon be disappearing, especially as minimum wages have crept ever higher in most developed countries, making it increasingly attractive to turn to automation.

But it's not just fast food—in many other fields, automation is taking over the tasks that are often used as entry-level training. This is true in law, where A.I.-powered software is increasingly doing the document reviews and basic due-diligence searches at which the big law firms once threw armies of fresh-from-law-school junior associates. The associates didn't relish the work, for the most part—it was the legal equivalent of burger flipping—but it was seen as the way the profession trained its young. Now, with those roles increasingly being performed faster and often more consistently by A.I., the big law firms are having to rethink how they train and promote new hires. A similar thing is happening in areas such as accounting and banking—and even journalism, where the production of basic news stories from press releases, a job once doled out to summer interns and new hires, can now be performed by A.I. software. How will professions train people without these roles in the future?
