Thanks to advances in artificial intelligence, misinformation and social-media hoaxes are about to get a lot weirder. For the past couple of years, deepfake technology, which uses machine learning to manipulate videos, has been producing increasingly realistic results.
How realistic? Freelance journalist Mikael Thalen plumbed the depths of deepfake videos online and found one that featured a mashup of two actors, Jennifer Lawrence and Steve Buscemi. On Twitter, Thalen shared the video, which shows Lawrence speaking at the Golden Globes, only with Buscemi’s face.
Unlike earlier deepfake videos that went viral—such as comedian Jordan Peele warning about deepfakes last April with a video of President Obama saying things he never said in reality—the Lawrence-Buscemi mashup is more eerily realistic, showing how far deepfake technology has come in less than a year.
Deepfakes, a portmanteau of “deep learning” and “fake,” often involve swapping faces, a tactic used to synthesize celebrity-porn and revenge-porn videos. Some AI researchers are finding ways to detect or even outsmart deepfake videos, but others worry that in an era of severe political polarization, they could become powerful tools for spreading disinformation, such as doctored videos of politicians speaking words they never actually uttered.
While the Buscemi-Lawrence video may seem benign and even amusingly surreal, the rapid advance in the technology its creators have demonstrated underscores other concerns. Last fall, a bipartisan group of U.S. lawmakers expressed concern that deepfakes could “undermine public trust,” while others this week warned that “malicious foreign actors” could use them to interfere in the 2020 U.S. elections.