Like a zombie horde, they keep coming. First, there were the pixelated likenesses of actresses Gal Gadot and Scarlett Johansson brushstroked into dodgy user-generated adult films. Then a disembodied digital Barack Obama and Donald Trump appeared in clips they never agreed to, saying things the real Obama and Trump never said. And in June, a machine-learning-generated version of Facebook CEO Mark Zuckerberg making scary comments about privacy went viral.
Welcome to the age of deepfakes, an emerging threat powered by artificial intelligence that puts words in the mouths of people in video or audio clips, conjures convincing headshots from a sea of selfies, and even puts individuals in places they’ve never been, interacting with people they’ve never met. Before long, it’s feared, the ranks of deepfake deceptions will include politicians behaving badly, news anchors delivering fallacious reports, and impostor executives trying to bluff their way past employees so they can commit fraud.
So far, women have been the biggest victims of deepfakes. In late June, the app DeepNude shut down amid controversy after journalists disclosed that users could feed the app ordinary photos of women and have it spit out naked images of them.
There’s concern the fallout from the technology will go beyond the creepy, especially if it falls into the hands of rogue actors looking to disrupt elections and tank the shares of public companies. The tension is boiling over. Lawmakers want to ban deepfakes. Big Tech believes its engineers will develop a fix. Meanwhile, the researchers, academics, and digital rights activists on the front lines bemoan that they’re ill-equipped to fight this battle.
Sam Gregory, program director at the New York City–based human rights organization Witness, points out that it’s far easier to create a deepfake than it is to spot one. Soon, you won’t even need to be a techie to make a deepfake.
Witness has been training media companies and activists in how to identify A.I.-generated “synthetic media,” such as deepfakes and facial reenactments—the recording and transferring of facial expressions from one person to another—that could undermine trust in their work. He and others have begun to call on tech companies to do more to police these fabrications. “As companies release products that enable creation, they should release products that enable detection as well,” says Gregory.
Software maker Adobe Systems has found itself on both sides of this debate. In June, computer scientists at Adobe Research demonstrated a powerful text-to-speech machine-learning algorithm that can literally put words in the mouth of a person on film. A company spokesperson notes that Adobe researchers are also working to help unmask fakes. For example, Adobe recently released research that could help detect images manipulated by Photoshop, its popular image-editing software. But as researchers and digital rights activists note, the open-source community, made up of amateur and independent programmers, is far more organized around making deepfakes persuasive and thus harder to spot.
For now, bad actors have the advantage.
This is one reason that lawmakers are stepping into the fray. The House Intelligence Committee convened a hearing in June about the national security challenges of artificial intelligence, manipulated media, and deepfakes. The same day, Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act, the first attempt by Congress to criminalize synthetic media used to deceive, defraud, or destabilize the public. State lawmakers in Virginia, Texas, and New York, meanwhile, have introduced or enacted their own legislation in what’s expected to be a torrent of laws aimed at outmaneuvering the fakes.
Jack Clark, policy director at OpenAI, an A.I. think tank, testified on Capitol Hill in June about the deepfakes problem. He tells Fortune that it’s time “industry, academia, and government worked together” to find a solution. The public and private sectors, Clark notes, have joined forces in the past on developing standards for cellular networks and for regulating public utilities. “I expect A.I. is important enough we’ll need similar things here,” he says.
In an effort to avoid such government intervention, tech companies are trying to show that they can handle the problem without clamping down too hard on free speech. YouTube has removed a number of deepfakes from its service after users flagged them. And recently, Facebook’s Zuckerberg said that he’s considering a new policy for policing deepfakes on his site, enforced by a mix of human moderators and automation.
The underlying technology behind most deepfakes and A.I.-powered synthetic media is the generative adversarial network, or GAN, invented in 2014 by the Montreal-based Ph.D. student Ian Goodfellow, who later worked at Google before joining Apple this year.
Until his invention, machine-learning algorithms had been relatively good at recognizing images from vast quantities of training data—but that’s about all. With the help of newer technology, like more powerful computer chips, GANs have become a game changer. They enable algorithms to not just classify but also create pictures. Show a GAN an image of a person standing in profile, and it can produce entirely manufactured images of that person—from the front or the back.
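The adversarial tug-of-war Goodfellow described can be sketched in a few lines of code. Below is a minimal illustration (PyTorch is our assumption; the article names no framework): a generator learns to mimic a toy two-dimensional distribution while a discriminator learns to flag its output as fake. Deepfake systems run the same loop, just on images and video at vastly larger scale.

```python
# Minimal GAN sketch: a generator vs. a discriminator on toy 2-D data.
# This is an illustrative example, not any particular deepfake model.
import torch
import torch.nn as nn

def real_samples(n):
    # Stand-in "real data": points drawn from a Gaussian centered at (2, -1).
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)))  # generated points should cluster near (2, -1)
```

The design point is that the two networks sharpen each other: every improvement in the discriminator's ability to spot fakes becomes a training signal for the generator, which helps explain why detection tends to lag generation.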
Researchers immediately heralded the GAN as a way for computers to fill in the gaps in our understanding of everything around us, to map, say, parts of distant galaxies that telescopes can’t penetrate. Other programmers saw it as a way to make super-convincing celebrity porn videos.
In late 2017, a Reddit user named “Deepfakes” did just that, uploading to the site adult videos featuring the uncanny likenesses of famous Hollywood actresses. The deepfake phenomenon exploded from there.
Soon after, Giorgio Patrini, a machine-learning Ph.D. who became fascinated—and then concerned—with how GAN models were being exploited, left his research post and cofounded Deeptrace Labs, a Dutch startup that says it’s building “the antivirus for deepfakes.” Clients include media companies that want to give reporters tools to spot manipulations of their work or to vet the authenticity of user-generated video clips. Patrini says that in recent months, corporate brand-reputation managers have contacted his firm, as have network security specialists.
“There’s particular concern about deepfakes and the potential for it to be used in fraud and social engineering attempts,” says Patrini.
Malwarebytes Labs of Santa Clara, Calif., recently warned of something similar, saying in a June report on A.I.-powered threats that “deepfakes could be used in incredibly convincing spear-phishing attacks that users would be hard-pressed to identify as false.” The report continues, “Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse.”
In the world of deepfakes, you don’t need to be famous to be cast in a leading role.
Correction: The original version of this article incorrectly characterized technology produced by Adobe that detects images manipulated by Photoshop. What Adobe unveiled is early-stage research to do that, not a commercial product, or “tool.”
This article originally appeared in the August 2019 issue of Fortune.