Researchers at Facebook and Michigan State University say they have made a leap forward in detecting deepfakes, the realistic-looking fake images and videos created by artificial intelligence that many fear could be used in disinformation campaigns.
Social media companies, including Facebook, as well as academic researchers, have been trying for several years to find ways to automatically spot deepfakes. But the fake photos and videos are often extremely difficult for ordinary viewers to detect, although trained digital-forensics experts can frequently spot them by meticulously inspecting images for subtle anomalies.
For many researchers, the obvious solution has been to use A.I. itself to catch these A.I.-generated fakes. But to date, reliable deepfake-spotting software has remained elusive: in a contest Facebook ran in 2020 to find the best deepfake detector, the winning software was accurate less than two-thirds of the time.
But now, in a blog post and a paper posted online, the research team from Facebook and Michigan State says it has created a system that, at 70% accuracy on a key benchmark test, is significantly better than any previous system that examines whole still images or video frames. The new system was only 1% less accurate than the best previous method overall, but that earlier top performer examined images pixel by pixel rather than frame by frame, and as a result consumed far more computing power than the system the Facebook and Michigan State scientists created.
Critically, the researchers said, unlike many rival deepfake detectors, their approach should be able to uncover deepfakes created using generation methods the detection algorithm never encountered during its training.
The method the scientists used to build their detector is somewhat counterintuitive. Rather than asking an algorithm simply to determine whether an image is real or a deepfake, they essentially assume the image is a deepfake and ask their algorithm to reverse engineer key aspects of the A.I. software used to create it. For instance, the algorithm must predict the number of layers in the neural network (a kind of machine-learning system loosely based on the human brain) that was used to create the image. It also looks at how these layers are arranged and at the mathematical formula, known as the loss function, that the image-creating A.I. used to optimize its output.
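To make the idea concrete, here is a minimal sketch, in PyTorch, of what such a hyperparameter-predicting network might look like. Everything below, from the encoder to the two prediction heads, is an illustrative assumption for this newsletter, not the researchers’ actual architecture:

```python
# Sketch of "model parsing": given an image, predict properties of the
# generative model that (hypothetically) produced it. All names, sizes,
# and heads are illustrative stand-ins, not the Facebook/MSU code.
import torch
import torch.nn as nn

class ModelParser(nn.Module):
    def __init__(self, num_loss_types=5):
        super().__init__()
        # Shared image encoder: a small CNN backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One head regresses the generator's layer count; the other
        # classifies which loss function the generator was trained with.
        self.layer_count_head = nn.Linear(64, 1)
        self.loss_type_head = nn.Linear(64, num_loss_types)

    def forward(self, images):
        features = self.encoder(images)
        return self.layer_count_head(features), self.loss_type_head(features)

parser = ModelParser()
suspect_batch = torch.randn(4, 3, 128, 128)  # stand-in for suspect images
layer_count, loss_type_logits = parser(suspect_batch)
```

In a real pipeline, a network like this would be trained on images from many known generators, so that the prediction heads learn to associate subtle image artifacts with the design choices of the software that produced them.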
By learning to make accurate predictions about the technology used, the software can essentially construct a “fingerprint” of that system. Once the algorithm knows this fingerprint, another piece of software layered on top of its output can try to match it against the fingerprints of other videos or images. In this way, the software can attribute a deepfake to a particular known generation method or, at the very least, tell researchers that a deepfake of, say, Tom Cruise and another of Meryl Streep were likely created using the same software, even if the detector has never before seen a deepfake made with that software.
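The matching step itself can be as simple as comparing two estimated fingerprints with a similarity measure. The sketch below, with a made-up fingerprint size and threshold, shows one plausible way to do it; it is an assumption for illustration, not the actual system:

```python
# Sketch of fingerprint attribution: compare the "fingerprints" estimated
# from two suspect images with cosine similarity. The fingerprint size and
# the threshold are hypothetical stand-ins, not the actual system's values.
import torch
import torch.nn.functional as F

def same_generator(fingerprint_a, fingerprint_b, threshold=0.9):
    """Guess whether two fingerprints came from the same generative model."""
    similarity = F.cosine_similarity(fingerprint_a, fingerprint_b, dim=-1)
    return similarity.item() >= threshold

fp_cruise = torch.randn(64)  # fingerprint estimated from one deepfake
fp_streep = torch.randn(64)  # fingerprint estimated from another
print(same_generator(fp_cruise, fp_streep))
```

Cosine similarity and the 0.9 cutoff are arbitrary choices here; the real attribution step could use any learned or hand-tuned distance between fingerprints.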
The approach is not unlike the way the FBI once used subtle differences in how mechanical typewriters rendered text to determine which brand and model of typewriter had been used to compose a ransom note.
What’s more, certain fingerprints allow the system to predict that a video is, in fact, not a deepfake at all, but genuine.
Tal Hassner, a Facebook researcher who worked on the project, says that, at first, he doubted it would be possible to teach an A.I. system to make accurate predictions about the inner workings of the technology used to create an image just by examining the image itself. “It wasn’t apparent to me why this would work,” Hassner says. “But it turns out, to my personal surprise, that, yeah, you can say something about the design of the model used to create this image just by looking at the image.”
Hassner says the research took inspiration from prior work by a Michigan State computer scientist who collaborated on the project, Xiaoming Liu. Liu had studied the subtle differences between images taken with different brands and kinds of digital cameras, and built machine-learning systems that could analyze images and determine, with a high degree of accuracy, which camera was used to take a particular picture.
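That earlier camera work boils down to ordinary supervised classification: show a network photos labeled with the camera that took them, and let it learn the telltale artifacts. A toy training step might look like the following, with a hypothetical dataset and network standing in for Liu’s real ones:

```python
# Sketch of the camera-identification idea: train a classifier to predict
# which camera model took a photo. The dataset, labels, and network here
# are hypothetical placeholders, not Liu's actual systems.
import torch
import torch.nn as nn

NUM_CAMERA_MODELS = 10
classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_CAMERA_MODELS),
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a stand-in batch of photos and camera labels.
photos = torch.randn(8, 3, 96, 96)
labels = torch.randint(0, NUM_CAMERA_MODELS, (8,))
loss = criterion(classifier(photos), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```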
The Facebook scientist said that, for the moment, the deepfake detector and analyzer the team built is purely a research project, with no immediate plans to use the system across the company’s social media platforms to detect potential deepfakes. He said he was interested in seeing how the system performs on images or videos in which only part of the frame is A.I.-generated and the rest is real, or in which extensive postproduction work has been applied to the deepfake using more traditional digital video-editing tools.
But in the past, these kinds of innovations have often found their way into Facebook’s production systems in relatively short order. The company has not revealed whether it currently uses automated software to try to detect deepfakes uploaded to either Facebook or Instagram.
Microsoft unveiled a commercial deepfake-detection tool, called Video Authenticator, in September of last year. It analyzes video frame by frame and provides a score indicating how confident the software is that each frame is genuine or A.I.-generated. The company made the tool available to organizations monitoring the 2020 U.S. elections, including media outlets and political campaigns. Microsoft said the tool had a “high degree of accuracy” but has not disclosed exact figures on how it has performed against benchmark tests.
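Frame-by-frame scoring of this general kind is straightforward to sketch. The code below is a generic illustration with a dummy scoring model; it is not Microsoft’s tool or API, which the company has not published:

```python
# Illustrative frame-by-frame scoring of a video, in the spirit of tools
# like Video Authenticator. The scoring model here is a hypothetical
# placeholder, not Microsoft's software.
import torch

def score_video(frames, frame_scorer):
    """Return a per-frame confidence that each frame is A.I.-generated."""
    return [torch.sigmoid(frame_scorer(f.unsqueeze(0))).item() for f in frames]

# A dummy scorer standing in for a trained detector.
frame_scorer = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1)
)
frames = torch.randn(10, 3, 64, 64)  # ten stand-in video frames
per_frame = score_video(frames, frame_scorer)
print(max(per_frame), sum(per_frame) / len(per_frame))
```

Reporting a score per frame, rather than a single verdict per video, lets a human reviewer zero in on the specific moments the detector finds suspicious.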