Alarmed By Deepfake Videos, Facebook Creates Contest to Detect Them

September 5, 2019, 5:00 PM UTC

Alarmed by the rise of realistic fake videos created with artificial intelligence, Facebook is funding a competition to find ways to automatically detect these counterfeits.

The competition, called the Deepfake Detection Challenge, will be overseen by the Partnership on AI, a non-profit group whose members include many leading technology companies, non-governmental organizations, and research bodies. Microsoft will also be working on the project, along with researchers from several well-known universities, according to a blog post Facebook published Thursday.

While counterfeit videos have existed for decades, the use of machine learning algorithms to generate such fakes automatically is a recent phenomenon. The algorithmic technique used to create these so-called deepfakes was invented in 2014, and the first deepfakes surfaced in 2017, initially used to superimpose celebrities' heads onto performers in adult film videos.

Since then, deepfakes have been used, among other things, to create a public service campaign for malaria prevention featuring David Beckham speaking languages he does not, in fact, speak; to make satirical videos of numerous politicians; and, nefariously, to create harassing pornographic videos of women.

Although there has been little evidence so far of deepfakes being used for political disinformation, Adam Schiff, the California Democrat who heads the House Intelligence Committee, has called the fake videos a “nightmarish” threat to the 2020 U.S. presidential election and called on social media companies to take action against them.

Facebook said it would devote $10 million in initial funding to set up the competition and cover research grants and prizes associated with it. Some of the money will go towards creating a database of fake videos that will be used to test the fake video detection algorithms. These videos will use paid actors who have consented to have their images manipulated by artificial intelligence. “No Facebook user data will be used in this data set,” Facebook said.

Mike Schroepfer, Facebook’s chief technology officer, said in a press briefing earlier this week that while deepfakes have not yet posed a serious problem on Facebook’s social network, he is concerned about their potential use in misinformation campaigns.

With Facebook having been battered over the past three years by revelations that its service has been used to spread disinformation, attempt to manipulate elections and, in some cases, instigate political and ethnic violence, he said the lesson of the past few years was that Facebook needed “to spend a lot more time worrying about what may happen, even if it hasn’t happened yet.” He said he was hoping to “get ahead of the problem” of deepfakes through this competition.

“We’re trying to catalyze the broader academic community to generate better ways of detecting manipulated media,” Schroepfer said. He said Facebook’s own A.I. research scientists and engineers would participate in the competition, but would not be eligible for any of the prizes associated with it.

Schroepfer said that there was a large amount of academic research in the past two years into techniques to make it easier to generate deepfakes—including ways to reduce the amount of real footage needed to create them, in some cases down to a single still image—and that there had not been enough emphasis on finding ways to detect the fakes.

“This is a really hard problem,” Schroepfer said. “I don’t think there is an easy solution out there but if we focus on it, I think we can do better than we have been doing.”

In creating a common dataset that different research teams can use to train and test deepfake detectors and offering prizes for those whose software performs best on this benchmark, Facebook is copying a method that helped lead to rapid progress in other areas of A.I., particularly computer vision.

Schroepfer said that even if researchers could not build “a perfect detector,” his goal was to make it harder and more expensive for people to create deepfakes that could elude detection. “That reduces the chance it becomes a problem on our platform,” he said.

Hany Farid, a computer scientist at the University of California, Berkeley, who built the algorithms that help prevent child pornography images from being reposted across the Internet, said he was supporting the new deepfake competition. “In order to move from the information age to the knowledge age, we must do better in distinguishing real from fake, reward trusted content over untrusted content, and educate the next generation to be better digital citizens,” he said in a statement.

Scientists from Cornell University, M.I.T., Oxford University, the University of Maryland, College Park, and the University at Albany-SUNY are also supporting the detection challenge.

A limited dataset of fake videos will be tested on a pilot basis at an academic conference in October, Facebook said. A larger dataset will be released to a wider set of researchers at the Conference on Neural Information Processing Systems (NeurIPS), one of the top A.I. research conferences, in December.
