
Facebook Is Still Figuring Out How to Police Deepfakes

October 22, 2019, 5:33 AM UTC

Facebook is still trying to figure out how to police deepfakes, videos altered with artificial intelligence to make it appear as though something happened that never did. The risk is that the clips could be used to influence elections or start wars.

But before Facebook can even think about a policy, it has a big challenge ahead in finding them among the billions of posts on its service.

“There’s a bunch of advancing technology in making deepfakes, but not a lot of good technology in identifying them right now,” Mike Schroepfer, Facebook’s chief technology officer, said at the Wall Street Journal’s Tech Live conference in Laguna Beach, Calif., on Monday night.

Last month, Facebook debuted a Deepfake Detection Challenge in partnership with academic institutions and companies including Amazon and Microsoft to research how to identify deepfakes. On Monday night, Schroepfer said that Facebook had released a set of 5,000 deepfakes that partners could study.

Facebook has struggled to police misinformation on its platform, and deepfakes could exacerbate the problem further. The company already had its first run-in with digitally altered video in May, when a clip of Speaker Nancy Pelosi, slowed to make it appear as though she was drunk, went viral. Facebook eventually downgraded the video so that it wouldn’t be widely shared on its service. A month later, an Israeli startup called Canny AI created a deepfake video of Mark Zuckerberg giving a menacing speech about the power he had over people’s data.

Though deepfakes are still a nascent problem and relatively easy to identify, they’re expected to become more convincing in the near future.

Facebook isn’t the only tech company trying to figure out how to manage deepfakes. At the conference on Monday, Twitter also said it was trying to deal with the problem and would solicit feedback in the coming weeks as it develops a policy for policing deepfakes.

“We think that a lot of people have a lot of interest in this space, and have a lot of thoughts on how we should be dealing with this content,” said Vijaya Gadde, Twitter’s legal, policy, and trust and safety lead.

Schroepfer said Facebook is still debating how it will handle deepfakes on the policy side.

“Do you treat it differently if it’s straight up misinformation?” he said, adding that misinformation is typically labeled as such but not removed from the service. “There’s not a separate policy yet on how to treat it just because it’s an A.I. created thing.”