Facebook plans to ban some deepfake videos, as it seeks to show regulators that it’s cracking down on misleading content ahead of this year’s presidential election.
Deepfake videos are an emerging phenomenon created using artificial intelligence. In some cases, the technology can alter a video to make it seem like someone said something they didn’t—potentially starting a war or influencing the outcome of an election.
“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” Monika Bickert, Facebook’s global vice president of policy management, writes in a blog post.
Enforcing a ban
Facebook has been grappling with how to handle deepfakes since last year, after two high-profile examples went viral on its social network. One video showing House Speaker Nancy Pelosi was slightly altered to make it seem as though she was drunk. Despite complaints, Facebook decided against removing the video, which, technically, wasn’t a deepfake because it was created without A.I.
Then in June, a pair of artists tried to test Facebook’s policy of allowing deepfakes by posting on Facebook an altered video showing CEO Mark Zuckerberg bragging about a plan to rule the world. The company didn’t remove the video since it didn’t violate its community guidelines. However, because the words coming out of Zuckerberg’s mouth were out of context and digitally altered, Facebook limited the video’s distribution in its news feed.
Zuckerberg has long struggled with how Facebook should handle free speech, including misinformation. More often than not, the company has opted to do nothing, instead arguing that it shouldn’t be an arbiter of free speech.
But Facebook’s new deepfakes ban, announced Monday night, is somewhat of a departure. It calls for removing videos that “would likely mislead someone into thinking that a subject of the video said words that they did not say,” and videos that are the result of A.I. merging, superimposing, or replacing imagery in a video, according to Bickert.
The ban was first reported by The Washington Post.
The new policy puts a modest limit on a recent and controversial Facebook rule that allows politicians to lie in ads. Now, they will still be able to lie, but they won’t be able to post deepfakes as part of those ads.
“We do not allow content that violates our community standards in ads (of which this manipulated media policy is a part), whether posted by a politician or anyone else,” a Facebook spokesperson tells Fortune.
Some videos are so amateurish that they seem more like “cheap fakes,” such as the one that featured Zuckerberg. But rapid advancements in technology make this a “cat and mouse game” when it comes to detection.
Nico Fischbach, global chief technology officer at Forcepoint, a cybersecurity company, described Facebook’s new policy as “an important move” at a time when deepfakes are “taking on a life of their own” and spreading across social networks.
“In general, people link it to the U.S. election, but at the end of the day this [deepfakes] is going to be used a lot for social engineering,” says Fischbach. “It’s not a U.S. problem. It’s a global problem.”
For that, Facebook has been working with outside academics, researchers, and organizations to get better and faster at detecting deepfakes. Last year, Facebook launched the Deepfake Detection Challenge, aimed at creating open source tools that anyone can use to help sniff out manipulated media. The challenge is set to end in March and includes $10 million in grant funding.
Some deepfakes are allowed
The ban doesn’t cover every video that has been manipulated. The new policy allows videos that are parody or satire, or that have been edited to change the order of words. The Pelosi video, for instance, might still not meet the threshold for removal under the new guidelines. However, it’s unclear what standards would be used to determine whether a video was intended to be satire.
Bickert says some videos that don’t meet the standard for removal will still be reviewed by independent third-party fact-checkers. Those videos will be flagged as false to anyone who tries to share them.
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem,” Bickert says. “By leaving them up and labelling them as false, we’re providing people with important information and context.”