Facebook is working on a policy to police deepfakes, an emerging threat on the social network that involves users posting videos altered by artificial intelligence to spread misinformation.
But CEO Mark Zuckerberg said it’s a complicated matter because the company wants to avoid deleting videos in which the person featured merely feels they were misrepresented, like in a network news clip.
“We need to be very careful,” Zuckerberg said at the Aspen Ideas Festival on Wednesday in Aspen, Colo. “Across our services there are more than 100 billion pieces of content a day flowing through the systems, and we want to make sure we can define things in a way that’s precise.”
Over the past couple of years, deepfakes have drawn growing concern for their potential to cause serious damage by putting words in people’s mouths. Although they look a bit amateurish today, they’re expected to improve in quality and become indistinguishable from real clips.
The fear is that such videos could spark a war, sway an election, or be used for extortion.
Facebook is currently discussing the matter with artificial intelligence experts as it maps out how it will police deepfakes. The company will likely create a new policy that separates deepfakes from misinformation.
Last month, Facebook made the controversial decision not to remove a doctored video of House Speaker Nancy Pelosi that shows her slurring her words. Zuckerberg said the company flagged it as misinformation to slow its spread on the social network. But it took Facebook more than a day to react, allowing the video to reach millions of users. Zuckerberg called the slow response an “execution mistake,” but he added that he doesn’t want Facebook to remove content that is merely “factually incorrect.”
“I do not think we want to go so far towards saying that a private company prevents you from saying something it thinks is factually incorrect,” Zuckerberg said. “That to me just feels like it’s too far.”
Earlier this month, Facebook’s policies were put to the test again after two artists at a technology startup posted a deepfake video on Instagram of Zuckerberg delivering a disturbing message.
“Imagine this for a second, one man with total control of billions of people’s stolen data—all their secrets, their lives, their futures,” Zuckerberg appears to say in the deepfake. “Whoever controls the data controls the future.”
Though the video was allowed to remain on Instagram, Zuckerberg said he understands the need for a policy to address deepfakes that could cause more serious harm. For example, a deepfake video could show President Donald Trump declaring war on a foreign country like North Korea, which might respond with a nuclear attack.
“The policies continue to evolve as technology develops,” Zuckerberg said. “But I do think you want to approach this with caution and by consulting experts and not hastily or unilaterally.”