‘This Is an Arms Race.’ Why Facebook Isn’t Winning Its War on Harmful Content—Yet

Facebook says its latest updates to News Feed, Messenger, and Instagram will help it battle misinformation and harmful content. While the new features incrementally turn the dial, don’t expect them to end fake news entirely.

Facebook detailed the new rollouts at a press briefing on Wednesday at its Menlo Park headquarters. To combat bad actors and fake news, the social network is introducing several new measures. It's starting by limiting the reach of groups that repeatedly spread misinformation. With a new metric called "Click-Gap," Facebook says it's also getting better at identifying clickbait. The social network is expanding the amount of content that will be fact-checked by the Associated Press. In addition, Messenger users will be able to get a verified badge, helping to combat impersonation, and Instagram posts deemed inappropriate will become harder to find.
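Facebook hasn't disclosed exactly how Click-Gap is computed, but the intuition—a domain that draws far more clicks from Facebook than links from the rest of the web looks like a clickbait farm—can be sketched in a few lines. Everything below, from the function name to the example numbers, is an illustrative assumption, not Facebook's actual formula.

```python
# Illustrative sketch of the "Click-Gap" intuition. Facebook has not
# published its formula; this ratio and the numbers are hypothetical.

def click_gap(facebook_clicks: int, inbound_web_links: int) -> float:
    """Score grows when a domain is disproportionately popular on
    Facebook relative to its footprint on the wider web."""
    # +1 keeps domains with zero inbound links from dividing by zero.
    return facebook_clicks / (inbound_web_links + 1)

# Hypothetical clickbait domain: heavy Facebook traffic, few organic links.
print(click_gap(facebook_clicks=500_000, inbound_web_links=40))      # large gap
# Established news site: Facebook traffic matches its web footprint.
print(click_gap(facebook_clicks=500_000, inbound_web_links=90_000))  # small gap
```

A domain with a large gap would be down-ranked in News Feed rather than removed, consistent with the company's approach of reducing reach instead of deleting content outright.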

These measures are meant to clean up the darker corners of Facebook's social media empire. In recent years, the company has faced heavy scrutiny over how it handles misinformation and harmful content. It hosted a live stream of the New Zealand mass shooting in March. It facilitated the spread of misinformation and hate speech that stoked violence against Muslims in Sri Lanka in March 2018. And it exposed user information to political consulting firm Cambridge Analytica, which used the data to influence the 2016 U.S. presidential election.

While Facebook's latest efforts aim to prevent future incidents like these, the company admits it still faces numerous challenges in fighting the spread of fake news.

“We’re up against adversaries,” said Guy Rosen, Facebook’s vice president of integrity. “This is an arms race.”

But a few things handicap Facebook in its efforts to improve the integrity of its network.

First, the company takes a hands-off approach to removing misinformation. Facebook wants to give users a place to express themselves and share information openly, regardless of its merit, so it sets a relatively high bar for what it considers harmful enough to remove.

Posts that violate its community standards—those containing threats of physical harm, hate speech, violence, or sexual activity—are removed. Beyond that, it will take down posts that spread misinformation about elections or increase the risk of imminent violence in specific regions of the world.

But the company doesn’t remove information from Facebook just because it’s false, says Tessa Lyons, head of news feed integrity. As a result, posts containing scientifically debunked anti-vaccination claims remain on the platform—though their reach is diminished, reports WIRED.

To proactively identify potentially harmful posts among the massive amount of content posted to Facebook every day, the company uses artificial intelligence. But Facebook's A.I. can be tricked—another handicap in its arms race.

When harmful content is published, A.I. quickly identifies and removes copies—often before anyone has seen them. But if the content is slightly modified, it can sometimes go undetected by the system. That was the case with the New Zealand shooting: of the 1.5 million copies of the video users tried to upload, 300,000 were not blocked at the point of upload.
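To see why a small edit can defeat automated matching, consider a toy example that treats detection as exact byte-level hashing. Facebook's real systems use more robust audio and video fingerprinting, so this is only an analogy for the failure mode, not the actual technique:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint: any change to the bytes changes the hash.
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist seeded with the fingerprint of a known video.
banned = {fingerprint(b"original harmful video bytes")}

exact_copy = b"original harmful video bytes"
modified_copy = b"original harmful video bytes" + b"\x00"  # one byte appended

print(fingerprint(exact_copy) in banned)     # True  -> blocked at upload
print(fingerprint(modified_copy) in banned)  # False -> slips past the filter
```

Re-encoding a video, cropping it, or overlaying a watermark has an analogous effect on more sophisticated fingerprints: push the content far enough from the stored signature and the match fails, which is how hundreds of thousands of altered copies got through.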

This shortfall is why Facebook also relies heavily on content reviewers—human contract workers who manually review problematic posts. Vendors like Cognizant, Telus International, and Accenture supply Facebook with a workforce of 15,000 content moderators at 20 locations around the world.

But another handicap for Facebook is the sheer volume of this content. Given the shortcomings of A.I. detection, contractors are sometimes left reviewing the same slightly modified graphic content over and over again, shrinking the ground they can cover and repeatedly exposing them to potentially traumatic material.

Wednesday's briefing showed that while Facebook is attempting to shore up the integrity of its network, it still has many battles to win before it can come close to declaring victory over misinformation.