Facebook Is Finally Starting to Take Some Responsibility for Fake News
Facebook CEO Mark Zuckerberg initially scoffed at the idea that hoaxes, misinformation, and “fake news” were a problem on the social network, or that they may have influenced the election of Donald Trump. But now, the company finally seems to be taking some responsibility for the role it plays in spreading that kind of content—and it’s about time.
In a blog post, Facebook announced a series of steps aimed at stamping out hoaxes and fake news, including a) making it easier for users to report fakes, b) partnering with third-party verification organizations such as Politifact and Snopes to alert readers when a story’s accuracy is disputed, and c) cracking down on sites that pretend to be legitimate news outlets.
These moves are not going to solve the problem entirely, of course—in part because the term “fake news” covers a host of different kinds of content, from outright hoaxes and wholly manufactured stories to reports from reputable outlets that make poorly supported claims or haven’t been independently verified. But they are an important first step toward rooting out what Facebook (FB) calls “the worst of the worst.”
At first, it seemed as though Facebook wasn’t even prepared to admit that fake news was a problem at all, or that the site had any responsibility to fix it. When the issue of its influence on the election first arose, Zuckerberg said that this idea was “crazy,” and argued that fake news accounted for no more than 1% of the content on the social network.
From within Facebook itself, however, came reports that some staffers believed otherwise—a number of employees told the New York Times they were concerned about the impact that the company’s distribution of fake anti-Clinton stories from a network of “alt-right” sites may have had on the election’s outcome.
Over the past few weeks, Zuckerberg began to moderate his initial position somewhat, saying the social network cared about the quality of the information that users were getting, and that the company was looking into taking a number of steps similar to the ones announced on Thursday.
Part of the reason Facebook has been so hesitant to tackle the fake-news problem is probably that doing so risks dragging the company even further into the quagmire over whether it is a media company. The social network likes to see itself as an impartial distributor of content, not a media outlet that makes editorial decisions about what is true and what isn’t.
At the same time, however, there’s no question that Facebook—regardless of what it chooses to call itself—plays a huge role in distributing the news, and has become one of the main sources of news for millions of users. Whether it wants to admit it or not, that imposes some responsibility to ensure that what it is giving users is accurate.
Because of its size and influence, Facebook also has the ability to cut off the oxygen to some of these professional fake-news sites, by denying them not just revenue but the more important currency of ranking high in the news feed.
That is a dangerous weapon in many ways, which is why some have been leery about empowering Facebook to make these kinds of decisions. And there’s no question that Facebook’s moves will be questioned and attacked by critics of various political persuasions, who will argue that the third-party fact-checkers it is relying on are biased (an accusation both Politifact and Snopes have already faced).
The reality, however, is that the social network and its algorithm are already making decisions every day about who gets ranked highly and who doesn’t, what content gets seen and what goes unseen. At least now, some of those efforts will theoretically be directed towards improving the accuracy of what’s in the news feed, instead of just removing photos of breast-feeding mothers.
Welcome to the media business, Facebook.