Facebook has a plan to stop the spread of fake news on its site.
In a blog post Thursday, the social media platform laid out a four-step plan that will focus on the “worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.”
To start, Facebook (FB) is testing ways to make reporting a hoax easier. By clicking on the upper-right-hand corner of a post, users can report a story they think is suspicious. The company will also use its software to detect signs of fake news.
Once Facebook determines that its software or users have found a fake news story, a group of outside fact-checkers will examine it. If they determine the news is fake, Facebook will then flag the story as “disputed by third-party fact-checkers.” That notification will then be attached to the story within Facebook’s news feed.
Although users will still be able to share flagged news stories, they will see a warning that the story has been disputed before posting it to their news feed.
The program echoes many of the same tactics to tackle fake news that CEO Mark Zuckerberg laid out in November. Since the U.S. presidential election, Facebook has faced an array of criticism claiming that huge volumes of false news spread on the site and influenced voters, contributing to Donald Trump’s victory. Although Zuckerberg previously rejected those arguments, he has since said that “we [Facebook] take misinformation seriously.”