As part of its efforts to fight fake news, Facebook has begun assigning users a reputation score, designed to predict how trustworthy they are.
The previously undisclosed program is the latest attempt to automate the detection of threats, such as foreign operatives who try to sway public opinion by posting false or malicious “news” stories that can influence elections and other important events.
Users are ranked on a scale of zero to one, though that’s not meant to be a definitive indicator, Tessa Lyons, the product manager in charge of fighting misinformation at Facebook, tells The Washington Post. It’s one of several tools the site is using to determine risk. It’s also monitoring how often you flag content and which publishers are most trusted by users.
Lyons, though, did not detail the criteria used to determine a user’s score, an omission that some privacy experts called worrisome.
Facebook finds itself in a bit of a Catch-22 with the program. While it has an obligation to fight the spread of fake news, recent controversies over the company’s handling of user data could make the lack of transparency in how it ranks users an issue of its own.