Is Facebook really training its content moderators to protect white men over black children?
Facebook, which topped 2 billion global users yesterday, has been working to fend off critics who say the social networking giant doesn’t do enough to police offensive content and online harassment on its site. But now those efforts may be backfiring following a new report that claims to show the awkward criteria the company uses to choose which content to censor.
A new report on Wednesday from ProPublica, which reviewed internal Facebook documents, sheds some light on the convoluted process through which the social media company determines which allegedly offensive posts are removed and which accounts get suspended for hate speech. ProPublica reports that the internal guidelines Facebook uses to train its content moderators differentiate between groups such as white men, who fall under a so-called “protected category,” and black children, who belong to a “subset category”: groups of people whom Facebook reportedly would not protect from online hate speech.
The report went on to explain the reasoning behind Facebook’s seemingly confusing moderation policies, which aim to censor slurs and other attacks against “protected categories” based on race, sex, gender identity, religion, national origin, sexual orientation, and serious disability or disease. Facebook posts containing slurs based on those factors would be subject to removal. Other factors—including age, appearance, occupation, social class, and political affiliation—are lumped into unprotected categories on the theory that they are less central to a person’s identity. Under Facebook’s guidelines, slurs against “white men” (a group defined by race and sex, both protected) would therefore be categorized as hate speech, while offensive posts aimed at “black children” (a group defined by race and age, the latter unprotected) would not be.
The reasoning would seem to be that a group qualifies for protection only when every trait that defines it is protected; mixing in a single unprotected trait, such as age, strips the protection away. It’s a solution that, perhaps, makes more sense as an algorithm than it does when applied to real people and actual offensive posts.
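To see why the rule reads more like an algorithm than a policy, here is a minimal sketch of the logic as ProPublica describes it. The category lists come from the report; the function name and the code itself are a hypothetical illustration, not Facebook’s actual implementation.

```python
# Characteristics Facebook reportedly treats as "protected."
PROTECTED = {"race", "sex", "gender identity", "religion",
             "national origin", "sexual orientation", "serious disability"}
# Characteristics reportedly considered less central to identity.
UNPROTECTED = {"age", "appearance", "occupation",
               "social class", "political affiliation"}

def is_protected_group(traits):
    """Per the reported rule, a group is shielded from slurs only if
    every trait that defines it is a protected characteristic."""
    return all(t in PROTECTED for t in traits)

print(is_protected_group({"race", "sex"}))  # "white men": True
print(is_protected_group({"race", "age"}))  # "black children": False
```

Framed this way, the outcome the report highlights falls out mechanically: one unprotected trait anywhere in the definition flips the result, regardless of how many protected traits accompany it.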
Still, even if the logic behind Facebook’s policies becomes somewhat less cringeworthy upon further explanation, the company will undoubtedly have to deal with the backlash stemming from the ProPublica report. Among the leaked materials is a regrettable company training slide that asks moderators which group Facebook protects, presenting the options as “female drivers,” “black children,” and “white men,” the last inexplicably represented by a photo of the pop ensemble the Backstreet Boys. Under the company’s reported guidelines, white men is the correct answer.
At the very least, the company knows that its policies are not perfect. “The policies do not always lead to perfect outcomes,” Monika Bickert, Facebook’s head of global policy management, told ProPublica. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”
Bickert offered a similar response last month, after the Guardian published more leaked documents showing examples of “disturbing” content that Facebook’s moderation rules would still allow to remain on its site. At the time, Bickert noted that moderating content on a massive scale “is complex, challenging, and essential,” but she also admitted that the company can “get things wrong, and we’re constantly working to make sure that happens less often.”
Facebook certainly isn’t the only digital company to face criticism for its handling of offensive content and online harassment, with Twitter among those also frequently coming under fire. In early May, Facebook hired an additional 3,000 content moderators (bringing the total to 7,500), and the company said it is deleting roughly 66,000 posts it identifies as hate speech each week as part of stepped-up efforts to combat online harassment along with offensive and violent content. Unfortunately for Facebook, the fallout from the ProPublica report is the latest stain on those efforts.