By Ellen McGirt
June 28, 2017

Two announcements from the world of social media: one terrific, the other troubling.

First, the good news. Twitter has hired Candi Castleberry Singleton as its new VP of diversity and inclusion, filling a spot that’s been vacant since February. The company took its time, and it paid off: By all accounts, Singleton is a treasure. She’s the founder of the Dignity & Respect Campaign, a behavior-based leadership initiative that helps individuals and organizations become more culturally aware. She also has experience helping large corporations drive diversity, having held major roles at Sun Microsystems and Motorola.

And now for the not so great news. This morning, the investigative media outlet ProPublica published an in-depth examination of internal documents that shed light on the algorithms Facebook uses to distinguish between hate speech and legitimate political speech. Here’s a tidbit:

Over the past decade, the company has developed hundreds of rules, drawing elaborate distinctions between what should and shouldn’t be allowed, in an effort to make the site a safe place for its nearly 2 billion users. The issue of how Facebook monitors this content has become increasingly prominent in recent months, with the rise of “fake news” — fabricated stories that circulated on Facebook like “Pope Francis Shocks the World, Endorses Donald Trump For President, Releases Statement” — and growing concern that terrorists are using social media for recruitment.

While Facebook was credited during the 2010-2011 “Arab Spring” with facilitating uprisings against authoritarian regimes, the documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.

The entire report is a must-read, but I predict the conversations consuming your feeds today will center on a single document – the one that trains content reviewers in the fine art of applying the company’s hate algorithm.

From the report:

The slide identifies three groups: female drivers, black children and white men. It asks: Which group is protected from hate speech? The correct answer: white men.

Facebook’s rationale, which the company is prepared to defend, is that content should be censured when attacks are directed at “protected categories” – based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease. The first two examples are considered “subsets” of protected categories because only one of the two identifiers is protected: female is protected, drivers are not; black is protected, children are not. In the global world of Facebook, where everything should be fair, attacks on white men are deleted because both traits are considered protected categories.
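As described, the rule reduces to a simple conjunction: a group is shielded only if every identifier in it belongs to a protected category. A minimal sketch of that logic, with an illustrative (not Facebook’s actual) set of protected traits:

```python
# Hypothetical sketch of the "subset" rule described above.
# The category list and function names are illustrative only.
PROTECTED = {
    "white", "black",   # race
    "female", "male",   # sex
    # ...plus gender identity, religious affiliation, national origin,
    # ethnicity, sexual orientation, serious disability/disease
}

def is_protected_group(identifiers):
    """A group is protected only if *every* identifier is a protected trait."""
    return all(trait in PROTECTED for trait in identifiers)

print(is_protected_group(["female", "drivers"]))  # False: "drivers" is not protected
print(is_protected_group(["black", "children"]))  # False: age is not protected
print(is_protected_group(["white", "male"]))      # True: both traits are protected
```

The counterintuitive outcome falls straight out of the `all()` conjunction: adding any unprotected modifier (an occupation, an age group) strips a group of protection, while “white men” keeps it.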

This explains why every activist you know who writes passionately, knowledgeably, and responsibly about white supremacy gets routinely blocked on Facebook.

Danielle Citron, a law professor and expert on information privacy at the University of Maryland, told ProPublica that this color-blind approach will “protect the people who least need it and take it away from those who really need it.”

It certainly raises some big questions. Facebook just reached two billion users, an extraordinary achievement. But it also means that it has the weight of more than one-third of the world on its shoulders. Facebook’s still-imperfect attempts to police speech around the world should be everybody’s business.
