How Facebook Is Revamping Its Fight to End Online Hate Speech
Facebook is ramping up its anti-hate speech campaign this week, expanding a program it launched earlier this year, the company announced Wednesday.
The tech titan’s Online Civil Courage Initiative (OCCI) has been working with at least 84 activist groups and non-governmental organizations (NGOs) in Germany, France, and the UK since January. Now Facebook (FB) wants to expand its OCCI network to more regions throughout the world, providing more anti-hate activist groups with ad credits, marketing resources, and strategic support to empower their members to fight back against racist trolls and violent extremist rhetoric on the internet.
“This has really been about engaging with those that are challenging hate speech online, finding out the things they need the most to continue their work and designing the OCCI around these needs,” OCCI program manager Erin Saltman said in a video promoting the initiative.
“We’re developing a larger delivery model… We’ll look to grow an OCCI network of NGOs, private sector organizations, and government departments from across different regions so they can share experience, intelligence and skills across the sectors.”
Battling racist and terrorist propaganda online has become a national security issue for European nations as well as the U.S. since white nationalist groups like Germany’s PEGIDA and terrorist groups like Al-Qaeda and ISIS began aggressively exploiting social media.
Government officials have been working with the tech community to develop ways to limit hate speech on their sites. In Germany, Facebook, Google (GOOG), and Twitter agreed with the government to remove hate speech from their sites within 24 hours, much to the chagrin of web troll extraordinaire Milo Yiannopoulos, who maligned Facebook’s efforts earlier this year as “outright Orwellian.”
“[Facebook] is effectively slandering its own users saying that their perfectly reasonable points of view constitute ‘hate speech’ and that they’re not going to be allowed on Facebook,” Yiannopoulos wrote on Breitbart in February.
Facebook says it does what it can to avoid outright censorship.
“Censorship is not effective,” Saltman told the Wall Street Journal yesterday. “Conversations would start on mainstream platforms and migrate to less regulated, encrypted platforms.”
That appears to be what has happened with Islamic extremists on Twitter (TWTR). The microblogging site banned thousands of users spreading pro-terrorist messages in November after the terror attacks in Paris. The intelligence community has reported that much of that extremist propaganda has migrated to other channels like Telegram, the encrypted messaging service similar to WhatsApp.
Since mid-2015, Twitter has suspended at least 360,000 accounts for promoting terrorism, and daily suspensions are up more than 80% since last year, according to a recent company blog post. Twitter’s global public policy team has also expanded its partnerships with organizations working to counter violent extremism online.
“Our efforts continue to drive meaningful results, including a significant shift in this type of activity off of Twitter,” the company said on its blog.