Tech

Big Social Media Companies Team Up to Combat Terrorist Content

By Kia Kokalitcheva
December 5, 2016, 8:46 PM ET

Facebook, Twitter, YouTube, and Microsoft are teaming up to combat terrorist content on their services, the companies said on Monday.

To do that, they will create a shared database with the unique digital fingerprints—called “hashes”—of terrorism images and videos that violate their content policies. By drawing on this shared database of identifiers, the companies believe they can more efficiently flag, and potentially remove, offending content when users attempt to publish it on their services.

The database won’t contain personally identifiable user information, however.
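To illustrate the mechanism at a very high level, here is a minimal sketch of how such a shared hash lookup might work. The article does not specify the fingerprinting scheme the companies use (production systems typically rely on perceptual hashes so that re-encoded or slightly altered copies still match), so the SHA-256 stand-in, the shared_hash_db set, and the check_upload helper below are illustrative assumptions rather than any company's actual implementation.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    # Illustrative stand-in: a plain SHA-256 digest. Real systems generally
    # use perceptual hashes so that slightly altered copies still match.
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical shared database: only fingerprints of content that some
# participant has already flagged -- no user information is stored.
shared_hash_db = set()

def report_content(media_bytes: bytes) -> None:
    """A participating company contributes the fingerprint of flagged content."""
    shared_hash_db.add(fingerprint(media_bytes))

def check_upload(media_bytes: bytes) -> bool:
    """Return True if an upload matches a previously flagged fingerprint.

    Per the article, a match only flags the upload for review; each company
    still decides on its own whether to remove the content.
    """
    return fingerprint(media_bytes) in shared_hash_db

# Example: one service reports an image; another service's upload check matches.
report_content(b"<bytes of a flagged image>")
print(check_upload(b"<bytes of a flagged image>"))  # True -> queue for review
```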

“We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online,” Facebook said in a blog post about the new effort.

Each company will continue to make its own decisions about removing content or banning users, and matching content won't be removed automatically.

Social media companies have long faced the challenge of keeping terrorist content off their networks, and they’ve mostly relied on their users to report it. This concerted effort could make it easier for the companies to crack down on pro-terrorist messages, which have been on the rise over the last few years.

This initiative also comes at a time when social networks are grappling with other community issues such as harassment and abuse, as well as the recent proliferation of “fake news,” or articles containing false information and claims. Facebook in particular has come under fire for its role in the dissemination of these fake news articles in the lead-up to the U.S. presidential election in November. While Facebook initially denied any responsibility, the company has since said it’s looking into ways it could potentially curb the spread of such content, though it’s resisting monitoring the content itself.

While Facebook, along with other social media companies, could take a similar approach to squash fake news articles, Facebook would still have to grapple with determining which articles are “fake,” which is exactly what it wants to avoid.
