Instagram Turns to Artificial Intelligence to Fight Spam and Cruel Comments

June 29, 2017, 6:13 PM UTC

Instagram believes artificial intelligence could help fight trolls and junk messages.

The photo-sharing service debuted two new tools Thursday that it said would help reduce the amount of spam it receives as well as block offensive comments that appear on posts and live video.

People can choose to turn the automatic comment filter on or off. If an out-of-line comment slips past the filter on a post, people can still report it to Instagram as they typically would.

Instagram said that the comment filter is available only in English for now, but will debut in other languages at later dates.

The spam filter, on the other hand, can automatically remove bogus messages “written in English, Spanish, Portuguese, Arabic, French, German, Russian, Japanese and Chinese,” Instagram CEO Kevin Systrom wrote in a statement.


The underlying technology powering these two filters comes from Facebook’s (FB) artificial intelligence system used to trim offensive posts and spam, according to a Wired report. It makes sense for Instagram to be using Facebook’s own filtering technology considering Facebook bought the photo-sharing service in 2012 for $1 billion and has been migrating its internal technology to Facebook’s data centers.

The primary difference between Facebook’s and Instagram’s filtering tools appears to be that Instagram’s AI system was “trained” on the spam and offensive comments that appeared on its own service, the Wired report shows. Instagram hired contractors to comb through spam and abusive comments to teach the machine-learning system to recognize patterns and identify whether new comments are junk or distasteful.
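The workflow described above — contractors label example comments, and a model learns word patterns that separate the categories — can be illustrated with a toy naive Bayes text classifier. Everything below is an illustrative assumption: the training examples, labels, and function names are invented for the sketch, and Instagram's actual system (built on Facebook's internal AI, per the report) is far more sophisticated.

```python
from collections import Counter
import math

# Hypothetical labeled examples, standing in for the comments
# that human contractors would have tagged as spam or acceptable.
TRAIN = [
    ("win free followers click here", "spam"),
    ("buy cheap likes now click", "spam"),
    ("free money click this link", "spam"),
    ("love this photo great shot", "ok"),
    ("beautiful sunset thanks for sharing", "ok"),
    ("great shot love the colors", "ok"),
]

def train(examples):
    """Count word frequencies per label to build a naive Bayes model."""
    word_counts = {"spam": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the higher log-probability, using add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("click here for free followers", word_counts, label_counts))  # spam
print(classify("love this beautiful photo", word_counts, label_counts))      # ok
```

The key idea mirrors the article: the system never sees hand-written rules, only labeled examples, so the quality of the contractors' labeling directly determines what the filter learns to catch.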

But as Facebook disclosed earlier this week, AI is not the be-all and end-all for removing hurtful messages. In a post about how the social network handles abusive comments, Facebook said it still relies heavily on humans to recognize and report hate speech.

The social network is still “a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech,” wrote Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East, and Africa.
