Instagram says that it can now use machine learning to detect bullying in a post uploaded to the service.
Announced Tuesday alongside a host of other anti-cyberbullying features, the new tool scans a photo and, if it detects potential bullying content, sends that post to Instagram’s community moderators for review.
The big news here is that the tool scans the image itself, not just the text. In the past, users have written out defamatory comments and shared them as images rather than text in order to bypass Instagram’s text filters for negative content, a feature that launched in May.
“While the majority of photos shared on Instagram are positive and bring people joy, occasionally a photo is shared that is unkind or unwelcome,” Adam Mosseri, head of Instagram, said in a blog post announcing the updates. “This change will help us identify and remove significantly more bullying—and it’s a crucial next step since many people who experience or observe bullying don’t report it. It will also help us protect our youngest community members since teens experience higher rates of bullying online than others.”
In addition, the service rolled out a new bullying-comment filter for live videos and a new “kindness camera effect” meant to spread positivity on the platform.