Social networking companies used by millions of people face a dilemma. They want users to post content freely, but they are also under growing pressure to block objectionable material like pornography and violence as well as bad acts like cyber-bullying.
This week, both Instagram and Twitter announced steps aimed at making their sites safer. The goal is to protect users from potentially objectionable posts while trying to avoid going overboard with censorship.
On Thursday, Instagram, which is owned by Facebook (FB), said it would start obscuring images that some users find objectionable. Of course, what is objectionable is in the eye of the beholder, but such material might include images of nudity or gore. Photos and videos that are flagged by users and then labeled as sensitive by an internal review board will be blurred, although users can still view the originals if they click on them.
“While these posts don’t violate our guidelines, someone in the community has reported them and our review team has confirmed they are sensitive,” Instagram co-founder Kevin Systrom said in a company post.
The idea is to give users a warning before they see such images. Previously, users scrolling through their feeds had little advance notice of what was coming.
This news comes a day after Twitter (TWTR) vice president of data strategy Chris Moody said his company would use IBM’s Watson artificial intelligence services to root out bullies on the online bulletin board. Already used for formulating recipes and finding cancer treatments, Watson will now, in some form, become an online sheriff in the wild west that is Twitter.
“We have had some abuse on the platform,” Moody acknowledged at a tech conference in Las Vegas, according to Geekwire. “We’ve talked very publicly in the last few months and said our No. 1 priority is stop the abuse. It’s a very, very hard challenge.”
Much of Watson’s expertise lies in text analysis, which would come in handy for parsing billions of tweets and then determining patterns among them. In theory, if a given Twitter account keeps tweeting at other accounts that do not follow it, that builds up a pattern of behavior which, once detected, may show that the account violated Twitter’s rules.
The account can then be terminated.
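The heuristic described above could be sketched as a simple script. This is a hypothetical illustration of the general idea, not Twitter’s or Watson’s actual detection logic; all account names and the threshold are invented for the example.

```python
# Hypothetical illustration of the pattern described in the article: flag
# accounts that repeatedly @-mention users who do not follow them back.
# This is NOT Twitter's or Watson's actual algorithm.

def flag_suspicious_accounts(mentions, followers, threshold=3):
    """mentions: dict mapping an account to the list of accounts it tweeted at.
    followers: dict mapping an account to the set of accounts that follow it.
    Returns accounts whose unsolicited mentions exceed the threshold."""
    flagged = []
    for account, targets in mentions.items():
        # Count mentions aimed at users who do not follow this account.
        unsolicited = sum(
            1 for target in targets
            if account not in followers.get(target, set())
        )
        if unsolicited > threshold:
            flagged.append(account)
    return flagged

# Invented sample data for demonstration only.
mentions = {
    "troll42": ["alice", "alice", "bob", "carol", "dave"],
    "friendly": ["alice", "bob"],
}
followers = {
    "alice": {"friendly"},
    "bob": {"friendly"},
    "carol": set(),
    "dave": set(),
}

print(flag_suspicious_accounts(mentions, followers))  # → ['troll42']
```

In practice, Watson’s text analysis would presumably weigh the content of the tweets as well as this kind of interaction graph, but the sketch shows why repeated contact with non-followers is a usable behavioral signal.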
Three years ago, Twitter and IBM announced a partnership for analyzing data that enabled IBM to incorporate Twitter data into some of its business products. It is unclear if Watson’s new job at Twitter is part of that deal. Fortune contacted IBM and Twitter for comment and will update this story as needed.
This is an ongoing battle. Earlier this month, Twitter published an update on how it’s working to curtail abusive accounts, in response to concerns about online behavior.
Instagram and Twitter are hardly alone. Google (GOOG) earlier this week promised to do a better job policing YouTube videos after several large companies pulled ads to keep them from showing up beside hate-filled material.
And Facebook, which has nearly 2 billion users worldwide, is also struggling to find a balance between free expression and civil behavior.
The problem, as Facebook CEO Mark Zuckerberg put it in an open letter, is that standards of what is acceptable vary widely by geography and culture.
But then again, some bad behavior is inarguably blatant, and social media companies, in some cases, have failed to police even that.