Google’s Jigsaw Expands Troll-Fighting Tool Amid Concerns of Racial Bias

September 7, 2017, 10:00 AM UTC

A unit of Google dedicated to policy and ideas is pushing forward with a plan to tame toxic comments on the Internet—even as critics warn that its AI-powered technology can perpetuate the very sexism and racism Google is trying to diminish.

On Thursday, the unit known as Jigsaw announced a new community page where developers can contribute “hacks” to build out its comment-moderation tool. That tool, used by the likes of the New York Times and the Guardian, helps publishers moderate Internet comments at scale. Its AI-based system allowed the Times to expand the scope of its reader comments tenfold while still maintaining civil discussion.

Now, Jigsaw is posting code and reams of data on its new community page on the site GitHub in the hopes programmers will use it to build new features for its comment project. Such features could include 3-D visualizations or charts that correlate toxic comments with time of day.
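As a rough illustration of that kind of chart, the sketch below aggregates comment toxicity scores by hour of day. The file name and column names ("scored_comments.csv", "timestamp", "toxicity") are hypothetical placeholders, not Jigsaw's actual published schema:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of scored comments; the file and column names
# are illustrative, not Jigsaw's actual release format.
comments = pd.read_csv("scored_comments.csv", parse_dates=["timestamp"])

# Share of comments above a chosen toxicity threshold, by hour of day.
comments["hour"] = comments["timestamp"].dt.hour
toxic_rate = (comments["toxicity"] > 0.8).groupby(comments["hour"]).mean()

toxic_rate.plot(kind="bar")
plt.xlabel("Hour of day")
plt.ylabel("Share of comments scored > 0.8 toxic")
plt.title("Toxic comments by time of day")
plt.tight_layout()
plt.show()
```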

If developers leap in and build new tools, it could give Jigsaw and the hundreds of publishers reportedly experimenting with the technology a stronger hand in shutting out trolls, who stalk the Internet looking to pick petty fights or simply to spew hate and insults.


Recently, however, some have questioned whether Jigsaw’s AI is actually improving the Internet, or whether the technology is making things worse. The tech site Engadget, for instance, noted that the phrase “I am a gay black woman” produced a toxicity score of 87%—meaning the automated moderation tool would be likely to bury any comment containing the phrase. Conversely, the phrase “I am a man” received a very low toxicity score.
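For context, Perspective exposes these scores through a public comment-analysis endpoint. The sketch below shows how a client might reproduce the comparison Engadget described; the endpoint and request shape follow Perspective's published v1alpha1 API, but the key is a placeholder and exact scores will shift as Google retrains the models:

```python
import requests

# Placeholder key: Perspective access requires requesting an API key from Google.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY score for `text`, between 0 and 1."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload)
    resp.raise_for_status()
    body = resp.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for phrase in ["I am a gay black woman", "I am a man"]:
    print(f"{phrase!r}: {toxicity_score(phrase):.0%} toxic")
```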

Such results would appear to confirm fears, expressed by some journalists and scholars, that biases are getting baked into the algorithms that inform technology such as Jigsaw’s AI tools.

According to a person familiar with Jigsaw, who spoke on the condition of anonymity, the reason AI tools find phrases like “I am a gay black woman” to be toxic is that there is a high probability the comments in which they appear also contain swear words or other offensive language. Put another way, the data sets used to train the AI reflected the ugly nature of many Internet comments. The person added that Jigsaw is adding more granular context to its AI training sets in order to address the problem.
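The mechanism the source describes can be demonstrated with an invented toy example. In the minimal sketch below, a bag-of-words classifier is trained on a tiny synthetic corpus in which an identity term happens to appear only inside abusive comments; the model then scores a benign self-description as toxic. This is a deliberately crude stand-in, not Jigsaw's actual model or data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus: the identity term "gay" appears only inside
# abusive comments, mimicking the skew Jigsaw's source describes.
comments = [
    "you are a stupid gay idiot",          # toxic
    "gay people are trash",                # toxic
    "what a stupid idiot",                 # toxic
    "have a great day everyone",           # clean
    "thanks for the thoughtful article",   # clean
    "interesting point, well argued",      # clean
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = clean

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, labels)

# The benign self-description inherits toxicity from the training data:
# "gay" alone pushes its score above the neutral baseline.
for text in ["i am a gay woman", "i am a man"]:
    print(text, "->", model.predict_proba([text])[0][1])
```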

The company, meanwhile, put out a statement to address the alleged bias issue in its AI tool, which is called Perspective.

“Perspective is an early-stage machine learning technology that will naturally improve over time. Our team is constantly refining the models and working with research partners to improve Perspective’s accuracy,” said the statement.

Jigsaw on Thursday also unveiled a new blog called “The False Positive” that is dedicated to describing the challenges of developing machine learning tools.

