ImageNet Roulette Highlights Bias in A.I. See For Yourself

Just like humans, artificial intelligence is guilty of making snap judgments that are biased or inaccurate. A new online tool called ImageNet Roulette was created to highlight the problem by letting anyone share a selfie and then see how A.I. labels them.

Some of the results have been spot-on: for instance, it labeled a platinum blonde Caucasian woman as someone whose hair is likely “artificially colored.” One humorous example labeled U.K. Prime Minister Boris Johnson a “demagogue.”

However, other results have been blatantly offensive. People with darker skin have shared examples on social media that showed how the A.I. has labeled them as “wrongdoer, offender,” “convict,” and “first offender.” Some people are labeled using racist and misogynistic terms, which are among the 2,500 labels that ImageNet Roulette can choose from.

The project was created by Kate Crawford, head of AI Now, an organization that spotlights ethical issues in A.I., and researcher Trevor Paglen, as part of an art installation currently featured at the Fondazione Prada Osservatorio museum in Milan. ImageNet Roulette is part of the Training Humans exhibition, which details the history of image recognition systems and some of their inherent problems.

“We want to shed light on what happens when technical systems are trained on problematic training data,” Crawford and Paglen say on their website. “AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong.”

ImageNet Roulette was created using the ImageNet database, a digital archive unveiled in 2009 that now has 14 million labeled images. Those images were labeled using WordNet, a system of word classifications that was developed in the 1980s.

ImageNet Roulette doesn’t strive for perfection; instead, it’s designed to spark a discussion about bias in A.I. And it’s hardly the first time artificial intelligence has displayed unfair bias.

Last year, Google’s A.I.-powered selfie feature let people find their doppelgängers in works of art, but it spat out offensive results. African-Americans reported matching with stereotypical art depicting slaves. Asians were shown geishas as their look-alikes, and an Indian-American reporter was served a portrait of a Native American chief. A Google representative responded to the controversy with an apology and said the company is “committed to reducing unfair bias.”

While A.I. can automate jobs and make simple tasks easier for humans, incidents like these are yet another example of how much work remains to be done to perfect it.


