
How to Fight the Growing Scourge of Algorithmic Bias in AI

September 14, 2018, 1:12 PM UTC
Joy Buolamwini speaks in Boston
Researcher Joy Buolamwini, seen here speaking in Boston at an Affectiva-sponsored conference on Sept. 6, 2018, has started the Algorithmic Justice League to combat algorithmic bias in AI and machine learning apps.
Courtesy of Affectiva and Steve Nisotel Photography

Joy Buolamwini was a graduate student at MIT a few years ago when she was working on an art and science project called the Aspire Mirror. The setup was supposed to use readily available facial recognition software to project images onto people’s faces. But the software couldn’t detect the face of Buolamwini, who is African-American, unless she put on a white mask. She tells the story in more detail in a TED talk.

As she encountered other examples of what’s become known as algorithmic bias, Buolamwini decided to conduct a more rigorous review. Putting three well-known facial recognition programs to the test (including ones from IBM and Microsoft), she found that all had a significantly harder time correctly identifying darker-skinned faces, particularly those of women.
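The kind of audit described above boils down to comparing a system’s accuracy across demographic subgroups. A minimal sketch of that idea, with entirely invented group labels and results (not the real benchmark data or numbers from Buolamwini’s study), might look like this:

```python
# Hypothetical per-group accuracy audit: measure how often a classifier's
# predictions match ground truth, broken out by demographic subgroup.
# All records below are fabricated for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: fraction of correct predictions for that group}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Fabricated example results, not real benchmark numbers:
sample = [
    ("lighter-skinned", "face", "face"),
    ("lighter-skinned", "face", "face"),
    ("darker-skinned", "face", "face"),
    ("darker-skinned", "no_face", "face"),
]
print(accuracy_by_group(sample))
# {'lighter-skinned': 1.0, 'darker-skinned': 0.5}
```

A large gap between the per-group accuracies, as in this toy output, is the signal such an audit is designed to surface.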

Her next step has been to attack the problem of algorithmic bias head on, founding the Algorithmic Justice League, a group of real-life superheroes with a mission to ferret out and eliminate bias in machine learning and artificial intelligence programs. Those programs don’t just handle mundane tasks like identifying your friends in a Facebook photo; they also make life-changing decisions in healthcare, insurance, and criminal justice.

I first met Buolamwini just last week, when I moderated a panel on the ethics of AI at a conference in Boston sponsored by AI software developer Affectiva. She and her fellow panelists, Rumman Chowdhury, global lead for responsible AI at Accenture, and Mark Latonero of the Data & Society Research Institute, offered more than a few ways to combat the problems we discussed.

To start, Buolamwini believes that “who codes matters,” because more diverse teams of programmers are more likely to catch algorithmic bias before it creeps in. The sets of data used to train facial recognition or other kinds of apps need to be diverse, too; a lack of diverse training data may be why the program Buolamwini used for her Aspire Mirror couldn’t identify black faces. Finally, she supports deeper consideration of the laws and practices around potential uses of AI. It’s a conversation that needs to be had immediately.
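One concrete way to act on the training-data point above is simply to measure how each group is represented in a dataset before training on it. This is a minimal sketch with made-up labels, counts, and threshold, not a method from the article:

```python
# Hypothetical training-set diversity check: compute each group's share of
# the data and flag groups below a chosen representation threshold.
# The labels, counts, and 30% threshold are invented for illustration.
from collections import Counter

def group_shares(labels):
    """Return each group's fraction of the dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

training_labels = ["lighter-skinned"] * 80 + ["darker-skinned"] * 20
shares = group_shares(training_labels)
print(shares)  # {'lighter-skinned': 0.8, 'darker-skinned': 0.2}

# Flag any group that falls below the (arbitrary) 30% threshold:
underrepresented = [g for g, s in shares.items() if s < 0.3]
print(underrepresented)  # ['darker-skinned']
```

A check like this doesn’t fix bias by itself, but it makes an imbalance visible before the model ever ships.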