Joy Buolamwini

Joy Buolamwini, a researcher at the MIT Media Lab in Cambridge, Mass., Feb. 1, 2018. In a new study that measured how facial-recognition technology performs on people of different races and genders, Buolamwini calculated that the technology makes more errors the darker a person's skin is. "You can't have ethical A.I. that's not inclusive," she said. "And whoever is creating the technology is setting the standards." (Tony Luong/The New York Times/Redux)
  • Title
    Founder, Algorithmic Justice League
  • Affiliation
    Graduate researcher, MIT Media Lab

The economic potential of artificial intelligence captivates business leaders. But the deep-learning systems behind A.I. are only as good as the data they are trained on. Imagine a scenario in which self-driving cars fail to recognize people of color as people, and are thus more likely to hit them, because their computers were trained on data sets of photos in which such people were absent or underrepresented.

No one has done more than computer scientist Joy Buolamwini to draw attention to A.I. bias. In one widely read study, Buolamwini showed that facial-recognition technology from Microsoft, IBM, and China's Megvii performed better when analyzing photos of lighter-skinned men than of darker-skinned women. Both Microsoft and IBM subsequently updated their technology. Her study of Amazon's facial-scanning technology has been more controversial (Amazon has disputed her approach), but the friction underscores her influence as the conscience of the A.I. revolution.
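The core of such an audit is disaggregated evaluation: instead of reporting one overall accuracy figure, error rates are computed separately for each demographic subgroup and then compared. The Python sketch below illustrates that general idea only; the subgroup labels and toy records are hypothetical, and this is not Buolamwini's actual code, data, or methodology.

    # Illustrative sketch of disaggregated evaluation: per-subgroup
    # error rates for a classifier. All labels and records below are
    # hypothetical examples, not real audit data.
    from collections import defaultdict

    def error_rates_by_group(records):
        """records: iterable of (group, predicted_label, true_label)."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for group, predicted, actual in records:
            totals[group] += 1
            if predicted != actual:
                errors[group] += 1
        # Error rate = misclassifications / total examples, per group.
        return {g: errors[g] / totals[g] for g in totals}

    # Hypothetical audit records: (subgroup, model prediction, ground truth)
    results = [
        ("lighter-skinned male", "male", "male"),
        ("lighter-skinned male", "male", "male"),
        ("darker-skinned female", "male", "female"),  # misclassification
        ("darker-skinned female", "female", "female"),
    ]

    for group, rate in sorted(error_rates_by_group(results).items()):
        print(f"{group}: {rate:.0%} error rate")

A large gap between the resulting per-group figures, rather than the overall average alone, is what signals the kind of bias the study documented.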