Tech

Beauty in the eye of the A.I.: How inherent racial bias has shaped A.I. and what brands are doing to address it

By Gabby Shacknai
November 22, 2022, 3:00 PM ET

Joy Buolamwini’s idea seemed simple. For a class project while in graduate school at MIT, she wanted to create a mirror that would inspire her every day by projecting digital images of her heroes onto her face. But when she started using the basic facial recognition software needed to program the mirror, she ran into an unexpected issue: it couldn’t detect her face. Unsure of what was wrong, Buolamwini had a few friends and colleagues test the software on themselves, but it recognized each and every one of them without fail.

Suddenly, the problem became clear, as the grad student reached for a white mask and saw that her face was instantly detected: the A.I. facial recognition couldn’t pick up on her dark skin.

The experience stuck with Buolamwini and inspired her to conduct some research on the matter. “I had some questions,” she recalls. “Was this just my face, or are there other things at play?” The grad student began investigating skin type and gender bias in commercial A.I. from companies like Amazon, Google, Microsoft, and IBM, eventually writing her thesis on the subject, and she discovered a troubling theme. These systems performed better on light-skinned faces than on dark-skinned, Buolamwini found, and while error rates for lighter-skinned men were less than 1%, they were over 30% for darker-skinned women.
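The heart of an audit like Buolamwini’s is simple to state: score a system’s predictions, then break the error rate out by subgroup rather than reporting a single overall number. A minimal sketch of that disaggregation, using toy data and illustrative field names (not her actual dataset or code):

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute classification error rates disaggregated by subgroup.

    Each record is a dict with 'skin_type', 'gender', 'predicted',
    and 'actual' fields; the field names here are illustrative.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = (r["skin_type"], r["gender"])  # intersectional subgroup
        totals[group] += 1
        if r["predicted"] != r["actual"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data illustrating the kind of gap the audit surfaced:
audit = [
    {"skin_type": "lighter", "gender": "male", "predicted": "male", "actual": "male"},
    {"skin_type": "lighter", "gender": "male", "predicted": "male", "actual": "male"},
    {"skin_type": "darker", "gender": "female", "predicted": "male", "actual": "female"},
    {"skin_type": "darker", "gender": "female", "predicted": "female", "actual": "female"},
]
print(error_rates_by_group(audit))
# {('lighter', 'male'): 0.0, ('darker', 'female'): 0.5}
```

A single aggregate accuracy figure would average these groups together and hide exactly the disparity Buolamwini documented; the per-group breakdown is what makes it visible.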

At the time, usage of A.I. was rising rapidly, and every industry and sector was beginning to embrace its capabilities; even so, it was obvious to Buolamwini that this was only the beginning. “This problem became urgent to me because I was seeing how A.I. was used in more and more parts of life—who gets hired, who gets fired, who gets access to a loan,” she explains. “Opportunities were being governed by algorithmic gatekeepers, and that meant oftentimes, these gatekeepers were choking opportunity on the basis of race and the basis of gender.”

After finishing grad school, Buolamwini decided to continue her research on A.I.’s racial bias and quickly realized that much of it stemmed from the non-diverse datasets and imagery used by a disproportionately white, male tech workforce to train A.I. and inform its algorithms.

And by 2018, major publications, like the New York Times, started shining a light on her findings, forcing tech companies to pay attention. Though some in the tech world went on the defensive and played down their own involvement, for many consumers and brands looking to use A.I., the problem became glaring. And for those who had experienced it firsthand, it felt like there was finally an explanation.

“This is absolutely something that I’ve experienced as a Black American moving through the world,” says Dr. Ellis Monk, associate professor of sociology at Harvard University’s T.H. Chan School of Public Health. He’s encountered cameras that won’t take his photo in certain lighting, automatic hand dryers that can’t detect his hand, and even search-engine results showing only white babies when he looked up “cute babies.” “You just notice that a lot of technologies take for granted that they work for everyone, and in reality, they just kind of ignore your existence, which can feel very dehumanizing.”

Dr. Monk, who has been researching skin tone stratification and colorism for over a decade, has long studied the discrimination based on skin tone that has been widespread in the United States since the era of slavery.

“Even though people talk about racial inequality and racism, there’s a lot of heterogeneity in differences in and across these census categories that we tend to use all the time—Black, Asian, Latinx, white, et cetera—and these differences aren’t necessarily picked up very easily if we just stay at the level of these broad census categories, which lump everyone together regardless of their phenotypical appearance,” he says. “But what my research shows is that almost everything that we talk about when we think of racial inequality—from the education system to how we deal with police and judges to mental and physical health, wages, income, everything we can think of—is actually based in skin tone inequality or skin tone stratification. So, there are incredible amounts of life outcomes related to the lightness or darkness of someone’s skin.”

With something so deeply ingrained in the sociology of Americans, Dr. Monk says it’s only natural that it would extend to technologies programmed by them. “When we think about transitioning into the world of tech, the same things that are being marginalized and ignored by the conversations we have around racial inequality in the U.S.—skin tone and colorism—are also being marginalized and ignored in the tech world,” he explains. “People historically haven’t tested their products across different racial categories, which certainly includes the skin tone aspects of computer-vision technologies.”

As a result, from the very outset, A.I. products are not made with the intention that they will work well for everyone. “If you’re not intentional about designing your products to work well across the entire skin tone continuum and rigorously testing to make sure that’s the case, then you’re going to have these huge issues in technology,” the Harvard professor adds.

Dr. Monk believes that the growing adoption of A.I., particularly by non-tech industries, has helped shine a light on the technological shortcomings surrounding colorism—but more importantly, it’s brought attention to the underlying issue: colorism as a whole. He thinks that if this is considered and addressed, remedying A.I.’s racial bias and changing the dynamics on which it operates is entirely possible. And it’s with that in mind that Dr. Monk launched a partnership with Google earlier this year.

The collaboration came to be after some people working in responsible A.I. reached out to Dr. Monk a few years ago to discuss his research on skin tone bias in A.I. and machine learning. They soon learned about a skin tone scale that the sociology professor had designed and been using in his own work and research, which was shown to be significantly more inclusive than the Fitzpatrick Scale, the industry standard for decades, and as inclusive as a 40-point scale.

“What the scale enables us to do is make sure that we’re measuring skin tone well so that we have data and analysis that speak to these forms of inequality and can begin to have a more robust, and frankly, more honest, discussion about how race matters in the U.S. and beyond,” Dr. Monk says.

Google announced in May that it would release the Monk Skin Tone Scale and integrate it across its platforms to improve representation in imagery and to evaluate how well its products or features work across skin tones. It also hopes that doing so will usher in change across A.I., well beyond the bounds of Google, whereby all kinds of A.I.-powered products and services are built with more representative datasets and can therefore break away from the racial bias that has long dominated the technology.
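In practice, using a scale like this for evaluation means mapping each observed skin tone to its nearest reference swatch, then checking product performance per swatch rather than overall. A minimal sketch of that mapping, where the RGB swatch values are illustrative placeholders and not the published Monk Skin Tone Scale colors:

```python
# Illustrative 10-point swatch palette, ordered light to dark.
# These RGB values are placeholders, NOT the official Monk scale colors.
SWATCHES = [
    (246, 237, 228), (243, 231, 219), (247, 234, 208), (234, 218, 186),
    (215, 189, 150), (160, 126, 86), (130, 92, 67), (96, 65, 52),
    (58, 49, 42), (41, 36, 32),
]

def nearest_tone(rgb):
    """Return the 1-based index of the closest swatch by squared RGB distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(SWATCHES)), key=lambda i: dist2(rgb, SWATCHES[i])) + 1

print(nearest_tone((250, 240, 230)))  # a very light tone maps near the top of the scale
print(nearest_tone((40, 35, 31)))    # a very dark tone maps near the bottom
```

Real systems typically do this matching in a perceptually uniform color space rather than raw RGB, but the principle is the same: a finer-grained, more inclusive palette means fewer faces get lumped into categories that don’t represent them.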

Dr. Monk believes that his partnership with Google is a testament to the ability to correct the historical wrongs present in A.I., but he does point out that it doesn’t have to come to correction if things are done right to begin with. “A lot of the time, there’s such a rush to be the first to do something that it can supersede the kind of caution that we need to take whenever we introduce any form of this technology into society,” he says. “What I would say is that there probably needs to be a lot more caution about launching these technologies in the first place, so it’s not just about mitigating the things that are already out there and trying to fix them.”

And while that kind of thinking may not yet be the norm, some younger players in the A.I. space have made an effort to address and remedy racial bias from the start. One such company is leading A.I. provider Perfect Corp., whose products have been licensed by countless beauty and fashion brands, including Estée Lauder, Neutrogena, and Target, and several tech companies, like Meta and Snap. Unlike some of the tech companies that came onto the scene before there was any awareness of A.I.’s racial bias, execs at Perfect Corp. feel a sense of responsibility to create technologies that work for everyone, regardless of skin tone.

“Inclusivity across the complete range of skin tones was a priority from the initial conception of the technology and one that helped to direct the development of our tools,” says Wayne Liu, the chief growth officer of Perfect Corp. The company, which was founded by Alice Chang, a woman of color, was aware of A.I.’s limitations from the beginning, so it worked to find solutions before going to market.

“We developed advanced technologies, like advanced auto-adjust settings for adaptive lighting and angles, in order to ensure an inclusive and accurate experience that incorporated the complete range of skin tones,” Liu explains.

But Perfect Corp. knew that as a provider of A.I.-powered products to other brands, navigating the technology’s deficiencies didn’t stop with its team, so the company made a point of also working with its brand partners to ensure that any racial biases were addressed in the development phase. “The widespread and accurate application of our A.I. solutions as it applies to all consumers is essential to the success of our tools and solutions, and necessary in order for brands and consumers to depend on this type of technology as a utility to aid them in their purchase decisions,” Liu adds.

Several years after launching its A.I. Shade Finder and its A.I. Skin Analysis tools, Perfect Corp. has remained true to its initial goal of inclusion. Its technology boasts 95% test-retest reliability and continues to match or surpass human shade-matching and skin analysis. Even with these myriad efforts and consistently impressive results, however, Liu knows that, despite Perfect Corp.’s name, no company is perfect and there will always be room for improvement. He and his colleagues feel that feedback and adaptability are essential to the growth of their technology and to the industry as a whole.

“It’s critical that we listen to all feedback, both from brand partners and retailers, and that which we observe from evolving consumer behaviors, in order to continue developing and delivering technology that aids in the consumer shopping journey,” he says. “A.I. is an experience for all, not an experience for most, and the success of the technology as a true tool to aid in the consumer shopping experience is dependent on its accuracy and ability to work for all consumers, not just a segment of them.”

