I accidentally created a racist A.I. algorithm in 1998 and I’m having horrible déjà vu with the ChatGPT hype

By John MacCormick and The Conversation
May 9, 2023, 10:37 AM ET
It’s well known that A.I. reflects our biases. (Getty Images)

In 1998, I unintentionally created a racially biased artificial intelligence algorithm. There are lessons in that story that resonate even more strongly today.

The dangers of bias and errors in AI algorithms are now well known. Why, then, has there been a flurry of blunders by tech companies in recent months, especially in the world of AI chatbots and image generators? Initial versions of ChatGPT produced racist output. The DALL-E 2 and Stable Diffusion image generators both showed racial bias in the pictures they created.

My own epiphany as a white male computer scientist occurred while teaching a computer science class in 2021. The class had just viewed a video poem by Joy Buolamwini, an AI researcher, artist and self-described “poet of code.” Her 2019 piece “AI, Ain’t I a Woman?” is a devastating three-minute exposé of racial and gender biases in automatic face recognition systems – systems developed by tech companies like Google and Microsoft.

The systems often fail on women of color, incorrectly labeling them as male. Some of the failures are particularly egregious: The hair of Black civil rights leader Ida B. Wells is labeled as a “coonskin cap”; another Black woman is labeled as possessing a “walrus mustache.”

Echoing through the years

I had a horrible déjà vu moment in that computer science class: I suddenly remembered that I, too, had once created a racially biased algorithm. In 1998, I was a doctoral student. My project involved tracking the movements of a person’s head based on input from a video camera. My doctoral adviser had already developed mathematical techniques for accurately following the head in certain situations, but the system needed to be much faster and more robust. Earlier in the 1990s, researchers in other labs had shown that skin-colored areas of an image could be extracted in real time. So we decided to focus on skin color as an additional cue for the tracker.

I used a digital camera – still a rarity at that time – to take a few shots of my own hand and face, and I also snapped the hands and faces of two or three other people who happened to be in the building. It was easy to manually extract some of the skin-colored pixels from these images and construct a statistical model for the skin colors. After some tweaking and debugging, we had a surprisingly robust real-time head-tracking system.
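
For readers curious what such a model looks like, here is a minimal sketch in Python with NumPy. It is illustrative only, not my original code, and the pixel values are invented: it fits a single Gaussian to a handful of hand-labeled skin pixels and calls a new pixel skin when it lies close to that distribution. Notice how the bias enters: every labeled pixel here comes from light-skinned subjects, so the fitted model is centered on light skin tones.

```python
import numpy as np

# A minimal caricature of a 1998-style skin color model. Hand-labeled
# skin pixels as (R, G, B) rows; the values are invented, and all come
# from light-skinned subjects, which is exactly how the bias creeps in.
skin_pixels = np.array([
    [224, 180, 150], [210, 165, 140], [230, 190, 160],
    [200, 150, 120], [215, 170, 145], [235, 195, 170],
], dtype=float)

# Fit a single Gaussian: mean color plus covariance of the labeled pixels.
mean = skin_pixels.mean(axis=0)
cov = np.cov(skin_pixels, rowvar=False)
cov += np.eye(3) * 25.0              # regularize: six samples is tiny
inv_cov = np.linalg.inv(cov)

def is_skin(pixels, threshold=9.0):
    """Label pixels as skin if their squared Mahalanobis distance from
    the mean is small; the threshold itself is a tuning choice."""
    diff = np.atleast_2d(pixels) - mean
    dist2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return dist2 < threshold

print(is_skin([[220, 175, 150],      # close to the training pixels: True
               [60, 90, 200]]))      # nothing like them: False
```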

Not long afterward, my adviser asked me to demonstrate the system to some visiting company executives. When they walked into the room, I was instantly flooded with anxiety: the executives were Japanese. In my casual experiment to see if a simple statistical model would work with our prototype, I had collected data from myself and a handful of others who happened to be in the building. But 100% of these subjects had “white” skin; the Japanese executives did not.

Miraculously, the system worked reasonably well on the executives anyway. But I was shocked by the realization that I had created a racially biased system that could have easily failed for other nonwhite people.

Privilege and priorities

How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.

Ten years before I created the head-tracking system, the scholar Peggy McIntosh proposed the idea of an “invisible knapsack” carried around by white people. Inside the knapsack is a treasure trove of privileges such as “I can do well in a challenging situation without being called a credit to my race,” and “I can criticize our government and talk about how much I fear its policies and behavior without being seen as a cultural outsider.”

In the age of AI, that knapsack needs some new items, such as “AI systems won’t give poor results because of my race.” The invisible knapsack of a white scientist would also need: “I can develop an AI system based on my own appearance, and know it will work well for most of my users.”

One suggested remedy for white privilege is to be actively anti-racist. For the 1998 head-tracking system, it might seem obvious that the anti-racist remedy is to treat all skin colors equally. Certainly, we can and should ensure that the system’s training data represents the range of all skin colors as equally as possible.

Unfortunately, this does not guarantee that all skin colors observed by the system will be treated equally. The system must classify every possible color as skin or nonskin. Therefore, there exist colors right on the boundary between skin and nonskin – a region computer scientists call the decision boundary. A person whose skin color crosses over this decision boundary will be classified incorrectly.
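
A deliberately oversimplified sketch makes the boundary concrete. Imagine, purely for illustration, a classifier that thresholds a single brightness value: wherever the threshold lands, some plausible skin tone sits just on the wrong side of it.

```python
# A one-dimensional caricature of a decision boundary (illustrative only).
# The classifier must label every value as skin or nonskin, so a boundary
# exists somewhere, and values just across it are misclassified.
THRESHOLD = 120.0   # hypothetical boundary on a 0-255 brightness scale

def classify(brightness: float) -> str:
    return "skin" if brightness >= THRESHOLD else "nonskin"

for tone in [200.0, 125.0, 118.0, 40.0]:
    # 118.0 could be a real skin tone that falls just below the boundary
    print(f"{tone:5.1f} -> {classify(tone)}")
```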

Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.

A simple analogy can explain this. Imagine you are given a choice between two tasks. Task A is to identify one particular type of tree – say, elm trees. Task B is to identify five types of trees: elm, ash, locust, beech and walnut. It’s obvious that, given a fixed amount of time to practice, you will perform better on Task A than on Task B.

In the same way, an algorithm that tracks only white skin will be more accurate than an algorithm that tracks the full range of human skin colors. Even if they are aware of the need for diversity and fairness, scientists can be subconsciously affected by this competing need for accuracy.

Hidden in the numbers

My creation of a biased algorithm was thoughtless and potentially offensive. Even more concerning, the incident demonstrates how bias can remain concealed deep within an AI system. The entire skin color model controlling my 1998 head-tracking algorithm came down to just 12 numbers, arranged in a matrix of three rows and four columns. Do 12 numbers seem racist? It’s impossible to tell from the numbers alone that the matrix they form is in fact a racist one. They are just numbers, determined automatically by a computer program.
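
For a sense of how a skin color model fits in 12 numbers, consider the Gaussian sketch from earlier: a three-dimensional Gaussian is fully specified by a 3-vector mean and a 3-by-3 covariance, which pack into exactly three rows and four columns. The values below are invented for illustration, not the actual 1998 parameters.

```python
import numpy as np

# A 3-D Gaussian color model: a 3-vector mean plus a 3x3 covariance,
# 12 numbers that pack into three rows and four columns.
# All values are invented, continuing the earlier illustration.
mean = np.array([219.0, 175.0, 147.5])
cov = np.array([[197.0, 219.0, 214.0],
                [219.0, 305.0, 280.0],
                [214.0, 280.0, 322.5]])

model = np.column_stack([mean, cov])   # shape (3, 4): the 12 numbers
print(model)
# Nothing in these numbers announces whose skin they describe.
```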

The problem of bias hiding in plain sight is much more severe in modern machine-learning systems. Deep neural networks – currently the most popular and powerful type of AI model – often have millions of numbers in which bias could be encoded. The biased face recognition systems critiqued in “AI, Ain’t I a Woman?” are all deep neural networks.

The good news is that a great deal of progress on AI fairness has already been made, both in academia and in industry. Microsoft, for example, has a research group known as FATE, devoted to Fairness, Accountability, Transparency and Ethics in AI. A leading machine-learning conference, NeurIPS, has detailed ethics guidelines, including an eight-point list of negative social impacts that must be considered by researchers who submit papers.

Who’s in the room is who’s at the table

On the other hand, even in 2023, fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.

The systems suffer from exactly the same problems as my 1998 head tracker. Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.

So, how far has the AI field really come since it was possible, over 25 years ago, for a doctoral student to design and publish the results of a racially biased algorithm with no apparent oversight or consequences? It’s clear that biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.

These days it’s a cliché to say industry and academia need diverse groups of people “in the room” designing these algorithms. It would be helpful if the field could reach that point. But in reality, with women making up only about 23% of North American computer science doctoral graduates, and Black and Latino students only about 3%, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.

That’s why the fundamental lessons of my 1998 head tracker are even more important today: It’s easy to make a mistake, it’s easy for bias to enter undetected, and everyone in the room is responsible for preventing it.

John MacCormick is Professor of Computer Science, Dickinson College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
