• Home
  • News
  • Fortune 500
  • Tech
  • Finance
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
Commentary

Biases in AI chatbots pose a risk to my real-time translation startup. Diversity and inclusion help us fight them

By
Heather Shoemaker
Heather Shoemaker
Down Arrow Button Icon
By
Heather Shoemaker
Heather Shoemaker
Down Arrow Button Icon
July 31, 2024, 2:16 PM ET

Heather Shoemaker is the founder and CEO of Language I/O, an AI-powered platform that helps companies support their customers in their native language with their existing team.

Heather Shoemaker, CEO of Language I/O
Heather Shoemaker, CEO of Language I/O, warns about biases in AI chatbots—and the need for diversity in the teams checking them.courtesy ofLanguage I/O

OpenAI’s technology development team consists of just 18% women. In a recent YouTube video announcement debuting the company’s ChatGPT-4o update—a “new flagship model which can reason across audio, vision, and text in real time”—the AI’s voice was that of a coy-sounding woman fawning over the man interacting with “her,” complimenting his outfit and giggling seductively. In short, the ultra-agreeable AI voice sounded flirty.

If more women had been involved in the AI’s development, the voice would likely have taken on a different persona—one that didn’t double down on outdated stereotypes.

My objective in founding Language I/O, a real-time translation platform, was to connect people through the power of AI—regardless of location, language, or lifestyle. As CEO, I’ve seen firsthand how a lack of diversity within teams can undermine even the most well-intentioned initiatives. A technology is only as good as the team behind it.

A completely homogeneous team risks becoming an echo chamber of limited, like-minded ideas, which will, at best, stifle innovation. At worst, the scarcity of different perspectives can lead to offensive, inappropriate, or outright incorrect solutions, undermining the power technology has to connect us.

Exposing biases in AI chatbots

That’s why I put together our red team—a group of diverse women with a mandate to break our technology. They succeeded in making AI bots curse, flirt, hallucinate, and insult, despite the standard industry guardrails.

But it’s not just gender biases that push through into chatbot outputs. Large language models (LLMs) can also have racial, cultural, and sexual-orientation biases, among others. It’s not surprising when you look at the corpus these models are trained on. User-generated content that’s publicly available on the internet is full of biases that lead to biased outputs.

Since our platform leverages AI-powered LLMs—and upcoming releases will include stand-alone, multilingual bots—finding a solution to this problem became a top priority for me. To build the guardrails necessary to keep the chatbots in line, we needed to figure out just how the chatbots could be made to go off the rails. Enter our red team.

While the concept of an AI red team is still fairly new, their goals are evergreen: help improve a technology through exhaustive and creative testing. When I first put out the call to all our interns about this project, four women from varied backgrounds volunteered and immediately got to work figuring out how to “break” our LLM. Once it was broken, we sent that information back to our development team so they could implement appropriate safeguards to prevent the AI from producing such outputs again.

With creative prompts, the red team quickly got the chatbot to do everything from promise fake discounts to swear profusely to talk shockingly dirty. They were not limited to exposing offensive tendencies. One team member convinced the chatbot, which was supposed to be responding to questions about a media streaming service, to apologize for the level of ripeness of a banana.

More valuable was that the multilingual team members identified potential pitfalls in cross-language communications, exposed biases stemming from male-dominated data sources, and flagged gender stereotypes embedded in AI personalities. Their varied backgrounds helped the team spot issues that often go undetected by less diverse groups.

Since we leverage AI for real-time translations into over 150 languages, we also have multilingual employees constantly testing our bot. Multilingual testing is especially important because the underlying protections in major LLMs are so focused on English. Our process is designed to ensure equality and quality across all the languages our bots support.

More ethical AI

As AI becomes more ubiquitous, brands are taking an interest in how models are trained and the ethical considerations behind them. We work closely with a major lingerie retail company, and its top concern is AI ethics. Given the inherent male bias in LLMs today, coupled with the fact that men control this largely female-focused industry, AI-generated output about lingerie is easily skewed. For example, lingerie advertising often focuses on a man’s idea of what lingerie is, which is often about sex. Women, however, want to feel and look good when they wear lingerie. So protecting this company’s brand and its customer experience is something we take seriously—and why we want the red team to push our model and technology.

One of the most interesting tests our team did used different fonts to see how responses changed. In one case, it made the bot swear like a sailor despite being trained not to do so. As a developer, I couldn’t help but admire the creativity of the team. It is only through testing like this that companies know what sort of AI protections are important.

We continuously test and retrain the models so they can adapt to ensure a more equitable interaction with the LLM. For the foreseeable future, the only way to provide AI equity is to prioritize strategies that promote it. That means working with teams that aren’t comprised solely of white men.

Diversity isn’t just a buzzword or a box to check—it’s essential for creating LLMs that work well for everyone. When we bring together people with different backgrounds, experiences, and perspectives, we end up with AI that’s more robust, ethical, and capable of serving a global user base. If we want AI that truly benefits humanity, we need to ensure the humans creating it better represent all of humanity.

More reading:

  • AI with hidden biases may be subtly shaping what you think: ‘You may not even know that you are being influenced’
  • AI is already screening job resumes and rental apartment applications and even determining medical care with almost no oversight
  • AI models are coming to fashion to promote diversity—but some industry insiders are concerned it will end up ‘parodying it’
  • As an underrepresented venture fund CEO, I believe in meritocracy—and I invest in underrepresented entrepreneurs for a reason

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Fortune Brainstorm AI returns to San Francisco Dec. 8–9 to convene the smartest people we know—technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brilliant minds in between—to explore and interrogate the most pressing questions about AI at another pivotal moment. Register here.
About the Author
By Heather Shoemaker
See full bioRight Arrow Button Icon

Latest in Commentary

Ayesha and Stephen Curry (L) and Arndrea Waters King and Martin Luther King III (R), who are behind Eat.Play.Learn and Realize the Dream, respectively.
Commentaryphilanthropy
Why time is becoming the new currency of giving
By Arndrea Waters King and Ayesha CurryDecember 2, 2025
9 hours ago
Trump
CommentaryTariffs and trade
The trade war was never going to fix our deficit
By Daniel BunnDecember 2, 2025
11 hours ago
Elizabeth Kelly
CommentaryNon-Profit
At Anthropic, we believe that AI can increase nonprofit capacity. And we’ve worked with over 100 organizations so far on getting it right
By Elizabeth KellyDecember 2, 2025
12 hours ago
Decapitation
CommentaryLeadership
Decapitated by activists: the collapse of CEO tenure and how to fight back
By Mark ThompsonDecember 2, 2025
12 hours ago
David Risher
Commentaryphilanthropy
Lyft CEO: This Giving Tuesday, I’m matching every rider’s donation
By David RisherDecember 1, 2025
1 day ago
college
CommentaryTech
Colleges risk getting it backwards on AI and they may be hurting Gen Z job searchers
By Sarah HoffmanDecember 1, 2025
2 days ago

Most Popular

placeholder alt text
Economy
Ford workers told their CEO 'none of the young people want to work here.' So Jim Farley took a page out of the founder's playbook
By Sasha RogelbergNovember 28, 2025
4 days ago
placeholder alt text
Success
Warren Buffett used to give his family $10,000 each at Christmas—but when he saw how fast they were spending it, he started buying them shares instead
By Eleanor PringleDecember 2, 2025
15 hours ago
placeholder alt text
Success
Forget the four-day workweek, Elon Musk predicts you won't have to work at all in ‘less than 20 years'
By Jessica CoacciDecember 1, 2025
1 day ago
placeholder alt text
Economy
Elon Musk says he warned Trump against tariffs, which U.S. manufacturers blame for a turn to more offshoring and diminishing American factory jobs
By Sasha RogelbergDecember 2, 2025
8 hours ago
placeholder alt text
Innovation
Google CEO Sundar Pichai says we’re just a decade away from a new normal of extraterrestrial data centers
By Sasha RogelbergDecember 1, 2025
1 day ago
placeholder alt text
Personal Finance
Current price of gold as of December 1, 2025
By Danny BakstDecember 1, 2025
1 day ago
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map

© 2025 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.