Bias in medical algorithms is one of AI’s long-running issues. Will new guidelines ignite action?

By Sage Lazzaro, Contributing writer
December 19, 2024, 12:11 PM ET
AI is increasingly finding its way into healthcare decisions, from diagnostics to treatment to robotic surgery. (Getty Images)

Hello and welcome to Eye on AI. In today’s edition…An international initiative aims to tackle bias in medical AI algorithms; Europe’s privacy regulators say training on internet data might pass muster with GDPR—but the hurdles for doing so are high; Geopolitical tensions impede the flow of AI talent from China to the U.S.; Character.ai comes under fire for disturbing content (again); and AI startups hog all the fundraising.


As I’ve written about in this newsletter many times, AI is sweeping the healthcare industry—from drug discovery to AI-enhanced mammograms to transcription of clinical medical documents. 

Long before hallucinations and many of the risks brought to the forefront by the generative AI boom became apparent, we had widespread evidence of bias in AI algorithms, which are often less accurate for some groups, such as women and people of color. Now, as AI companies and healthcare providers increasingly integrate AI into patient care, ways to evaluate and address such biases are needed more than ever. 

Yesterday, an international initiative called “STANDING Together (STANdards for data Diversity, INclusivity and Generalizability)” released recommendations to address bias in medical AI technologies, hoping to “drive further progress towards AI health technologies that are not just safe on average, but safe for all.” Published in The Lancet Digital Health and NEJM AI—along with a commentary by the initiative’s patient representatives published in Nature Medicine—the recommendations follow a research study involving more than 30 institutions and 350 experts from 58 countries. 

The recommendations largely deal with transparency, training data, and how AI medical technologies should be tested for bias, targeting both those who curate datasets and those who use the datasets to create AI systems. 

The problem 

Before getting to recommendations, let’s review the problem. 

Overall, algorithms created to detect illness and injury tend to underperform on underrepresented groups like women and people of color. For example, technologies that use algorithms to detect skin cancer have been found to be less accurate for people with darker skin, while a liver disease detection algorithm was found to underperform for women. One bombshell study revealed that a clinical algorithm used widely by hospitals required Black patients to be much sicker before it recommended they receive the same care it recommended for white patients who were not as ill. Similar biases have been uncovered in algorithms used to determine resource allocation, such as how much assistance people with disabilities receive. These are just a handful of many examples. 

The cause of these problems is most often found in the data used to train AI algorithms. This data is itself often incomplete or distorted—women and people of color are historically underrepresented in medical studies. In other cases, algorithms fail because they are trained on data that is meant to be a proxy for some other piece of information, but which turns out not to appropriately capture the issue the AI system is supposed to address. The hospital algorithm that denied Black patients the same level of care as white patients failed because it used healthcare costs as a proxy for patient need during training. And it turns out that hospital systems have historically spent less on healthcare for Black patients at every level of care, which meant that the AI failed to accurately predict Black patients’ needs.

Suggested solutions

The collective behind the study issued 29 recommendations — 18 aimed at dataset curators and 11 aimed at data users. 

For the dataset curators, the paper recommends that dataset documentation should include a plain-language summary of the dataset, indicate which groups are present in it, address any missing data, identify known or expected sources of bias or error, make clear who created and funded the dataset, and detail any purposes for which its use should be avoided, among other steps to increase transparency and provide context. 

For data users, the recommendations state that they should identify and transparently report areas of under-representation, evaluate performance for contextualized groups, acknowledge known biases and limitations (and their implications), and manage uncertainties and risks throughout the lifecycle of AI health technologies, including documentation at every step.

Among the overall themes are a call to proactively inquire and be transparent, and the need to be sensitive to context and complexity. “If bias encoding cannot be avoided at the algorithm stage, its identification enables a range of stakeholders relevant to the AI health technology’s use (developers, regulators, health policy makers, and end users) to acknowledge and mitigate the translation of bias into harm,” the paper reads. 

Will guidelines translate into action? 

As with every emerging use of AI, it’s a delicate balance between the potential benefits, known risks, and responsible implementation. The stakes are high, and that is particularly true when it comes to medical care. 

This paper is not the first to try to tackle bias in AI health technologies, but it is among the most comprehensive and arrives at a critical time. The authors write that the recommendations are not intended to be a checklist, but rather to prompt proactive inquiry. But let’s be real: the only way to be certain these lessons will be applied is through regulation. 

And with that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

Europe’s privacy regulators affirm AI companies’ “legitimate interest” GDPR argument but set a high bar for complying. The European Data Protection Board (EDPB) issued the new guidelines pertaining to AI yesterday, stating that AI companies’ claim of a “legitimate interest” in processing people’s personal data to train AI models is a potentially valid legal basis for doing so. The opinion does stress that claiming “legitimate interest” would require companies to pass a three-step test, including having a “clear and precisely articulated” reason for processing someone’s data, and that the processing would have to be “really necessary” for achieving the desired aim. Meta applauded the decision, while stating it’s “frustrating” it took this long. Some privacy advocates, on the other hand, felt the decision is too vague, while others worry that the opinion will make it difficult to offer many AI applications in Europe. In particular, some pointed to challenges the three-step test poses for general AI models like ChatGPT that weren’t built with one clear use in mind and can be used in new and different ways after release. You can read more from Fortune’s David Meyer. 

Intensifying U.S.-China tech tensions are impacting immigration of top AI talent. China produces half of the world’s AI talent and has consistently ranked as U.S. tech companies’ biggest source of highly skilled international STEM workers. Chinese AI workers are still looking to immigrate, citing restrictions at home that prevent them from accessing cutting-edge chips and technologies, like those from OpenAI. Yet rising geopolitical tensions and espionage concerns are leading to increased scrutiny, longer screening processes, and visa delays for Chinese nationals applying to study or work in the U.S., as well as Canada, another popular destination for top AI talent. You can read more from Rest of World. 

Character.ai is hosting chatbots emulating real school shooters—and their victims. “Much of this alarming content is presented as twisted fanfiction, with shooters positioned as friends or romantic partners,” reported Futurism, which found chatbots emulating the specific shooters who committed the massacres at Sandy Hook and Columbine, as well as their victims. Other chatbots thrust users into the midst of graphic school shooting scenarios, prompting them to navigate chaotic scenes at a school in a game-like simulation. These scenes discuss specific weapons and injuries to classmates, reported Futurism. The disturbing report comes as the Google-backed company is already facing multiple lawsuits alleging its chatbots promote violence and self-harm to young users.

AI search tool Perplexity raises additional $500 million at $9 billion valuation. That’s according to a Bloomberg story. The funding round was led by Institutional Venture Partners. Founded in 2022, Perplexity has grown rapidly, and boasted 15 million active users as of March.

Google is asking contract evaluators helping to train its Gemini AI system to judge content in which they may have no expertise. TechCrunch, citing documents it obtained, reported that Google updated the guidance given to contractors who work for GlobalLogic, an outsourcing firm whose workers provide feedback on the answers Google’s Gemini AI models produce in order to help refine those systems. While the contractors used to be able to skip evaluating answers if they felt unqualified to assess the response, the new guidelines removed this option. Critics argue this could lead to less reliable AI outputs, particularly in critical domains such as healthcare, financial advice, or legal advice. Google declined to comment on the report.

OpenAI rolls out a ‘1-800-CHATGPT’ feature. The AI company announced it will allow users in the U.S. to call ChatGPT for free for up to 15 minutes per month using 1-800-CHATGPT and message it globally via WhatsApp, The Verge reported. The service, powered by OpenAI’s Realtime API, aims to make AI more accessible through familiar channels. OpenAI clarified it will not use these calls to train its models, addressing privacy concerns, Fortune’s Jenn Brice reported. But the new feature is reminiscent of Google’s discontinued GOOG-411, which collected voice samples to improve speech recognition.

FORTUNE ON AI

Databricks CEO Ali Ghodsi on raising $10 billion, fighting for AI talent, and someday going public —by Allie Garfinkle

Hundreds of OpenAI’s current and ex-employees are about to get a huge payday by cashing out up to $10 million each in a private stock sale —by Sharon Goldman

Michael Dell says adoption of AI PCs is ‘definitely delayed,’ but it’s coming: ‘I’ve seen this movie a couple times before’ —by Sharon Goldman

How Lowe’s is trying to spruce up shopping with AI, mixed-reality headsets, and other new technologies —by John Kell

AI CALENDAR

Jan. 7-10: CES, Las Vegas

Jan. 16-18: DLD Conference, Munich

Jan. 20-25: World Economic Forum, Davos, Switzerland

Feb. 10-11: AI Action Summit, Paris, France

March 3-6: MWC, Barcelona

March 7-15: SXSW, Austin

March 10-13: Human [X] conference, Las Vegas

March 17-20: Nvidia GTC, San Jose

April 9-11: Google Cloud Next, Las Vegas

EYE ON AI NUMBERS

2

That’s how many years on average it took the AI startups that became unicorns in 2024 to reach that $1 billion+ valuation level. That’s compared to nine years for non-AI unicorns. Of the 72 companies that became unicorns this year, 32 (44%) are AI startups, according to CB Insights. 

Even when it comes to much smaller rounds and valuations, non-AI startups are struggling to fundraise as AI monopolizes the attention of investors. Reporting this week in TechCrunch described how other startups can’t compete with investors’ appetite for AI, and how non-AI companies that raised Series A rounds 18 months ago are struggling to raise Series B rounds, even with decent revenue growth.

This is the online version of Eye on AI, Fortune's biweekly newsletter on how AI is shaping the future of business. Sign up for free.
About the Author
Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.
