
Google’s sexist algorithms offer an important lesson in diversity

By Stacey Higginbotham
July 8, 2015, 12:17 PM ET
File photo of an iPad showing the Google search engine home page. Chris Radburn — PA Wire/AP

Google is the latest company to be caught out by software behaving like a jerk, with researchers from Carnegie Mellon University showing that the company displays more prestigious job listings to men than to women. MIT’s Technology Review reported that the researchers built a tool called AdFisher, which created fake job-seeking profiles with male names and female names and compared the ads each was shown.

The research examined the targeting of ads Google serves on third-party websites, which is where the job-listing disparity showed up. It also found that Google’s ad settings page, which is supposed to show searchers the inferences Google has made about them, doesn’t offer a complete picture: visiting web pages associated with substance abuse changed the ads a profile was shown to substance-abuse-related ads, but the ad settings page didn’t register the change.
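AdFisher’s actual methodology is more sophisticated (it drives thousands of automated browser agents and uses machine-learning classifiers with permutation-based significance testing), but the core of this kind of audit can be sketched in a few lines. Everything below, including the impression counts, is hypothetical and chosen only to illustrate the statistical test:

```python
import random

random.seed(1)

# Hypothetical impression counts: how many times a "senior executive"
# job ad was shown to each simulated profile over a browsing session.
male_profiles = [18, 22, 25, 19, 24, 21, 23, 20, 26, 22]
female_profiles = [12, 15, 11, 14, 16, 13, 10, 15, 12, 14]

observed = (sum(male_profiles) / len(male_profiles)
            - sum(female_profiles) / len(female_profiles))

# Permutation test: if gender labels didn't matter, shuffling them
# across the pooled profiles should produce gaps this large often.
pooled = male_profiles + female_profiles
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if abs(gap) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"mean gap = {observed:.1f} ads, p \u2248 {p_value:.4f}")
```

A tiny p-value says the gap between the two groups is very unlikely under random label assignment, which is the kind of evidence the CMU team needed before attributing the disparity to the targeting system itself.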

This isn’t the first time Google has had trouble with its algorithms. The facial recognition used in its photo software recently labeled a photo of two African-American people as gorillas, and the company was also called out for showing only male images in response to a search for “CEO.” Facebook and other companies trying to build smart machines have had similar problems with their algorithms.

Stories like this aren’t just fodder to titillate your friends on Facebook (itself governed by its own algorithms); they should drive home two important lessons. The first is that as our machines get smarter and we trust them to do more of our thinking, diversity in tech becomes more important. These algorithms don’t make mistakes out of malice; they reflect the biases of the people who programmed them and of the data they were trained on. For example, if you train facial recognition software on photos of your employees and have few black employees, your software won’t recognize African-Americans as well.
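The training-data mechanism behind that failure can be shown with a toy, purely illustrative model: “faces” are points in a 2-D feature space, two demographic groups cluster in different regions, and a nearest-centroid “detector” is fit on a training set that is 95% group A. Nothing here resembles a real face-recognition pipeline; it only demonstrates the statistical effect of a skewed training set:

```python
import random

random.seed(42)

def sample(center, n):
    # Draw n noisy 2-D "photos" around a group's feature-space center.
    return [(center[0] + random.gauss(0, 0.5),
             center[1] + random.gauss(0, 0.5)) for _ in range(n)]

GROUP_A = (0.0, 0.0)   # well represented in training data
GROUP_B = (2.0, 2.0)   # barely represented
NON_FACE = (4.0, 4.0)  # things that are not faces at all

# Skewed training set: 95 group-A faces, only 5 group-B faces.
train = sample(GROUP_A, 95) + sample(GROUP_B, 5)
face_centroid = (sum(x for x, _ in train) / len(train),
                 sum(y for _, y in train) / len(train))

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_face(p):
    # Nearest-centroid rule: "face" if the point is closer to the
    # learned face centroid than to the non-face centroid.
    return dist2(p, face_centroid) < dist2(p, NON_FACE)

accs = {}
for name, center in [("group A", GROUP_A), ("group B", GROUP_B)]:
    faces = sample(center, 1000)
    accs[name] = sum(is_face(p) for p in faces) / len(faces)
    print(f"{name} faces recognized: {accs[name]:.0%}")
```

Because the learned centroid sits almost on top of group A, group-B faces land much closer to the decision boundary and are misclassified far more often, even though the code never mentions either group by name.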

When you build software to show certain people job listings, there are any number of ways bias can creep in, from an assumption that men are more interested in C-level jobs to a salary-range filter that eliminates women from some listings because they have historically been paid less. Having employees who can bring these systemic biases to the forefront will help tech firms (and any firm building machine learning algorithms) avoid these mistakes.
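The salary-range example can be made concrete with the “four-fifths rule” that US regulators use as a rough screen for disparate impact: if one group’s selection rate falls below 80% of another’s, the practice deserves scrutiny. The applicant pool and filter below are entirely made up:

```python
# Hypothetical applicant pool: a salary-history filter that looks
# neutral can screen out far more women if women were historically
# paid less for the same roles.
applicants = (
    [{"gender": "M", "prior_salary": 95_000 + 1_000 * i} for i in range(50)]
    + [{"gender": "F", "prior_salary": 80_000 + 1_000 * i} for i in range(50)]
)

MIN_SALARY = 100_000  # "show the listing only to people already earning six figures"

def pass_rate(gender):
    group = [a for a in applicants if a["gender"] == gender]
    shown = [a for a in group if a["prior_salary"] >= MIN_SALARY]
    return len(shown) / len(group)

ratio = pass_rate("F") / pass_rate("M")
print(f"women shown / men shown: {ratio:.2f}")  # comes out well below the 0.8 guideline
```

The filter never looks at gender, yet because the two groups’ salary histories differ, it excludes women at a much higher rate — exactly the kind of systemic bias a diverse team is better placed to spot before the system ships.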

The second big lesson we need to learn from these stories is that this software has real consequences, and we need to ensure that we don’t blindly trust it. Discrimination from software can have tangible effects on people’s lives, such as downgrading their credit scores, tweaking the value of their homes, and even shaping who they perceive to be their friends on social networks. Thus, we need ways to monitor how algorithms affect people, which is why research like Carnegie Mellon’s is so important.

The research team notified Google of its findings but never received a response. That’s unfortunate, because a dialogue might have led to better research and, ultimately, better algorithms. In many cases, all researchers are hoping for is more transparency from the companies they are studying. From the MIT Technology Review story:

However, [Roxana Geambasu, an assistant professor at Columbia University] says that the results from both XRay and AdFisher are still only suggestive. “You can’t draw big conclusions, because we haven’t studied this very much and these examples could be rare exceptions,” she says. “What we need now is infrastructure and tools to study these systems at much larger scale.” Being able to watch how algorithms target and track people to do things like serve ads or tweak the price of insurance and other products is likely to be vital if civil rights groups and regulators are to keep pace with developments in how companies use data, she says.

Google responded to the news stories about the study’s findings with the following statement:

Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.

This is a complicated issue, made worse by the general lack of understanding of computer science and of how we are training computers to think and see. For now, public shaming may be enough, but likely only for hot-button issues such as racism and sexism. Other problematic algorithms, such as those that discriminate by class or flag drug addicts based on their search terms, will be harder to draw attention to. But they are just as harmful.

The Obama administration is looking at the problem, as are others. Suggestions include the aforementioned public shaming, government legislation or teaching people some kind of algorithmic literacy. But the first step is to hire with diversity in mind.

This story was updated to add Google’s comment.

© 2025 Fortune Media IP Limited. All Rights Reserved.