
Google’s sexist algorithms offer an important lesson in diversity


Google is the latest company to be caught out by software behaving like a jerk, with researchers from Carnegie Mellon University showing that the company displays ads for more prestigious jobs to men than to women. MIT’s Technology Review reported that the researchers built a tool called AdFisher and created fake profiles with male and female names to look for jobs.

The research looked at the targeting of ads served up by Google on third-party websites, which is where it found the discrepancy in job listings. It also found that the ad settings page Google offers so users can see what inferences it is making about them doesn’t give a complete picture: visiting webpages associated with substance abuse changed the ads shown to substance-abuse-related ads, but the ad settings page didn’t change.
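To make the shape of that experiment concrete, here is a minimal, hypothetical sketch in Python of the kind of audit AdFisher automates: two groups of simulated profiles that differ only in a declared gender, whose collected ads are then compared for a specific listing. This is not the researchers’ actual code; the data-collection step is left as a stub, and the function and ad names are invented.

```python
# Hypothetical sketch of an ad-audit experiment (not AdFisher's real code):
# simulate two groups of profiles that differ only in one attribute (here,
# the declared gender), collect the ads each group is shown, and compare
# how often a particular ad appears in each group.

from collections import Counter

def collect_ads(profile_gender: str) -> list[str]:
    """Placeholder for the real data-collection step: drive a fresh browser
    profile with the given gender setting, visit job-related pages, and
    record the ads served. Returns a list of ad identifiers."""
    raise NotImplementedError("requires a browser-automation harness")

def compare_ad_rates(ads_a: list[str], ads_b: list[str], ad_id: str) -> tuple[float, float]:
    """Fraction of impressions in each group that were the ad in question."""
    count_a, count_b = Counter(ads_a), Counter(ads_b)
    rate_a = count_a[ad_id] / max(len(ads_a), 1)
    rate_b = count_b[ad_id] / max(len(ads_b), 1)
    return rate_a, rate_b

# Usage, once collect_ads is implemented:
# male_ads = collect_ads("male")
# female_ads = collect_ads("female")
# print(compare_ad_rates(male_ads, female_ads, "executive-coaching-ad"))
```

The real tool drives live browser sessions and applies statistical significance tests before claiming a difference, but comparing ad rates across otherwise-identical profiles is the core idea.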

This isn’t the first time Google has had trouble with its algorithms. The facial recognition algorithms in its photo software recently labeled a photo of two African-American people as gorillas, and the company was also called out for showing only male images in results for a search on “CEO.” Facebook and other companies trying to build smart machines have had similar problems with their algorithms.

Stories like this aren’t just fodder to titillate your friends on Facebook (itself governed by its own algorithms); they should drive home two important lessons. The first is that as our machines get smarter and we trust them to do more of our thinking, diversity in tech becomes more important. The reason these algorithms make these mistakes isn’t necessarily malice; it’s that they reflect the biases of the people who programmed them. For example, if you train facial recognition software using photos of your employees and you have few black employees, your software won’t recognize African-Americans as well.
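As a toy illustration of that last point (invented numbers, not Google’s system), breaking a model’s accuracy out by demographic group is the simplest way to see the damage an unrepresentative training set does: overall accuracy can look healthy while one group fares far worse.

```python
# Illustrative only: overall accuracy hides how badly a model trained on
# unrepresentative data performs on an under-represented group. Breaking the
# numbers out by group makes the gap visible. All data here is invented.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results: the model was trained mostly on group A,
# so it does far worse on group B even though overall accuracy looks decent.
results = (
    [("A", "person", "person")] * 95 + [("A", "other", "person")] * 5 +
    [("B", "person", "person")] * 6  + [("B", "other", "person")] * 4
)
print(accuracy_by_group(results))   # {'A': 0.95, 'B': 0.6}
```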

When you build software to show certain people job listings, there are any number of ways biases can creep in, from an assumption that men are more interested in C-level jobs to filtering on salary range and thereby eliminating women from some jobs because they are historically paid less. Having employees who can bring these systemic biases to the forefront will help tech firms (and any firms building machine-learning algorithms) avoid these mistakes.
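The salary example is worth spelling out, because the rule involved can look perfectly neutral. A hedged sketch, with invented users and numbers: an ad system that targets a senior-role listing only at people whose inferred salary clears a threshold will, thanks to historical pay gaps, quietly screen out women.

```python
# A sketch of how a seemingly neutral targeting rule can encode bias.
# The system shows a senior-role listing only to users whose inferred
# current salary clears a threshold. Because women are historically paid
# less for the same work, that proxy filters them out disproportionately.
# All names and figures are invented for illustration.

users = [
    {"name": "u1", "gender": "M", "inferred_salary": 120_000},
    {"name": "u2", "gender": "F", "inferred_salary": 104_000},  # same role, lower pay
    {"name": "u3", "gender": "M", "inferred_salary": 118_000},
    {"name": "u4", "gender": "F", "inferred_salary": 101_000},
]

SALARY_THRESHOLD = 110_000  # the "neutral" rule: only likely senior earners see the ad

targeted = [u for u in users if u["inferred_salary"] >= SALARY_THRESHOLD]
print([u["name"] for u in targeted])                                     # ['u1', 'u3']
print({g: sum(u["gender"] == g for u in targeted) for g in ("M", "F")})  # {'M': 2, 'F': 0}
```

Nothing in the rule mentions gender, which is exactly why an employee who knows to look for proxies like this is so valuable.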

The second big lesson we need to learn from these stories is that this software has real consequences and we need to ensure that we don’t blindly trust it. Discrimination from software can have tangible effects on people’s lives, such as downgrading their credit scores, tweaking the value of their homes and even influencing who they perceive to be their friends on social networks. Thus, we need ways to monitor how algorithms affect people, which is why research like Carnegie Mellon’s is so important.

In this case, Google was notified of the findings but didn’t respond to the team. That’s unfortunate, because engaging might have led to better research and, ultimately, better algorithms. In many cases, all researchers are hoping for is more transparency from the companies they study. From the MIT Technology Review story:

However, [Roxana Geambasu, an assistant professor at Columbia University] says that the results from both XRay and AdFisher are still only suggestive. “You can’t draw big conclusions, because we haven’t studied this very much and these examples could be rare exceptions,” she says. “What we need now is infrastructure and tools to study these systems at much larger scale.” Being able to watch how algorithms target and track people to do things like serve ads or tweak the price of insurance and other products is likely to be vital if civil rights groups and regulators are to keep pace with developments in how companies use data, she says.

Google responded to the news stories about the study’s findings with the following statement:

Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.

This is a complicated issue, made worse by the lack of understanding of computer science and of how we are training computers to think and see. For now, public shaming may be enough, but likely only for hot-button issues like racism and sexism. Other problematic algorithms, around class discrimination or labeling drug addicts by their search terms, will be tougher to draw attention to. But they are just as harmful.

The Obama administration is looking at the problem, as are others. Suggestions include the aforementioned public shaming, government legislation or teaching people some kind of algorithmic literacy. But the first step is to hire with diversity in mind.

This story was updated to add Google’s comment.