Fighting bias in A.I. means acknowledging it exists

While artificial intelligence can reduce human error and work at speeds that no individual can replicate, the fact remains that programs and algorithms are built by people with their own sets of biases. Overt or internalized prejudices can seep into how A.I. and machine learning systems are constructed and perform, potentially producing systems that work well only for the portion of the population that resembles their creators.

It’s an issue that many tech corporations are grappling with now, especially amid heightened awareness of racial, gender, and economic imbalances in society. So how can companies ensure that the A.I. products they build are as free from bias as possible? At Sony AI, which was founded in 2020 and works on everything from image sensors to music and movies, the key is trying to identify potential biases at the earliest stages of product development.

“There’s no such thing as a perfectly unbiased algorithm, so I always tell business units, ‘The fact that we’re pushing you to conduct fairness assessments is not because we think that you’ve done something wrong or that your product, in particular, is really problematic. Instead, it’s something we always want to check for,’” said Alice Xiang, head of Sony Group’s A.I. ethics office, at this week’s Fortune Brainstorm A.I. conference in Boston. “Whether it be something relatively innocuous, like an autofocus feature that finds faces or finds eyes, to when we’re talking about robotics, frequently computer vision is a major part of that. If you have a self-driving car, then you need to think about being able to detect pedestrians and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set.”
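In practice, the check Xiang describes amounts to disaggregated evaluation: measuring a detector’s performance separately for each demographic slice of a test set instead of relying on a single overall average. A minimal sketch in Python, assuming hypothetical test results carrying a `group` label and a `detected` flag (both are illustrative, not from the article):

```python
from collections import defaultdict

def detection_rate_by_group(samples):
    """Compute detection recall separately for each demographic group.

    `samples` is assumed to be a list of dicts with a `group` label
    (e.g. a skin-tone or age bucket) and a `detected` flag saying whether
    the detector found the pedestrian in that test image.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for s in samples:
        totals[s["group"]] += 1
        hits[s["group"]] += int(s["detected"])
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical test-set results.
samples = [
    {"group": "A", "detected": True},
    {"group": "A", "detected": True},
    {"group": "B", "detected": True},
    {"group": "B", "detected": False},
]
rates = detection_rate_by_group(samples)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
```

The gap between the best- and worst-served groups, not the overall average, is what a fairness assessment of this kind would flag.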

“Business units often understand when you explain to them what the issue is, broadly, of bias,” Xiang continued. “No one wants to produce products that are biased, but it’s actually quite difficult in practice to figure out the right benchmarks for testing for bias, the right techniques to mitigate bias. That’s where research really plays a key role. Because this is a really new space, there are constantly new methods being developed, and we really want to be on the cutting edge in terms of the techniques that we’re employing.”
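As one illustration of the kind of mitigation technique Xiang alludes to, and not a description of Sony’s actual methods, samples from underrepresented groups can be reweighted so that every group contributes equally to training rather than letting the majority dominate. A rough sketch:

```python
import numpy as np

def balancing_weights(groups):
    """Per-sample weights that give every group the same total weight,
    no matter how many samples it has."""
    groups = np.asarray(groups)
    uniques, counts = np.unique(groups, return_counts=True)
    per_group = {g: len(groups) / (len(uniques) * c) for g, c in zip(uniques, counts)}
    return np.array([per_group[g] for g in groups])

# Hypothetical training set where group "B" is badly underrepresented:
# each of its samples ends up carrying more weight. The weights could be
# passed to any trainer that accepts `sample_weight`, as many
# scikit-learn estimators do.
weights = balancing_weights(["A"] * 8 + ["B"] * 2)
print(weights)  # "A" samples ~0.625 each, "B" samples ~2.5 each
```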

A key issue in this area is identifying where bias already exists and preventing it from becoming part of an algorithm or system. It’s certainly an area of concern for Dr. Margaret Mitchell, the chief ethics scientist of Hugging Face, which focuses heavily on A.I. language processing.

“We find that when we have these large language models training on tons and tons of data … most of it is sourced from the web, where we see a lot of racism and sexism and ableism and ageism,” she said. “[It’s] largely sourced from Wikipedia, which is primarily written by men, white men, between something like 20 to 30 or so, and single and PhD, higher-level education, which means that the kind of topics that are covered, that are then scraped in training the language models, reflect those knowledge bases, reflect those backgrounds.”

To illustrate this point, Mitchell cited a search result from Google, the same company that fired her in early 2021 following her open criticism of the company’s lack of diversity and inclusion (Google says her termination was due to a breach of its code of conduct and security policies). “If you try and do a search for ‘Black history,’ you’ll be redirected to African-American history, which is American-centric and not quite understandable about the whole history of people who are Black,” she said. “It’s a really key issue when it comes to what the language models regurgitate as a function of the normal skews, just in who is talking and who’s being scraped on the web, as well as the inherent racism and sexism, etc. that gets expressed. These end up coming out in what’s generated and what’s suggested.”
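The skew Mitchell describes can be made visible with a simple probe: ask a model trained largely on scraped web text to fill in a blank and compare which completions it ranks highest. A small sketch using the Hugging Face transformers library; the model choice and prompts are assumptions made for illustration, not examples from the article:

```python
from transformers import pipeline

# A fill-mask model guesses the hidden word; skew in its scraped training
# data shows up in which completions it ranks highest for each prompt.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["The doctor said [MASK] would be back soon.",
               "The nurse said [MASK] would be back soon."]:
    top = unmasker(prompt)[:3]
    print(prompt, [(r["token_str"], round(r["score"], 3)) for r in top])
```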

“A.I. is never going to be perfect,” said Dr. Haniyeh Mahmoudian, global A.I. ethicist at DataRobot, which works in machine learning automation and development. “It comes down to having a very thorough understanding of what are the risks of using the system, having a thorough risk assessment of the process, understanding if you have data quality concerns — everything that goes along the way of building an A.I. system. And based on that, then we can understand if there is a need for mitigation.”

Basically, she said, it’s about monitoring A.I. at every step of the way to make sure bias isn’t becoming part of the program. “We can take on some mitigation tasks along the way, or understand who is going to be impacted if this system, at some point, makes mistakes, if it does something that is unexpected. This actually helps us understand if you’re ready to build and put this system in production or if we need to hold back and collect better data until you’re ready to deploy.”
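As a hypothetical sketch of what such a go/no-go decision might look like in code, rather than a description of DataRobot’s actual process, a deployment gate could refuse to promote a model whose validation error varies too much across affected groups; the metric and threshold below are assumptions:

```python
def ready_to_deploy(error_by_group, max_gap=0.05):
    """Deploy only if the worst gap in validation error between groups
    stays under the chosen tolerance."""
    gap = max(error_by_group.values()) - min(error_by_group.values())
    return gap <= max_gap, gap

# Hypothetical validation error rates broken down by affected group.
ok, gap = ready_to_deploy({"group A": 0.04, "group B": 0.11})
print("deploy" if ok else f"hold back and collect better data (gap={gap:.2f})")
```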

For Xiang, transparency throughout the process is key to eliminating biases. “There is more of a sense that we can at least hold a human accountable if something goes wrong, or at least ask them for the rationale behind their decision making,” she said. “From that perspective, I think it’s very important for folks not only to think about, ‘How do we make this as good as possible and do the relevant risk assessment?’ but also carefully document the failure modes of the product, make that very clear, and have mechanisms in place to be able to detect failures in deployment and then act upon them. Because that’s the major place where it can be very risky to move forward with A.I. versus humans.”
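One way to picture the mechanisms Xiang calls for, offered purely as a sketch, is a registry that pairs each documented failure mode with an automated check over a window of production logs, so detected failures can be escalated to a human. The failure modes, metrics, and thresholds below are invented for illustration:

```python
# Each documented failure mode pairs a name with a check that runs over a
# window of aggregated production metrics and returns True when triggered.
FAILURE_MODES = {
    "missed detections spike": lambda logs: logs["miss_rate"] > 0.10,
    "low-confidence flood": lambda logs: logs["avg_confidence"] < 0.50,
}

def monitor(logs):
    """Return the documented failure modes triggered by this log window."""
    return [name for name, check in FAILURE_MODES.items() if check(logs)]

# Hypothetical metrics from one hour of deployment logs.
alerts = monitor({"miss_rate": 0.14, "avg_confidence": 0.62})
if alerts:
    print("escalate to a human:", alerts)
```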
