If companies are serious about their artificial intelligence software working well for everyone, they must ensure that the teams developing it, as well as the datasets used to train it, are diverse.
That’s one takeaway from an online panel discussion about A.I. bias hosted by Fortune on Tuesday.
It can be challenging for companies to find datasets that are both fair and reflective of everyone in society. In fact, some datasets, like those from the criminal justice system, are notoriously plagued with inequality, explained Katherine Forrest, a former judge and a partner at the law firm Cravath, Swaine & Moore.
Consider a dataset of arrests in a city where local law enforcement has a history of over-policing Black neighborhoods. Because of that underlying data, an A.I. tool developed to predict who is likely to commit a crime may incorrectly deduce that Black people are far more likely to be offenders.
“So the data assets used for all of these tools is only as good as our history,” Forrest said. “We have structural inequalities that are built into that data that are frankly difficult to get away from.”
Forrest said she has been trying to educate judges about bias problems affecting certain A.I. tools used in the legal system. But it’s challenging because there are many different software products and no standard for comparing them to one another.
She said that people should know that today’s A.I. “has some real limitations, so use it with caution.”
Danny Guillory, the head of diversity, equity, and inclusion at Dropbox, said one way his software company has been trying to mitigate A.I. bias is through a product diversity council. Council members analyze the company’s products to determine whether they inadvertently discriminate against certain groups of people. Just as Dropbox workers submit products under development for privacy reviews prior to release, employees submit products for diversity reviews.
Guillory said the company’s diversity council has already discovered some bias problems involving “personal identifying information” in an unspecified product, and workers were able to fix the issues. A Dropbox spokesperson later added that this was a pilot product that was never formally released.
The point is to spot bias problems early, instead of having to “retroactively fix things,” Guillory said.