Welcome to a special monthly edition of Fortune’s Eye on A.I. newsletter.
If you don’t want your artificial intelligence software to make biased decisions, make sure you have a “human in the loop.”
Companies have repeated that mantra for years as a remedy against A.I. bias disasters, such as Microsoft’s infamous Tay chatbot going haywire. In that case, internet pranksters taught vile language to Tay, which parroted it back publicly. One major takeaway from the debacle was that Microsoft should have closely monitored the chat app for how it was learning.
Having humans oversee the design and implementation of A.I. helps to catch errors. Humans can better spot bias, like A.I.-powered hiring software that only selects white, male candidates for job interviews.
But companies can also overestimate the ability of humans, who have their own biases, to catch A.I. bias problems.
A report released this month by the National Institute of Standards and Technology (NIST) addresses A.I. bias’s complexity and the obstacles companies must consider when designing software so that it doesn’t deviate from its intended behavior. One of the major takeaways of the report is that if companies don’t account for the biases of their own workers, they may have a more difficult time spotting A.I. blunders.
“People say today, for example, there’s many problems with A.I., but all of them go away as soon as we put a human in the loop to supervise the behavior,” said Apostol Vassilev, a NIST mathematician who co-authored the report.
The report mentioned an older study in which researchers wanted to use an unspecified machine-learning system to deliver gender-neutral online job and training ads to both men and women as part of a STEM career campaign. The problem, however, was that the ad-campaign creators didn’t take into account that in some countries women are considered a “more valuable part of the community because they make most of the household decisions in terms of purchasing,” Vassilev said. Because advertisers bid more for women’s attention, impressions shown to women cost more than those shown to men, so the algorithm, which was designed to minimize costs, ended up showing more of the job ads to men and ruined what was supposed to be a gender-neutral, A.I.-powered job recruitment campaign.
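To make that mechanism concrete, here is a minimal, illustrative sketch, not the system studied in the report: a toy delivery rule that simply buys as many impressions as a fixed budget allows will spend nearly everything on whichever audience is cheaper to reach. The group names, per-impression costs, and greedy logic below are all hypothetical.

```python
# Illustrative toy only: a greedy ad-delivery rule that maximizes impressions
# for a fixed budget. All group names and per-impression costs are made up.

def allocate_impressions(budget, cost_per_impression):
    """Buy the cheapest impressions first until the budget runs out."""
    impressions = {group: 0 for group in cost_per_impression}
    remaining = budget
    # Sort audiences by cost so the cheapest one is bought first.
    for group, cost in sorted(cost_per_impression.items(), key=lambda kv: kv[1]):
        bought = int(remaining // cost)
        impressions[group] = bought
        remaining -= bought * cost
    return impressions

# Hypothetical costs: advertisers bid more for women's attention, so each
# impression shown to a woman costs more than one shown to a man.
costs = {"men": 0.50, "women": 0.75}
print(allocate_impressions(budget=1000, cost_per_impression=costs))
# -> {'men': 2000, 'women': 0}: a "gender-neutral" cost-minimizing rule
#    still delivers the ad entirely to the cheaper audience.
```

Real ad platforms run auctions rather than this simple greedy loop, but the underlying pressure is the same: optimizing for cost alone quietly encodes whatever price differences the market already contains.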
“Seemingly everything is transparent, everything is fair, and all of a sudden, you put the local context into it and you end up with a result that was not the one that you expected,” Vassilev said.
A popular misconception regarding A.I. bias is that most organizations should only focus on preventing bias during the development phase of the underlying machine-learning model, explained Kristen Greene, a cognitive scientist and co-author of the NIST report. However, by only focusing on the A.I. developers that create the underlying algorithms and how their biases may inform the technology, organizations may be “forgetting about every single other human that touches A.I. at any point in the lifecycle,” she said.
Even how an A.I. system presents information can activate a person’s cognitive biases, said Reva Schwartz, a NIST research scientist and lead investigator on A.I. bias. She gave the hypothetical of a machine-learning system presenting a list of criminal suspects to a detective, who may believe that the numerical order of the listing shows who is most likely guilty.
This kind of ranked list “absolutely activates people’s cognitive biases,” Schwartz said. “People perceive things that are higher on a list as more important or more likely to be the thing you’re looking for.”
Ultimately, companies should realize that preventing A.I. bias is a complicated task that involves continual monitoring of the software by multiple teams in order to catch problems that one group may have overlooked. It’s essentially an endless process.
The NIST team hopes that their report lays the groundwork for future research into managing bias in A.I. software.
“There’s still too many open questions that we need to fill in with research and understanding before we can write the standard,” Vassilev said.
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com
A.I. IN THE NEWS
Snapchat to the brain. Snap, the parent company of the popular Snapchat social media app, has bought Nextmind, a neural technology startup that built a headband-style, non-invasive brain-computer interface intended to help people interact with computers by thinking. The startup’s device uses machine learning to measure brain activity captured by the wearable band's sensors. Snap did not disclose the deal’s value, but said the startup will help the company’s research lab, which is developing augmented reality technology in which people wear glasses that overlay digital imagery on the physical world.
Nvidia brings out the chips. Nvidia this week debuted its H100 GPU, intended to help companies train huge A.I. language models more quickly. The computer chip has been tailored to work with the trendy transformer neural-network architecture that researchers have used to teach computers to recognize complex patterns in human language. The company pitched its new chip as capable of helping A.I. researchers develop the kinds of language models that have an “unrelenting appetite for AI compute power.”
Bankers beware. Banks and financial firms that are increasingly using A.I. are at risk of Russian-linked cyberattacks, according to a report by The Wall Street Journal. Several experts expressed concern in the report that Russian hackers could try to exploit holes in A.I. software that has only recently been developed and hasn’t been thoroughly tested for security flaws. “When you introduce machine learning into any kind of software infrastructure, it opens up new attack surfaces, new modalities for how a system’s behavior might be corrupted,” Abhishek Gupta, a leader of the Montreal AI Ethics Institute non-profit, told the Journal.