
A.I. Has a Bias Problem, and Only Humans Can Make It a Thing of the Past

October 26, 2019, 11:00 AM UTC

Big tech has a far-from-sparkling record when it comes to hiring a diverse workforce—and that’s a problem that could bleed into the future. The reason? Without more women and people of color driving the development of artificial intelligence, the results that the technology will spit back out will be, to put it mildly, problematic.

A.I.-based decisions are only as good as the data that helped form them, says James Hendler, professor and director of the Rensselaer Institute for Data Exploration and Applications at Rensselaer Polytechnic Institute in Troy, New York. And if the data—or the way it’s processed—is biased or flawed, the results will be, too. Likewise, if the group of people supplying that data neither reflects the world’s diverse population nor has a broad view of the world, that’s a problem.
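To make that point concrete, here is a minimal, hypothetical sketch (all group names and numbers are invented for illustration, not drawn from any real dataset) of how a system trained on skewed historical decisions simply reproduces the skew it was given:

```python
# Hypothetical illustration: a "model" that learns from biased historical
# hiring data will reproduce the bias it was trained on.
# All records below are invented for illustration only.

historical_decisions = [
    # (candidate_group, was_interviewed)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def interview_rate(records, group):
    outcomes = [interviewed for g, interviewed in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that predicts "interview" at each group's historical rate
# learns nothing except the existing disparity.
for group in ("group_a", "group_b"):
    rate = interview_rate(historical_decisions, group)
    print(f"{group}: historical interview rate = {rate:.0%} "
          f"-> model will favor this group at the same rate")
```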

According to the World Economic Forum’s Global Gender Gap Report 2018, just 22% of A.I. professionals and 12% of machine-learning professionals worldwide are women. For other marginalized groups, the news is even worse: An April 2019 report from the AI Now Institute found that just 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%. There is no public data on transgender people or other gender minorities within the tech industry, according to the report.

But, thanks to a growing group of entrepreneurs and founders of nonprofits, the future is not lost.

Organizing change

As A.I. becomes more ubiquitous—and invisible—in our decision-making processes, biased results threaten to become more frequent and more severe in their consequences, says Tess Posner, CEO of Oakland, Calif.-based AI4All.

That can be annoying when A.I. is, say, recommending products. But it can be life-changing when bad data and algorithms are used to decide who gets a job interview or a loan, how resources are deployed after natural disasters, or who should be paroled. “It could increase marginalization of certain populations and actually make some of the equity issues in the economy that we’re trying to fix worse,” Posner says. “Building A.I. algorithms that can either mirror or, in some cases, enhance those issues would be hugely problematic.”

AI4All provides educational resources so that high school students from underrepresented groups can learn about—and eventually work in—the field of artificial intelligence. The organization works to nurture interest in A.I., develop technical skills, and connect students with mentors who can help them find a career path in the field.

Other groups working to fix A.I.’s problems before they proliferate include the Institute of Electrical and Electronics Engineers (IEEE). In March 2017, the group announced the approval of IEEE P7003™ (Algorithmic Bias Considerations), a standards project aimed at improving transparency and accountability in how algorithms target, assess, and influence the users and stakeholders of A.I. and other intelligent systems. IEEE has an ongoing project and working group devoted to helping algorithm designers identify ways to eliminate negative bias. (On a far broader level, earlier this year the European Commission published guidelines on building trustworthy A.I.)

But change is, hopefully, coming from within big tech too. The Partnership on A.I. is a San Francisco-based nonprofit founded in 2016 by representatives from six technology companies: Apple, Amazon, DeepMind and Google, Facebook, IBM, and Microsoft. In 2017, the Partnership expanded to include other stakeholders. The group, which has representatives from more than 50 organizations, is tasked with researching and addressing aspects of artificial intelligence including ethics, safety, transparency, privacy, bias, and fairness.

“We believe that artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be used to help humanity address important global challenges,” says Mira Lane, partner director, Ethics & Society at Microsoft. “This organization seeks to ensure that our work fulfills these expectations.”

The Partnership on A.I. has identified six “thematic pillars” on which it focuses, ranging from how A.I. makes decisions to its impact on workforce displacement to how it can be used for social good. The organization also plans to create work groups for specific sectors to identify possible industry-related issues in areas like health care or transportation, Lane says.

Already at work

Many companies are already working to stop A.I. problems before they spread. CareerBuilder worked with its internal data scientists and tapped the A.I. and machine-learning expertise of partners including Emory University, Indiana University, and the University of Tennessee, Knoxville, as well as an outside HR consulting firm, to build a new A.I.-powered platform. The features include an A.I.-powered tool that helps companies and recruiters ensure that their job ads are effective and inclusive, as well as an A.I.-powered resume-building tool that can, among other things, help candidates improve their grammar and access more job opportunities.
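CareerBuilder has not published the internals of its tool, but one simple version of the underlying idea, flagging gender-coded wording in a job ad against a word list, might look like the following sketch. The word lists and ad text here are invented placeholders; a production system would rely on researched vocabularies and far richer language models.

```python
# Hypothetical sketch of a job-ad language check. The word lists are short,
# invented examples, not CareerBuilder's actual methodology.

MASCULINE_CODED = {"dominant", "competitive", "rockstar", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing"}

def flag_coded_words(ad_text: str) -> dict:
    words = {w.strip(".,;:!?").lower() for w in ad_text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We need a competitive rockstar engineer to dominate the market."
print(flag_coded_words(ad))
# {'feminine_coded': [], 'masculine_coded': ['competitive', 'rockstar']}
```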

At Microsoft, Lane says the company is taking more immediate action both to remove bias from A.I. and to use the technology itself to improve diversity and inclusion. Microsoft’s internal efforts include scrutinizing data sets and assumptions within the group, explicitly defining what it means for a system to behave fairly, and ensuring that standard is met. Earlier this year, Microsoft shared its error terrain analysis tool, used to reduce errors and understand how its machine-learning models are performing.
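The article does not detail how Microsoft’s tooling works, but the core idea behind error analysis, breaking a model’s mistakes down by cohort rather than looking only at overall accuracy, can be sketched roughly as follows. The cohorts and records here are invented for illustration.

```python
# Hypothetical sketch of cohort-level error analysis: overall accuracy can
# hide the fact that errors concentrate in particular subgroups.
from collections import defaultdict

records = [
    # (cohort, true_label, predicted_label) -- invented data
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 0, 1), ("rural", 1, 1), ("rural", 0, 0),
]

mistakes_by_cohort = defaultdict(list)
for cohort, truth, prediction in records:
    mistakes_by_cohort[cohort].append(truth != prediction)

overall = sum(sum(m) for m in mistakes_by_cohort.values()) / len(records)
print(f"overall error rate: {overall:.0%}")
for cohort, mistakes in mistakes_by_cohort.items():
    print(f"  {cohort}: error rate = {sum(mistakes) / len(mistakes):.0%}")
# A 25% overall error rate masks a 0% rate for one cohort and 50% for the other.
```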

Consulting firm Accenture has also devoted resources to addressing biased A.I. The company’s Fairness Tool helps teams evaluate sensitive data variables and other factors that may lead to a biased outcome, in order to increase fairness in how accurately the model treats different groups. The tool also helps teams identify false positives, false negatives, and other indicators of whether the A.I.’s output is fair.
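Accenture’s tool itself is proprietary, but the kind of group-level comparison the paragraph describes, checking whether false positives and false negatives fall more heavily on one group than another, can be illustrated with a small sketch. All records and group names below are invented.

```python
# Hypothetical sketch comparing false-positive and false-negative rates
# across groups, the sort of disparity a fairness check looks for.

records = [
    # (group, true_label, predicted_label) -- e.g. 1 = "approve loan"; invented data
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

def error_rates(group):
    rows = [(t, p) for g, t, p in records if g == group]
    negatives = [p for t, p in rows if t == 0]
    positives = [p for t, p in rows if t == 1]
    false_positive_rate = sum(negatives) / len(negatives) if negatives else 0.0
    false_negative_rate = (
        sum(1 for p in positives if p == 0) / len(positives) if positives else 0.0
    )
    return false_positive_rate, false_negative_rate

for group in ("group_a", "group_b"):
    fpr, fnr = error_rates(group)
    print(f"{group}: false positive rate = {fpr:.0%}, false negative rate = {fnr:.0%}")
```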

But changing the A.I. tech itself is, of course, only a part of what needs to be done. Organizations also need to get the basics right, such as creating development teams that are diverse across factors like gender, race, ethnicity, socio-economic backgrounds, and areas of expertise. For example, says Rensselaer Institute’s Hendler, development teams should also include people with expertise in ethics, as well as members of the community the A.I. is meant to serve and people with disabilities.

“As A.I. systems get more sophisticated and start to play a larger role in people’s lives, it’s imperative for companies to develop and adopt clear principles that guide the people building, using and applying A.I.,” Lane says. “We need to ensure systems working in high-stakes areas such as autonomous driving and health care will behave safely and in a way that reflects human values.”

As more companies adopt A.I., more issues will surely come to the forefront. Hopefully the conscientious human oversight and governance provided by the organizations above will help make sure A.I. doesn’t increase unfair practices or marginalization in the coming years and beyond.
