Commentary: Fear of an AI Apocalypse Is Distracting Us From the Real Task at Hand

With the excitement of every technological advancement comes a wave of fear and uncertainty. We’ve seen this scenario play out repeatedly since the Industrial Revolution as people wrestled with the impact of new technology on their lives and work. Today we see that fear bubble up in the wake of every AI breakthrough.

Despite huge progress in recent years, AI is still in its early days, and with that comes a level of uncertainty. This uncertainty is only compounded when glitches arise or expectations outweigh reality, which leads to misunderstanding and anxiety. Elon Musk, an outspoken AI critic, capitalizes on this misunderstanding by painting pictures of a looming AI apocalypse even as he embeds powerful AI into Tesla’s vehicles. All of this shows that, to some degree, we find ourselves caught up in a dangerous and unnecessary hype cycle.

We have to reach past that unfounded fear. Here is the reality: There is no credible research today supporting these doomsday scenarios. They are compelling fictions. I enjoyed watching The Terminator just like many other kids my age, but such entertaining scenarios distract us from addressing the immediate threats posed by AI.

We face major issues around bias and diversity that are much more human and much more immediate than the Singularity or robot uprisings: training data with embedded biases, and a lack of diversity both in the field and in our datasets.

By training AI on biased data, we may unintentionally instill our own biases and prejudices in AI. Left unchecked, those biases will lead to AI that benefits some at the expense of others. Without greater diversity in the field, a narrow group will have outsized influence over the hidden decisions behind the creation of AI. As AI is integrated into decision-making processes with growing impact on individual lives—hiring, loan applications, judicial review, and medical decisions—we will need to be vigilant against it absorbing our worst tendencies.

There’s No Such Thing as Innocent Data

As AI touches our most essential human systems, we need to keep in mind that it does not operate in a vacuum. AI relies on massive amounts of data, which powerful algorithms analyze to surface insights that can be revelatory. But AI is only as good as the training data it receives. If the data carries biases—tinged with racist or sexist language, for example—they will infect the outcomes. Whatever you teach an AI will be amplified as the algorithm replicates its decisions millions of times. Bias previously unseen in the data becomes disturbingly clear as an AI system starts outputting results that reflect our most ingrained prejudices.

Unlike a robot uprising, biased AI is not a hypothetical risk. A biased AI that judged a beauty contest chose light-skinned contestants over dark-skinned ones. A biased Google algorithm categorized black faces as gorillas. In one study, a biased AI reviewing resumes preferred European-American names over African-American ones. In another study, biased AI associated male names with career-oriented, math and science words while it associated female names with artistic concepts. Just as our own clicks keep us inside our own Facebook filter bubbles, biased data creates AI that propagates human prejudice.
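To make the name-association findings concrete, here is a minimal sketch of that kind of association test in Python. The word vectors and name choices below are illustrative placeholders, not the studies’ actual data; in the real studies, embeddings came from models trained on large web corpora.

    import numpy as np

    # Hypothetical pretrained word embeddings (placeholder values); in the
    # actual studies these come from models trained on large web corpora.
    vectors = {
        "emily": np.array([0.9, 0.1]),
        "jamal": np.array([0.2, 0.8]),
        "career": np.array([0.8, 0.2]),
    }

    def cosine(a, b):
        # Cosine similarity: how closely two embeddings point the same way.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # If names absorb biased associations from the training text, this gap
    # will be systematically nonzero across many name pairs.
    gap = cosine(vectors["emily"], vectors["career"]) - \
          cosine(vectors["jamal"], vectors["career"])
    print(f"career-association gap: {gap:+.3f}")

The studies ran comparisons like this over many names and attribute words drawn from embeddings trained on real web text, and found systematic gaps.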

We cannot avoid taking responsibility for our decisions by deferring to AI. The more we incorporate these systems into our decision-making processes, the more we must do to ensure that we are using these systems responsibly. The first step to addressing bias in our data is to build greater transparency into the data collection process. Where did it come from? How was it gathered? Who collected it? We also need to address bias in our models. By making the inner workings of our models clearer, we will be able to uncover biases in our data that we had not previously identified.
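As one illustration of what such a check could look like in practice, here is a minimal sketch of an outcome audit in Python. The decision records and the 80% threshold, borrowed from the common “four-fifths” rule of thumb, are illustrative assumptions, not a prescribed method:

    from collections import defaultdict

    # Hypothetical model decisions: (demographic group, approved?).
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]

    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok

    rates = {g: approved[g] / total[g] for g in total}
    print("approval rates:", rates)

    # Flag any group whose approval rate falls below 80% of the highest
    # group's rate (the illustrative "four-fifths" rule of thumb).
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if r < 0.8 * best]
    print("groups needing review:", flagged)

A check this simple will not catch every kind of bias, but it shows how transparency about inputs and outputs makes disparities measurable rather than hidden.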

If we can properly address the issue of bias in AI, it is much more likely that AI will continue to be a helpful tool for creating a better world. While it may be infeasible to eliminate bias in billions of people around the world, it seems entirely possible to create AI that operates with less bias than its creators.

While human bias created these challenges, human insight can address them. Algorithms are making great strides in rooting out fake news and identifying discrimination, but human oversight will remain necessary to build more equitable AI systems. In the ongoing conversation about how AI will change jobs, it’s easy to envision a new role emerging: the AI Monitor. We will still need human checks on the inputs and outputs of AI.
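To sketch what an AI Monitor’s hook into a system might look like, here is a minimal human-in-the-loop gate in Python. The model function, its confidence score, and the review threshold are all hypothetical illustrations:

    REVIEW_THRESHOLD = 0.9  # illustrative cutoff, tuned per application

    def model(application):
        # Stand-in for a real model call returning (decision, confidence).
        return "approve", 0.62

    def queue_for_human_review(application, suggested):
        # Stand-in for routing the case into a human reviewer's queue.
        print(f"flagged for review (model suggested: {suggested})")
        return "pending human review"

    def decide(application):
        decision, confidence = model(application)
        if confidence < REVIEW_THRESHOLD:
            # Low confidence: defer to a person rather than act alone.
            return queue_for_human_review(application, decision)
        return decision

    print(decide({"applicant": "example"}))

The design choice here is simply that the system never acts alone on uncertain cases; where to set the threshold, and which cases count as high-stakes, is exactly the judgment an AI Monitor would exercise.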

AI Is Really a Human Problem

This leads us to the second, related issue in building less biased AI: we need more diversity in the community of researchers and developers who are building these systems. Several studies have revealed severe imbalances in the industry. According to Code.org, “black, Latin, American Indian, and Native Pacific Islander students are dramatically underrepresented,” comprising just 17% of computer science majors. Less representation often coincides with less accessibility—not just in classrooms, but in companies, governments, and citizen groups.

We need to actively ensure access to these new technologies as they arise. Fortunately, AI can also help with this issue. Using AI, we can deliver more intuitive developer tools and increase access to education, thereby widening the pool of people who can build AI in the first place.

We can already see this happening now, with some apprenticeship programs specifically seeking to bring underrepresented groups into computer-science classrooms. An organization called Year Up trains 4,000 low-income adults ages 18 to 24 in technology and finance every year. It has seen remarkable success: 84% of Year Up’s students are in college or employed within six months of graduating.

As we create educational opportunities for a wider range of people, the workforce will become more diverse, and so will the approaches to AI’s data and design. With more diversity, we can avoid some of the pitfalls of bias that come with a single perspective.

The Future Starts Now

Moving forward, we have to address these challenges with urgency. The future will be here before we know it.

AI is possibly the most advanced tool ever developed, and like a hammer, it can be used for good or ill. With the right oversight from scientists and political leaders, AI can change the world for the better. It can automate food production and transportation, make personalized healthcare a reality, fill information gaps, and give everyone—regardless of skill level or background—the ability to be more productive.

In this respect, creating better AI is linked to creating a better life for people. That is the promise of AI: to make us better, more efficient versions of ourselves and to give us the opportunity to focus on more creative, impactful tasks. Even in these early days, it is already helping us by holding up a mirror.

Richard Socher is chief scientist at Salesforce.
