
Responsible A.I. can’t exist without human-centered design

January 21, 2022, 3:32 PM UTC
A.I. workflows that are not designed around human users pose safety and ethical concerns.
Jonas Gratzer/LightRocket via Getty Images

“Hey Siri, what’s the weather?” It’s one of the most common questions asked of Apple’s virtual assistant, but it’s also one of the many ways artificial intelligence (A.I.) is already a part of your life.

Such experiences have become the ambient noise of our daily lives, making things easier in a thousand little-noticed ways like setting a timer, populating Netflix recommendations, or proposing words to finish an email.

What does grab the public’s attention are A.I. failures: the airplane safety system that reportedly caused Boeing 737 Max jets to nose-dive, or the recall of self-driving cars.

A.I.’s vast potential–whether in aiding public safety, fighting the COVID-19 pandemic, or helping you find a photograph on your smartphone–and its explosive growth raise the question of how to responsibly maximize the upside while safeguarding against mistakes and disasters. The answer lies in the relationship between A.I. and its users: rooting A.I. in human need.

A.I.’s best mission statement is arguably to maximize human potential by being a powerful assistive tool that liberates human intelligence from mundane or overwhelming tasks. Design is that mission’s connective tissue. Simply put, an A.I.’s effectiveness is bounded by its user’s experience, a link we must focus on as we continue to evolve A.I. systems. If A.I. is a tool, design is the handle or grip that allows humans to wield it.

And we need those tools. While the information age has left us awash in data, humans can only process a finite amount of it. Think of our brains as openings through which only so much data can flow. Well-designed A.I. can identify what’s important, limiting what we try to squeeze through the opening.

In practical terms, three rules help create responsible, powerful A.I. tools by keeping them human-centered:

Suit the tech to the problem, not the other way around. People don’t buy shovels to have them–they want to dig holes. Too often in high tech, we create first and find a use later.

Identify customer needs and then design the best technology to solve them. For example, real-time call transcription can assist police departments in taking 911 calls. A.I. can address the user’s need by searching for and flagging key information, such as the location and type of emergency, enabling the call-taker to focus on the caller and their problem.
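To make that concrete, here is a minimal TypeScript sketch of transcript flagging. It assumes a transcription service that streams text segments; the categories and regular-expression patterns are invented for illustration, and a production system would use a trained entity-extraction model rather than pattern matching.

```typescript
// Sketch: flag potentially important details in a streaming 911 transcript.
// The categories and patterns below are illustrative assumptions only.
type Flag = { category: string; match: string; index: number };

const PATTERNS: Record<string, RegExp> = {
  // Hypothetical patterns for a U.S.-style address and common emergencies.
  location: /\b\d{1,5}\s+\w+\s+(street|st|avenue|ave|road|rd|boulevard|blvd)\b/gi,
  emergencyType: /\b(fire|accident|break-?in|overdose|assault|flood)\b/gi,
};

function flagTranscript(segment: string): Flag[] {
  const flags: Flag[] = [];
  for (const [category, pattern] of Object.entries(PATTERNS)) {
    for (const m of segment.matchAll(pattern)) {
      flags.push({ category, match: m[0], index: m.index ?? 0 });
    }
  }
  return flags;
}

// A call-taker's console could highlight these flags as the caller speaks.
console.log(flagTranscript("There's a fire at 42 Elm Street, please hurry"));
// -> flags for "fire" (emergencyType) and "42 Elm Street" (location)
```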

Preserve human agency by embracing clarity–and ambiguity. As an assistive technology, and one that evaluates along a spectrum of likelihood, A.I. must be able to present information in an easily understandable way, including the ability to express uncertainty and doubt. For example, an A.I.-powered transcription tool can be designed to adjust the transcript’s font to indicate uncertainty: the harder a word is to read, the less certain the A.I. is that it transcribed the word correctly.
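As a rough sketch of how that could work, the TypeScript below maps each word’s recognizer confidence to CSS opacity, so less certain words literally fade. The per-word confidence scores and the opacity mapping are assumptions for illustration, not any particular product’s rendering logic.

```typescript
// Sketch: render a transcript so lower-confidence words are harder to read.
type Word = { text: string; confidence: number }; // confidence in [0, 1]

function renderWord({ text, confidence }: Word): string {
  // Assumed mapping: fully confident words are solid, doubtful words fade,
  // but never below 0.3 opacity so the user can still see a word is there.
  const opacity = (0.3 + 0.7 * confidence).toFixed(2);
  return `<span style="opacity: ${opacity}">${text}</span>`;
}

const transcript: Word[] = [
  { text: "smoke", confidence: 0.97 },
  { text: "on", confidence: 0.99 },
  { text: "Elm", confidence: 0.54 }, // the recognizer is unsure about "Elm"
];

console.log(transcript.map(renderWord).join(" "));
```

Fading a doubtful word, rather than hiding it, keeps the human in the loop: the user sees both the machine’s best guess and how much to trust it.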

The alternative–communicating results shorn of ambiguity–can erode human agency by narrowing options. Design is the difference. The benefits of clarity flow both ways: The better an A.I. understands its user, the more quickly it will be able to identify important information.

Transparency also helps users who are responsible to other human stakeholders. Arthur C. Clarke memorably said that “any sufficiently advanced technology is indistinguishable from magic.” However, for those who need to communicate why decisions were made, such as public safety officials, the relationship between what an A.I. algorithm produces and its inputs needs to be clear. 

Optimize for accountability by properly contextualizing. Designing A.I. to solve unique problems and work in specific workflows might seem counterintuitive, but such a measured approach will deliver more focused value to users in the long run. It can allay privacy concerns, for example, by grounding A.I. in a larger workflow whose policies and procedures mitigate ethical lapses, misuses, and mistakes.

The example of the 737 Max illustrates the importance of ensuring a human-focused context. Its software was designed to lower the plane’s nose if a sensor detected that it was rising too high. When that sensor malfunctioned, the pilots had only seconds to disengage the system–but they were unaware it was part of the airplane’s design.

Like A.I., design is everywhere and often goes unnoticed. It is the difference between your favorite chair and the one you avoid, or why you choose one app over another that performs much the same function.

A workflow focused on the end users might have alerted the crew to the apparent danger and given them a clear way to override the system. Better A.I. design, focused on assisting humans to make better decisions rather than replacing human intelligence, might have prevented a tragic outcome.

Mahesh Saptharishi, Ph.D., is the executive vice president and CTO of Software Enterprise & Mobile Video at Motorola Solutions. A highly respected technology expert and thought leader, Saptharishi earned a doctorate in machine learning from Carnegie Mellon University and has authored numerous scientific publications, articles, and patents.
