What Happens When a Self-driving Car Is at Fault?

April 25, 2018, 6:50 PM UTC

March 18 changed everything—and nothing—in the frenzied and nascent world of autonomous vehicles. That Sunday evening, in a Phoenix suburb that has become a hub for testing autonomous vehicle technology, an Uber self-driving vehicle struck and killed pedestrian Elaine Herzberg. The vehicle was in autonomous mode at the time of the collision, with a human test driver behind the wheel.

The incident is believed to be the first death caused by a fully autonomous vehicle. It prompted Uber to halt autonomous vehicle testing on public roads in four cities and other companies to pause their own public road tests. It also led Arizona Gov. Doug Ducey, a proponent of autonomous vehicle technology, to suspend Uber from operating in the state. Advocacy groups called for a national moratorium on self-driving tests.

A fatal self-driving car crash seemed inevitable. Though the arrival of the robot car promised a dramatic reduction in the 1.25 million road traffic deaths that occur around the globe each year, there was also the sneaking suspicion that someday, for some reason, a self-driving vehicle would cause a collision—perhaps because of a string of bad code, an unexpected equipment failure, or an impossible decision. And then what? Who is responsible, legally speaking, in a world where machines make their own decisions?

“The short answer is, it depends,” says Jim McPherson, a California attorney who studies autonomous vehicles. “The longer answer is, anyone who is responsible for causing harm.”

A visualization (top photo) and the reality (bottom) of an autonomous vehicle encountering a school bus. Courtesy of Waymo

In the eyes of insurers, today’s self-driving vehicles are treated no differently than conventional cars when they’re involved in a collision, according to Maureen Brown of Munich Reinsurance America, a firm that insures a number of companies testing autonomous vehicles. And states that allow companies to test self-driving vehicles require them, as with human drivers, to have insurance. Every autonomous Uber vehicle, for example, is covered by commercial auto liability insurance that covers bodily injury (including death) and property damage. And that coverage is in place regardless of whether the car is in autonomous mode or not.

An autonomous Volvo SUV, owned and operated by Uber, lying on its side in the road after a collision in Tempe, Ariz., on March 24, 2017. Mark Beach—Fresco News/Reuters

Still, expect changes. The rise of self-driving cars will prompt us to adapt existing frameworks for new operators. “Today when you’re writing an auto insurance policy, it’s all about the drivers,” says Robert Passmore, an executive at the Property Casualty Insurers Association of America. “What’s their driving record and that kind of thing? As we get more and more into where the system is driving, it’s going to be increasingly about that system.”

Not to mention its track record. “It’s not just being able to show what happened but being able to show that the data should be believed,” says Bryant Walker Smith, a law professor at the University of South Carolina who studies driverless car regulations. “And then having the resources on all sides—that’s government investigators, plaintiffs, even defendant companies—to be able to analyze that data, understand it, and model it.”

The U.S. legal system has yet to be truly tested by a self-driving car crash. Every incident involving an autonomous vehicle in the U.S. to date—and there have been few—has been settled out of court. Just one proper lawsuit has been filed so far, stemming from a collision involving a self-driving car powered by GM-owned Cruise Automation and a motorcyclist in San Francisco. But even that case appears destined for a settlement.

Expect that pattern to continue—after all, no one wants to take the road not yet traveled. Indeed, Uber reached a settlement with Herzberg’s husband and daughter within 10 days of the incident, notes University of Toledo law professor Agnieszka McPeak.

“That case settled pretty quickly because they don’t want negative precedent,” she says. “As soon as there’s a case that goes against them, it opens up a Pandora’s box of liability.” 

This article originally appeared in the May 1, 2018 issue of Fortune.
