Google's Chris Urmson shows a Google self-driving car to U.S. Transportation Secretary Anthony Foxx and Google chairman Eric Schmidt at Google's headquarters on February 2, 2015.
Photograph by Justin Sullivan — Getty Images
By David Z. Morris
December 7, 2015

In a recent study, researchers with the University of Michigan Transportation Research Institute suggest that, much like your neighbor’s hormonal fifteen-year-old, autonomous vehicles may start out with pretty limited driving privileges. Research professor Dr. Michael Sivak and UM staffer Brandon Schoettle argue that there are certain circumstances driverless vehicles just can’t deal with yet, and that, until they can pass a licensing test proving their ability to “operate in all driving situations,” they’ll need what amounts to an A.I. learner’s permit.

Though driverless systems of some sort seem increasingly imminent, Sivak and Schoettle point out that the systems still have shortcomings, some of which may be difficult to solve even in the medium term. Those include problems with both sensing and decision-making.

First, as good as cameras and sensors are, driverless systems still sometimes have a hard time interpreting what they see. They don’t do too well in snow or even, sometimes, rain, and can be flummoxed by unusual situations, like downed power lines and flooded roads. Like any good parent, Google (GOOG) sees certain situations where its driverless cars need to be supervised or reined in.

There is one big difference, the authors argue, between a teenager and a driverless car: driving in good conditions won’t make an autonomous system better in rougher conditions. That means the only way for a Google car to pass its driver’s test will be through a software upgrade distributed by the manufacturer. (Though machine learning could change that.)

Even if those sensing problems are solved soon, Sivak and Schoettle find the challenges much more daunting when it comes to decision-making. First, the authors say we need to think harder about what being a good driver really means; pretty clearly, it isn’t a matter of simply following the rules. Sivak and Schoettle point out that human drivers regularly break the letter of traffic laws in the interest of safety, such as when they speed to match the flow of surrounding traffic, or cross double yellow lines to avoid an accident.

This leads to a really tough question: “Should manufacturers be allowed,” the authors ask, “to program a vehicle to willfully break applicable laws? If so, which laws and to what extent?” Human drivers get tested on this kind of boundary-pushing when they take an in-car driver’s test, with a tester who can carefully evaluate their judgment. It’s likely that only extensive real-world monitoring (as opposed to, for instance, reviewing lines of code) could ever show that an autonomous system has the wisdom to make a legally ambiguous call.

Then there’s the real doozy: what about when there is no right decision? The UM authors imagine an autonomous system faced with a so-called “trolley scenario,” in which one available choice harms the car and its passengers, and the other harms a pedestrian. Would an AI be ‘correct’ to sacrifice the safety of its passengers to save a stranger?

That’s not a question most humans have to face on their driver’s test, so asking robots to find the right answer might not be fair—at least, not until they want to join Starfleet, where the exams are a bit more philosophical.
