Humans Saved Google’s Self-Driving Cars From 13 Accidents

January 13, 2016, 8:55 PM UTC
Kirsten Korosec

Google’s self-driving cars would have been in 13 accidents—and at fault 10 of those times—had its test driver not taken control of the vehicle, according to a recent report submitted to California regulators.

The report, filed with the California Department of Motor Vehicles, shows progress. It also illustrates just how far Google (along with other companies like Mercedes-Benz, Delphi, and Nissan) has to go before its software is safe enough for widespread public use.

Google’s self-driving cars aren’t perfect. One did get pulled over by a police officer for mucking up traffic in Mountain View, Calif., after all. But the company has managed to avoid any major catastrophes like hitting people, other cars, or objects that would threaten its self-driving car project. It has driven its cars in autonomous mode 1.3 million miles—424,331 of which were on public roads—since September 2014.

In that time, it’s been involved in 17 accidents and Google was never at fault. With such a record, it’s easy to believe that Google is on the cusp of deploying fleets of self-driving cars.


However, its annual report provides a clearer picture of how dependable Google’s software really is.

Any company issued a testing permit from the California DMV must submit an annual report detailing its so-called “disengagements,” a jargon term for the number of times drivers have had to take control of a car because the software failed or for safety reasons. For example, a driver might take control if the car encounters another vehicle traveling the wrong way down the road, or if a pedestrian or cyclist acts erratically.

When this happens, Google will replay the real-world situation in its simulator to determine what would have happened if the test driver hadn’t taken back control. It’s the best and safest method to evaluate the software, and to tweak it if it makes the wrong decision. A software “fix” is tested against many miles of simulated driving, then tested on the road, and, after review and validation, rolled out to the entire fleet, Google says.

In the reporting period, there were 69 events across Google’s fleet when the driver took manual control of the car because of safety reasons. Google put these incidents through its simulator and determined that the driver prevented the self-driving car from making contact with another object 13 times.

In 10 of those cases, the Google software would have been at fault. Two of those involved traffic cones, and the remaining three incidents were caused by another driver’s reckless behavior, according to the report.

These are encouraging numbers, says Chris Urmson, director of the self-driving project, because most of the incidents occurred early on. Only five took place during the 370,000 miles driven in 11 months of 2015, Urmson wrote in a blog post on Medium. He expects the rate of these incidents to keep declining.

Urmson then tempered that positive spin with a big caveat: “That said, the number of incidents like this won’t fall constantly; we may see it increase as we introduce the car to environments with greater complexity caused by factors like time of day, density of road environment, or weather.”


Google’s test drivers also sometimes take over the steering wheel when Google’s software detects a technical failure. In such cases, the car gives an audio and visual signal (a red light and a ding, Google says) to prompt the driver to take manual control.

Google says this occurred 272 times during the September 2014 to November 2015 reporting period. It took Google test drivers an average of 0.84 seconds to respond.

The rate of these technological hiccups has fallen, from one incident per 785 miles driven at the end of 2014 to one per 5,318 miles at the end of 2015. In other words, the software is improving.

But is it improving fast enough for Google to reach its goal? The company has said it wants to commercialize self-driving cars by 2020.

Technology isn’t the only impediment to Google’s self-driving ambitions.

The California DMV released draft rules in December that prohibit the use of fully autonomous driverless cars that don’t have a steering wheel or a brake pedal—like the prototype that Google has developed. Google was angered by the draft rules and accused California of interfering with the development of fully self-driving cars.
