On Friday, a Lexus outfitted with Google autonomous driving technology was struck by a vehicle that ran a red light in Mountain View, California. Some observers have called it the worst crash yet involving one of Google’s autonomous vehicles. There were no reported injuries.
A photo of the aftermath (which you can see at 9to5Google) shows the Interstate Batteries van apparently at fault. According to a statement from Google, the autonomous vehicle’s “light was green for at least six seconds before our car entered the intersection.”
Witnesses said that after the crash, “dazed Google employees” sat waiting for a tow truck.
The crash stands in sharp contrast to recent incidents involving Tesla vehicles driving in Autopilot mode. Those crashes, including one that was fatal, suggested at least some failure of the vehicles’ detection systems to properly read and respond to their environment. But it’s hard to lay much blame on Google’s system for being T-boned by a van running a red light.
Instead, the crash highlights a very different problem for driverless cars—that they will continue to share the road with fallible human drivers for decades to come. A recent Goldman Sachs report points out that, at least at current replacement rates and under current ownership models, it could be 2060 before the North American auto fleet reaches full autonomy.
Under those circumstances, a new sort of question arises—how much responsibility will autonomous systems have for anticipating and avoiding the errors of old-fashioned human drivers?