Regulator on Tesla Autopilot Death Says One Incident Won’t Derail Tech
About halfway through his speech at an autonomous car conference in San Francisco on Wednesday morning, Mark Rosekind, the administrator of auto safety regulator the National Highway Traffic Safety Administration, finally addressed what he referred to as “the elephant in the room.”
“I am not going to comment on an ongoing investigation, and it would be inappropriate to prejudge the outcome until all the facts are in for us to analyze,” said Rosekind.
Of course, he was not-so-subtly referring to his agency’s investigation into a fatal crash involving a Tesla car with autonomous vehicle software engaged.
That crash, which occurred in May in Florida, was the world’s first known fatality in an autonomous vehicle. That investigation appears to be ongoing. NHTSA also opened up an investigation into a second recent crash with a Tesla car in Pennsylvania, but Tesla CEO Elon Musk said recently that autopilot wasn’t engaged during that accident.
Tesla launched its autopilot software last October. The technology uses sensors, cameras, vision-based computing, radar, algorithms, and wireless tech to control certain aspects of driving with a computer. Tesla cars with autopilot engaged can steer and change lanes without driver input, and a driver can summon a car out of a parking spot.
While it’s still not clear if Tesla’s autopilot software contributed to the fatal crash, the incident kicked off a media frenzy around whether the rollout of autonomous car tech needs more oversight, and if Tesla should disable the technology.
Musk has responded to critics and said that Tesla will not disable the tech. But he has said that the company is looking into ways to make the software better.
In front of over a thousand auto executives, entrepreneurs, policymakers, and researchers at the Automated Vehicles Symposium, NHTSA’s Rosekind followed up his “no comment” on the Tesla autopilot investigation with a couple of other remarks.
Most importantly, he forcefully said, “No one incident will derail the Department of Transportation and NHTSA from its mission to improve safety on the roads by pursuing life-saving technologies.” In other words, regulators won’t use any accident (presumably even if Tesla autopilot is found to have contributed to the crash) as an excuse to halt the rollout of autonomous car tech.
The opportunity to make driving safer with autonomous vehicles is just too important to be stopped by an unfortunate event, Rosekind conveyed. When it comes to road safety in general, “we’re not in a good place,” he said, referring to the 35,000 deaths on American roads last year.
The administrator also provided a sunnier outlook on investigating crashes like the Tesla autopilot death, remarking, “New highly automated vehicles provide an enormous opportunity for learning that has rarely existed before.” If something goes wrong with an automated car, the data can be analyzed, learned from, and shared with other autonomous cars.
Rosekind pointed out how that could make autonomous cars far faster at learning and improving than human drivers.
“New drivers have to learn on the road and make the same mistakes as thousands before them,” said Rosekind. “Automated vehicles will be able to benefit from the data and learning from all others on the road.”
NHTSA and the DOT plan to unveil guidance on automated vehicles this summer, and Rosekind said the document would come out “very soon.”