Tesla’s (TSLA) revelation of the first known fatality involving a self-driving car dominated the news cycle last month. Those who feared the technology used the death as proof that self-driving cars are dangerous. Those who support the technology worried it would set their efforts back.
Through the drama, I wanted to hear from the Self-Driving Coalition for Safer Streets. The lobbying group formed this spring; its members feel so strongly about the importance of getting self-driving regulation right that they set aside their fierce rivalries to team up. The coalition includes Uber and Lyft, as well as Ford (F), Volvo, and Google (GOOG). They’re all taking different approaches to self-driving, but they agree that full autonomy is safer than partial autonomy, which is what Tesla’s Autopilot feature currently offers.
I spoke with David Strickland, former NHTSA regulator and current counsel to the coalition, about the Tesla fatality, the self-driving car industry’s great “evolution vs. revolution” debate, whether tech companies have a responsibility to make sure people don’t abuse their products, and the sticky problem of irrational human beings.
The conversation has been edited and condensed for clarity.
Fortune: What is the self-driving car coalition’s approach to making self-driving cars a reality—straight to full autonomy or incremental features, like Tesla?
David Strickland: The issue is human error. 94% of all crashes have an element of human error in them. We lost 34,000 people last year to crashes. Half of all fatalities are [people not wearing seatbelts], and a third have an impaired driver involved.
[The coalition believes] in having self-driving where there is zero expectation that the human has to intervene. Once you do that, you are addressing a wide range of driver problems that have causality to crashes. It is a much more difficult and intensive engineering task, but we believe it is the most assured way to approach it from the safety aspect.
Making sure the technology is 99.9999% assured to perform as designed is a crucial element. It isn’t simply making the machine and the software [perfect]; it’s also anticipating the uses of the technology. Every manufacturer has a responsibility to address foreseeable uses. What are the ways they are going to use it? What ways does usage create unnecessary risk? How do you build interventions to stop that?
This is an issue manufacturers have to deal with in auto manufacturing every day. NHTSA has to deal with holding manufacturers accountable for foreseeable use and abuse.
In general, tech companies don’t like to be held accountable for someone abusing their products. They tend to say, “We warned them. It’s not our problem if people don’t use it as designed.”
There is a tension in that legal and philosophical nexus. You can’t nerf the planet. Great technologies and innovations have some risk of usage. They just do. Driving is a very hazardous activity, but millions of Americans get behind the wheel every day.
The questions are: What do warnings mean? What is the duty to warn? Where do manufacturers have a responsibility to warn? And what is the responsibility of the consumer not to take on undue risk? If you see a significant number of people abusing the product in a particular way, it is incumbent on the manufacturer to address it. When you have a product that creates an attractive risk, you have to think about how to take reasonable countermeasures.
Fortune’s recent feature cover story on self-driving cars: Some Assembly Required
What has the reaction to the Tesla fatality been among members of the coalition?
We were all saddened. The reason why everybody is working so hard on this technology is because we lose way too many people to crashes. That being said, not knowing all the details of the crash scenario, the members of the coalition have not made a policy statement about it.
We know there are going to be other manufacturers besides large, well-capitalized, multinational businesses deploying this technology, and we need to make sure to protect the ecosystem, so that we don’t have some player taking on unnecessary risk that may impact the opportunity to fully deploy because people think self-driving may not be ready or may be dangerous. We’re very supportive of having the right regulatory balance to achieve the highest measure of safety, while also balancing that with deployment.
What do you think of the way Tesla handled the fatality?
Every company has a different set of risk assessments in how it approaches things. It’s premature because there are still investigative aspects [around the cause of the Tesla fatality]. I’m loath to say, “Well, that company did it wrong.”
Auto manufacturers typically do not undertake public betas on technologies they’re not ready to put into the stream of commerce. By the time it gets into the hands of the consumer, it’s consumer-ready.
For more, read: What Elon Musk Misses About Self-Driving Cars
Do you think the fatality will have an impact on the NHTSA’s guidelines?
I don’t think so. As unfortunate as it was, there was no one in the industry that didn’t anticipate that would happen. The Tesla Model S is not a self-driving car; it’s a car with an advanced driver assist system. They expect the driver to be in position, with hands near or on the wheel and eyes on the road. That’s not self-driving.
It happened sooner than we all thought, but it happened. NHTSA also anticipated this. There wasn’t anything unique or surprising about the failure, and NHTSA had likely taken it into account. The agency is making sure the guidelines marry up with how it happened, but it hasn’t changed the arc of its work. It reaffirmed that the issues it is working on are very important.
Who is going to be harder to convince that this technology is safe? Regulators or the public?
It’s the responsibility of regulators to make sure all vehicles are compliant with safety standards and are not creating unreasonable risk in the products deployed. And then the agency looks at the benefits, which are so huge.
Any technology is going to have to win in the consumer marketplace. It has to be accepted by consumers. It is incumbent upon my members and the industry to make the case to consumers that this has wide benefits.
But people are irrational. They won’t always respond to stats about mobility or quality of life, or even saved lives. How do you convince them of the benefits?
As a former regulator and a person who worked on policy for over 15 years, [I’d say] it’s a combination of facts and properly conveying benefits and impacts, but also conveying information in a way that people will accept. You can’t convey the same information to every person the same way.
One example is child passenger safety. For years, the Latino community had lower usage of child safety seats. Some of the issues were cultural—in a lot of Latin cultures, people believe the safest place for a child is in a grandmother’s arms. So we involved the church, having priests bless car seats. We saw more reception to our message. We could tell them all the facts, but you have to recognize how people listen to messages and find a way to convey them accordingly.
It’s finding the approach that succeeds, not repeating yourself over and over about something that is absolutely, positively true when people won’t listen to you.
For more, read: Memo to Silicon Valley: People Fear the New and Unknown
Why do you think the Tesla Model S Autopilot fatality hit such a nerve with the public?
This is the greatest notion of what our future is, ever since that World’s Fair video from the ’50s. It’s magical, and people recognize we are on the edge of this. Anytime anyone writes about it, it sells papers.