News emerged this week that U.S. regulators were investigating the death of a driver using the Autopilot feature of a Tesla Model S. This was the first death of its kind, and while it’s first and foremost a tragic loss of life, it also points to an array of challenges, ethical conundrums, and unanswered questions about the quest for self-driving cars. What had been theoretical debates are suddenly starkly real.
By and large, there seems little expectation that the event in and of itself will slow progress towards vehicle automation. Tesla’s own stock suffered modest losses on the news, and analysts described the event as a mere “headline risk.”
That’s in part because, while an investigation is still underway, it so far does not seem that the Tesla Autopilot feature was the root cause of the accident. There is speculation that the Tesla driver may have been distracted, and perhaps speeding. More important still, most accounts of the incident have the semi-truck’s driver making a very dangerous turn across oncoming traffic. Autopilot clearly isn’t perfect, but the emerging picture is one in which two human drivers created a situation that an automated system failed to save them from, rather than one in which an automated system made a fatal mistake on its own.
More broadly, this incident has an air of inevitability: No one claims that automated systems will prevent all crashes, and as the company with the most advanced commercially available automation tech, Tesla more or less knowingly shouldered the risk of being in the spotlight when a crash like this occurred. Tesla has responded to the event in part by pointing out that this is the first crash after 130 million miles of Autopilot use, while U.S. drivers overall average about one death per 100 million vehicle miles traveled. Though Tesla's sample size is far too small to make the case on that comparison alone, it is at least an early indication that Autopilot may make the cars safer.
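To see why a single event settles so little, here is a rough back-of-the-envelope sketch (not from the article; it assumes fatal crashes follow a Poisson process and plugs in the mileage figures quoted above) that computes an exact confidence interval around the one-death-in-130-million-miles figure:

```python
# Back-of-the-envelope check using the hypothetical figures above: with only one
# fatality observed over ~130 million Autopilot miles, how wide is the plausible
# range for the true fatality rate? An exact Poisson (Garwood) confidence
# interval shows why one event proves little either way.
from scipy.stats import chi2

deaths = 1
miles = 130e6            # miles driven on Autopilot, per Tesla's statement
us_baseline = 1 / 100e6  # ~1 death per 100 million vehicle miles (U.S. average)

# 95% exact confidence interval for a Poisson rate with `deaths` observed events
lower = 0.5 * chi2.ppf(0.025, 2 * deaths) / miles
upper = 0.5 * chi2.ppf(0.975, 2 * (deaths + 1)) / miles

print(f"Point estimate: {deaths / miles:.2e} deaths per mile")
print(f"95% CI:         [{lower:.2e}, {upper:.2e}] deaths per mile")
print(f"U.S. average:   {us_baseline:.2e} deaths per mile")
# The interval runs from roughly 0.02x to more than 4x the U.S. average, so one
# crash cannot show Autopilot is safer (or more dangerous) than human drivers.
```

Under those assumptions, the plausible range comfortably contains the national average on both sides, which is the statistical content of the "sample size is not big enough" caveat.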
Nonetheless, the incident generates some risks. For one, it could lead to political pressure to tighten regulation of automation features, which is currently relatively limited. Tighter regulation could slow development of the technology. The legal fallout from the incident is also still uncertain—if someone can convince a judge or jury that Tesla is liable for the crash, the picture for Autopilot and automation could shift rapidly.
Questions of both regulation and liability could hinge on Tesla’s repeated insistence that Autopilot is a ‘beta’ product, and its many built-in warnings that drivers should keep their hands on the wheel even when it is active. While releasing a product that’s less than perfect is common practice in the tech world, where Elon Musk’s roots lie, this crash reminds us that things are different when it comes to cars. The ‘beta’ program has been crucial to helping Tesla improve Autopilot – but judges and lawmakers may ultimately have to decide whether that’s worth the tradeoff of risking driver lives on a lightly regulated and explicitly imperfect product.
Related to this is the question of whether 'partial automation' creates a unique sort of risk. As a Kelley Blue Book analyst put it to the Detroit News, "documented abuses of driver-assist technology" have been plastered all over sites like YouTube: videos of drivers operating their Teslas with no hands on the wheel, or even while reading a newspaper. It's fair to ask whether Tesla should have been more aggressive about policing these misuses of the system, perhaps by marketing or characterizing the technology itself more conservatively. Those changes could be coming soon.
At the most extreme end of that debate, Gizmodo's Alissa Walker argues that the crash proves that "fully autonomous vehicles are the only types of self-driving cars that make sense in our streets." That's a problematic argument, because various kinds of partial automation, such as automatic braking, are already on the road and saving lives. Despite some bold public statements, there's also little certainty that full vehicle automation is coming anytime soon, and keeping lane detection and other safety features out of cars until it arrives could hinder the development of the myriad technologies that must come together to make a fully autonomous car.
At least in the near term, what it all boils down to is this: Automobiles are powerful, dangerous machines. Maybe full automation will someday make them truly safe, preventing most, or even all, of the million-plus traffic deaths that occur worldwide each year.
But we’re not there yet.