By David Z. Morris
March 31, 2018

In a statement Friday night, Tesla reported that the Autopilot system was engaged when a Model X crashed into a highway divider on March 23rd. Further, Tesla says the unnamed male driver’s hands were “not detected on the wheel for six seconds prior to the collision,” and that he had received “several” automated warnings to keep his hands on the wheel earlier in the same drive.

The driver, according to data in the vehicle’s logs, had about five seconds to react before striking the divider, but did not. The driver was killed in the crash, which Tesla says was made worse by an unrepaired highway crash guard.

The case highlights a key and persistent worry about Tesla’s Autopilot system – not whether it is effective, but whether drivers understand its intended use. Despite its name, Autopilot is not a full self-driving system, and Tesla has implemented a variety of safeguards and alerts to remind drivers to keep their hands on the wheel and their eyes on the road when Autopilot is activated.

But this is not the first time a crash apparently resulted when a driver did not heed those warnings. A fatal May 2016 crash was caused in part by overreliance on Autopilot, according to a National Transportation Safety Board investigation. Autopilot was also active during a nonfatal Model S crash in California this January.

Despite those incidents, statistics show that Autopilot increases driver safety overall. The U.S. Department of Transportation has found that Autopilot reduces crash rates by 40%, and Tesla says Autopilot-equipped vehicles are 3.7 times less likely than average to be involved in a fatal accident.

Still, the latest crash has helped hammer the carmaker’s stock, which dropped more than 15% earlier in the week before recovering slightly.