Our autonomous future depends on educating drivers

The heartbreaking headlines have become all too familiar.

Tesla driver dies in first fatal autonomous car crash in US

Recent self-driving car crash shares similarities to Williston crash of 2016

The dates and details of these incidents vary widely, but a single thread unites them: confusion about the driver’s role in automated vehicle technology. Not only did this confusion contribute to the crashes themselves, but by infecting public officials and journalists, it also shaped headlines that spread the same confusion to new potential victims.

Put bluntly: Uncertainty about the human role in driver assistance systems on new cars available for purchase today—none of which are actually autonomous—is a self-perpetuating educational challenge with life-and-death consequences. 

As vehicles become increasingly adept at handling some driving tasks (braking in an emergency, or warning you when you drift from your lane), drivers grow more reliant on those capabilities and less focused on their own role in managing the vehicle. We know from decades of research that this kind of “vigilance task” is hard for humans, almost inevitably leading to inattention and distraction.

SAE International has created a system for classifying automated technologies in vehicles ranging from level 0 (no driving automation) to level 5 (full driving automation). This taxonomy is used nearly ubiquitously in the industry, but many consumers don’t understand that every car for sale today—regardless of its impressive capabilities—still requires the full attention of a human driver. Tesla’s Autopilot, like all the automated driver assistance features available in new cars for sale to the general public, is what is known as a level 2 driver assistance system. This means that even though the major vehicle controls are automated, including steering, acceleration, and braking, the human driver must actively monitor the automation and intervene when it makes a mistake—or be held liable for the consequences. 

Safe usage of the system, then, depends as much on what is happening in the human’s brain as in the car’s software, bringing a new importance to how these systems are presented to buyers. Halfway down the Autopilot page on Tesla’s website, the company admits that “current Autopilot features require active driver supervision and do not make the vehicle autonomous,” a disclaimer the automaker includes on the in-vehicle screen, as well. However, this clear (if not entirely conspicuous) disclaimer is muddled by confusing and contradictory messages from Tesla itself and even ambiguous reporting from media outlets like the headlines referenced at the start of this piece (which erroneously suggest that an “autonomous” or “self-driving” car was involved).

The industry and its nonprofit partners (including PAVE, the organization I lead) have made attempts to address the confusing language surrounding automated vehicle technologies, including proposing a standard nomenclature for driver assistance features aimed at “clearing the confusion” that is caused by a bewildering muddle of brand names and mixed messages. Still, the persistence of crashes involving over-trust of driver assistance systems shows that much educational work needs to be done.

The tragic irony of these confusion-fueled crashes is that the very same confusion that contributed to them also leads to their misreporting as involving “autonomous” or “self-driving” vehicles. Not only does this spread confusion about the crash itself, potentially contributing to more crashes in the near term, but it also undercuts one of the great long-term opportunities in road safety: actual fully autonomous vehicles.

Fully autonomous vehicles, classified as SAE level 4 or higher, hold the potential to massively reduce the crashes that claim the lives of tens of thousands of Americans each year—but only if consumers are willing to trust the technology with their lives. Each time a crash involving a level 2 driver assistance system is misreported as an autonomous vehicle crash, that trust erodes just a little bit more.

More than five years after the first Autopilot-involved crash, we still have a long way to go in addressing this life-and-death educational challenge. That challenge is steep, but at its core, the message is simple: No car available for sale to the general public today is truly autonomous or self-driving. This simple message must be at the heart of all communications about driving automation technology, from driver education and journalism to our casual conversations.

Few of our everyday decisions can contribute to real-world life-and-death outcomes as directly as the ways we talk about this technology. We have seen past examples of education campaigns about traffic safety issues that have made a significant impact—such as reducing drunk driving deaths or increasing seat belt usage—and with more precious lives and a safer autonomous future at stake, we can do it again.

Tara Andringa is the executive director of Partners for Automated Vehicle Education (PAVE), a nonprofit coalition focused on engaging with the public about automated vehicles and their potential to improve the safety, mobility, and sustainability of our transportation system. Ms. Andringa has led the group since its public launch in January 2019.
