We already have the technology we need.
When Google launched its self-driving car project seven years ago, many saw it as an inconceivable moonshot. Today that idea fuels a global arms race for technology and talent, with even the Queen of England anteing up to claim a piece of the reported $7 trillion pie.
While tech superpowers Uber, Google, Tesla, and Lyft dominate the news cycles, traditional car manufacturers are running expensive Silicon Valley research centers, expanding self-driving fleets, forging partnerships, and plowing billions into acquisitions. Name a major player in either the tech or transportation ecosystem, and they're probably investing in autonomous, self-driving technology.
This is all for good reason: Autonomous vehicles (AVs) are our future. While significant technical challenges remain unsolved, AV technology is improving rapidly. Soon technological capability won’t be the greatest impediment to adoption; societal friction will be. This friction will delay full autonomy for at least a decade, or however long it takes for the tech community (which hasn’t always been particularly empathetic) to collaborate with policymakers, regulators, insurance providers, and consumer advocates to address the significant social, regulatory, and legal challenges AVs will create.
Autonomous vehicles can dramatically reduce vehicular injuries and deaths, but we don't yet know how people will react to the inevitable mistakes that will occur. Tesla's self-driving technology was at first blamed and then cleared for a fatal crash in Florida, and Uber's self-driving cars ran at least six red lights in San Francisco last year. Even though society accepts car accidents as an unavoidable (and horrible) risk associated with mobility, machine error will be judged much more harshly than human error because of the expectation of precision.
With artificial intelligence (AI) playing an increasing role in AV technology, self-driving vehicles will inevitably make “wrong” choices when presented with options of who will live and who will die. Many companies and institutions are tackling the ethical side of decisions made by self-driving cars, such as MIT’s Moral Machine (a judgment crowdsourcing platform) and Crowdflower’s approach to machine learning with human-in-the-loop AI. Over time, more segments of society will need to contribute to these morally and legally challenging decisions.
AVs will save lives, but their adoption could result in massive job loss. By some estimates, there are four million taxi, delivery, bus, and truck drivers in the U.S.—all of whose jobs could be automated. And that doesn’t even take into account the management and support staff for those jobs, nor the trickle-down impact on the gas stations, motels, retail outlets, and restaurants that rely on trucking routes.
AVs can't vote; workers can. Policymakers will need serious political will to prioritize saving lives and future technology over real and perceived job loss, especially if the economic headwinds continue to blow in a populist direction. The Trump administration has committed to updating the Obama-era self-driving guidelines in the coming months, but the details have yet to emerge.
Another risk accompanies the emergence of AVs. Remember the Wired journalist who allowed hackers to take over his car—with him in it? That hack succeeded even though only a few of the car's components were computer-controlled. AVs, by contrast, are run entirely by software. An alert driver could override the self-drive function if they realized something was going wrong, but if no one's at the wheel, there's no human safety backup. While we haven't seen any large-scale automotive hacks yet, we did get a preview of the security risks inherent to Internet-connected automobiles with last year's high-profile hack of household Internet of Things (IoT) devices. To manage the risk of a more destructive hack that could weaponize automated vehicles, significant advances in cybersecurity will be required, along with a commitment from the entire connected automotive ecosystem to adopt state-of-the-art security technology.
The path to Level 5 autonomy, a vehicle that performs at least as well as a human driver under all conditions, will be incremental and evolutionary, with drivers and autonomy coexisting for a long period while technology, regulations, infrastructure, and public opinion sort out the moral and ecosystem issues. AVs are inevitable, but fortunately for consumers, the technologies required for ubiquity will improve the safety, efficiency, and enjoyment of the ride along the way—whether there's a person, a machine, or a combination of the two in the driver's seat.
Ajay Chopra is a general partner at Trinity Ventures.