Feds Say Safety Is the Key to the Future of Autonomous Cars

July 19, 2016, 8:15 PM UTC
Photograph by Bloomberg via Getty Images

Weeks after the world’s first known fatality in a car using autonomous technology, the U.S. Secretary of Transportation used an industry event to emphasize that the key to a successful rollout of autonomous car technology over the coming years will be a focus on safety.

“We need industry to take the safety aspect of this very seriously,” said Anthony Foxx to an audience of over a thousand automaker executives, researchers, startup entrepreneurs, and policy-makers at the fifth annual Automated Vehicle Symposium in downtown San Francisco on Tuesday morning.

Foxx added that while autonomous car technology needs to focus on making driving safer and reducing car crashes, the technology itself also needs to be implemented safely.

“We don’t want to replace crashes with human factors with large numbers of crashes caused by systems,” Foxx said. He added that while there are a lot of reasons as to why the industry is moving toward autonomous vehicles, “if safety isn’t at the very top of the list, we’re not going to get very far.”

Foxx also noted that his department is considering ways in which pre-market approval steps could be used to make the rollout of the tech safer for both industry and consumers. The department is expected to release more guidance on autonomous car tech later this summer.

Foxx delivered his comments at a time when autonomous car safety has emerged as a hot-button issue. Most industry watchers agree that autonomous cars are coming quickly, but simultaneously encouraging and regulating the tech as it emerges will be a tricky task.

At stake is a massive new industry where computers and cars merge and the world’s biggest automakers and Internet companies hope to generate major revenue. Tesla, Google, Apple, Ford, Nissan, GM, hundreds of Silicon Valley startups, and many more are all angling for a piece of the autonomous car market.

Three weeks ago, electric car maker Tesla Motors (TSLA) revealed that the National Highway Traffic Safety Administration has been investigating a fatal crash that occurred in May in Florida in a car with Tesla’s autopilot software engaged. Regulators have not yet determined whether Tesla’s autopilot software performed as expected or whether it played a role in the accident.

However, the incident demonstrates that as more and more drivers start using the tech, there will be many hurdles to overcome. Tesla has about 100,000 cars on the road, roughly 70,000 of which are so far capable of using autopilot. The company aims to build 500,000 cars a year by 2018, and all of those will likely have autopilot enabled.

But the laws governing who is at fault in autonomous car crashes haven’t yet been worked out, partly because different automakers are using the tech in different ways.


Many of the big automakers are using autonomous tech mainly as an aid to driving right now, like a fancier cruise control, while other companies like Google are focused on building fully self-driving cars. There are six levels of autonomous car tech: level zero means no autonomous tech, level four is enabled for full autonomy under certain conditions, and at level five a human can’t even take over the driving.

Tesla has aggressively rolled out its autonomous software while it’s still in “beta” mode, enabling customers to let their cars change lanes, steer, and be summoned while relying on the car’s computer. Tesla CEO Elon Musk has said that Tesla’s cars could be fully autonomous within a year and a half.

In the wake of the fatal crash and reports of other crashes involving Tesla’s autopilot, some critics, including Consumer Reports, have called for certain aspects of the autopilot software to be disabled and for the company to rethink the autopilot name, arguing that it gives drivers a false sense of security.

Level 3 autonomy, in which the car switches between full autonomy and full human control, could be particularly hard to navigate and will require extensive user experience design and engineering. Google (GOOG) decided to skip this level entirely and focus on fully self-driving cars because the hand-off in level 3 appeared so difficult.


Tesla CEO Elon Musk has said that the company will not disable autopilot but will continue to improve the software. Tesla’s cars use radar, sensors, cameras, GPS, and wireless connections to sense their surroundings and steer, but they don’t use lidar, the laser-based sensing system employed by Google.

During his talk on Tuesday, Secretary Foxx acknowledged that “autonomous vehicles are coming,” whether the world is “ready or not.” He said that industry and government need to “prepare our ecosystem to integrate these new types of vehicles into the bloodstream of American infrastructure.”

If companies make major mistakes with autonomous tech at this early stage, such as installing software before it’s ready, withholding performance data, or mis-marketing systems, it could trigger a backlash against the entire industry.
