Cruise CEO Kyle Vogt knows first-hand how difficult it is to create self-driving cars that are as capable as humans.
“You’ve got these metal machines bouncing around on four bags of air we call tires in an urban environment where people are breaking the laws and not acting predictably,” he says.
In February, General Motors-backed autonomous vehicle unit Cruise debuted a driverless taxi service in San Francisco after initially testing it on employees. It follows Alphabet’s Waymo subsidiary, which premiered autonomous taxis in Phoenix in 2020. Meanwhile, in December, autonomous car company Argo AI, Lyft, and Ford (an investor in Argo AI along with Volkswagen) debuted autonomous rideshares in Miami.
These self-driving taxi services come after years of inflated expectations that autonomous cars would soon be widely available. Self-driving cars, it turns out, are more difficult to perfect than originally thought.
The deep learning that powers self-driving cars must learn to handle so-called edge cases, like a cat jumping out into a busy road. There’s also “the less glamorous work” required to make self-driving cars safe, Vogt says. Computers inside autonomous cars “can be like a laptop computer and just freeze up,” he says. If something goes wrong, the car must know how to detect the error and respond, like pulling over to the side of the road.
Additionally, autonomous cars must know how to deal with the “social aspects of driving,” like reacting to a police car that flashes its lights to signal that the self-driving car should pull over, Vogt says.
Cruise had to ensure that its vehicles could handle as many driving scenarios as possible before it began testing the service with employees in November. Since then, the company has provided “hundreds of rides,” with half of its trips since Feb. 1 being with members of the public, a Cruise spokesperson says.
Vogt acknowledges that the company’s new ride-hailing service is “small-scale right now,” but he says it will soon expand, potentially to other cities. Currently, California’s Department of Motor Vehicles only allows Cruise to operate its taxi service from 10 p.m. to 6 a.m., when traffic is light and there are few pedestrians. The cars must also keep their speed under 30 miles per hour and can’t operate in heavy rain or heavy fog.
While Cruise, Waymo, and other companies have been testing autonomous cars, electric car maker Tesla has been offering customers “full self-driving” capabilities as part of a test since 2020. Some lawmakers argue that Tesla’s marketing is misleading because it can give consumers the false impression that the technology lets them stop paying attention to the road.
Vogt says that he’s okay with describing “advanced driver support systems” like Tesla’s, which still require people to pay attention to the road, as “self-driving.” In his opinion, consumers can tell the difference between a car with self-driving features and cars like those built by Cruise that require no human driver.
Ultimately, however, he prefers the term “driverless” when referring to fully-autonomous cars.
“The distinguishing thing in a driverless car is the car truly works for you,” he says. “You sit in the backseat and kick back and do nothing.”
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com
P.S. If you want to get more of Fortune’s exclusive interviews, investigations, and features in your inbox, then sign up for Fortune Features so you never miss out on our biggest stories.
A.I. IN THE NEWS
Facial recognition goes to war. Ukraine is using the facial-recognition software sold by the controversial startup Clearview AI, the company’s CEO told Reuters. Clearview AI is giving Ukraine free access to its database of faces so officials can “vet people of interest at checkpoints,” the report said. The company said that it has gathered 2 billion photos from the Russian social media service VKontakte to help power its facial-recognition software.
Microsoft steps up its quantum game. Microsoft said it has demonstrated the ability to produce a phenomenon known as Majorana zero mode, a potentially major milestone. Quantum computing relies on so-called qubits that have the ability to more efficiently encode data than current transistor-based computer chips. But researchers are divided about the best way to produce stable qubits. Microsoft said that its breakthrough could pave the way for so-called topological quantum computers, which would be powered by a new kind of qubit that has only been theorized to exist.
Amazon heads to Virginia. Amazon is partnering with Virginia Tech to create the Amazon-Virginia Tech Initiative for Efficient and Robust Machine Learning, intended to offer doctoral students fellowships and create research projects dedicated to machine learning. Amazon previously selected Arlington, Va., as its second corporate headquarters, which it calls HQ2.
Healthy A.I. Cedars-Sinai Medical Center has created its Artificial Intelligence in Medicine division for researching how A.I. can be used to solve medical problems and improve clinical care. “Through the use of applied artificial intelligence, we can solve existing gaps in mechanisms, diagnostics and therapeutics of major human disease conditions which afflict large populations,” the A.I. unit’s founder, Paul Noble, said in a statement.
EYE ON A.I. TALENT
Salesforce hired Juan Perez to be the business software giant’s chief information officer and a member of its executive leadership team. Perez was previously the chief information and engineering officer at UPS.
Xplorie picked Caleb Yaryan to be the vacation rental company’s chief technology officer. Yaryan was previously senior product owner at the real estate company BoomTown.
Eventus Systems chose Josh Bosquez to be the financial services software company’s CTO. Bosquez was previously the CTO of Armor Cloud Security.
EYE ON A.I. RESEARCH
A.I.’s dual-use problem. Researchers from Collaborations Pharmaceuticals, King’s College London, and the Swiss institute Spiez Laboratory published a paper in Nature Machine Intelligence about how the same A.I. used to power drug discovery can be used for nefarious purposes like developing new biochemical weapons. The paper explains how easy it would be for researchers to use machine learning to design chemical warfare agents, which “should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community.” The paper is an example of A.I.’s dual-use problem, in which researchers fail to anticipate how the technology they create could be repurposed to cause harm.
From the paper:
For us, the genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies?
FORTUNE ON A.I.
Rivian’s lack of history is hurting its chances with chipmakers—leaving Amazon facing a $10 billion hit—By Christiaan Hetzner
Some 61% of women say online harassment is a problem. Google Jigsaw wants to give them back control—By Emma Hinchliffe
Why Stryker is going all in on A.I. in healthcare—By Susie Gharib
China’s tech hub Shenzhen locks down 17.5 million residents, closing Apple factories and risking chaos in global supply chain—By Eamon Barrett
Augmented reality specialist Magic Leap is back with a new headset as interest in the metaverse soars—By Jonathan Vanian
BRAIN FOOD
Deep skepticism about deep learning. A.I. expert Gary Marcus wrote an opinion article for the science publication Nautilus about the limitations of deep learning, the technique in which neural-network software learns from data in order to make predictions and decisions. In recent years, Marcus has positioned himself as an A.I. contrarian who believes that deep learning, while a powerful tool, has been overhyped.
He wrote that deep learning is useful “when all we need are rough-ready results, where stakes are low and perfect results optional.” In other words, deep learning is good for teaching computers to recognize cats in photos, but it makes too many mistakes to be trusted in higher-stakes scenarios, like “radiology or driverless cars,” he writes. He believes more powerful A.I. will combine a variety of techniques rather than rely on deep learning alone.
From the article:
Because general artificial intelligence will have such vast responsibility resting on it, it must be like stainless steel, stronger and more reliable and, for that matter, easier to work with than any of its constituent parts. No single AI approach will ever be enough on its own; we must master the art of putting diverse approaches together, if we are to have any hope at all. (Imagine a world in which iron makers shouted “iron,” and carbon lovers shouted “carbon,” and nobody ever thought to combine the two; that’s much of what the history of modern artificial intelligence is like.)