The road to fully autonomous cars is less like a well-maintained freeway and more like a dirt path.
Just three years ago, enthusiasm about self-driving cars was approaching delusional, with one financial analyst making a “conservative prediction” that there would be 10 million self-driving cars on the road by 2020. The combination of advances in artificial intelligence, the declining costs of sensors, and billion-dollar investments by old-school car makers and Silicon Valley hotshots led many to assume that people would soon be able to nap while their cars shuttled them around town.
Obviously, that didn’t pan out. As The New York Times recently said, “the technology is still far from ready, and many investors are wary of dumping more money into it.”
Austin Russell, the CEO of Luminar, a startup that specializes in the lidar tech that helps self-driving cars detect objects around them, acknowledged to Fortune that “the assumptions didn’t come true.” The expected timeline for widespread use of self-driving cars was off, and the cost of developing the complicated tech was severely underestimated.
But progress in autonomous automobiles hasn’t stalled, Russell explained. His company, for instance, recently partnered with auto giant Volvo (also a Luminar investor) to use its technology in future vehicles. The hope is that by 2022, Volvo will start selling lidar-equipped cars that will drive themselves on freeways (not city streets), allowing people to watch movies, read books, or snooze while in the driver’s seat, he said.
Russell thinks other car makers will follow Volvo’s lead, and that by 2025, there will be several brands of self-driving cars “you can buy on dealer lots.” He also believes that traditional auto companies, rather than tech startups, are more likely to lead the autonomous push.
Auto giants have the cash to invest billions of dollars, unlike startups, which have more limited funds. Indeed, Stefan Seltz-Axmacher, the CEO of the now-shuttered self-driving truck company Starsky Robotics, cited the “exponential increases” in the cost of improving the necessary artificial intelligence systems as one of the reasons his company collapsed.
Still, Russell may be a self-driving car expert, but he’s no soothsayer. Yes, auto giants pursuing autonomous vehicles like Volvo, General Motors, and Ford have deep pockets, but they aren’t immune to events out of their control—the coronavirus pandemic being one such example. Earlier this month, GM’s Cruise self-driving car subsidiary reportedly laid off nearly 8% of its workforce to cut costs amid the COVID-19 outbreak.
“In this time of great change, we’re fortunate to have a crystal clear mission and billions in the bank,” a Cruise spokesperson said in a statement. “The actions we took reflect us doubling down on our engineering work and engineering talent.”
Companies creating the future for autonomous automobiles are likely to get car sick on the way.
A.I. IN THE NEWS
Cuts come to IBM’s A.I. and Watson groups. IBM laid off an unspecified number of workers in its Watson and A.I. unit, reported tech publication The Register. IBM told the publication that it isn’t releasing specific numbers about its layoffs "for competitive reasons." From the article: The question therefore remains open as to whether the layoffs are focused on IBM keeping costs low to cope with post-pandemic economic woes, doubling down on cloud, managing the decline of its services business, cutting staff from AI-driven initiatives that aren't doing well, or all of the above.
Sizing up the new A.I. chips on the market. Nvidia and Intel are heavyweights offering new A.I. chips, according to tech publication ZDNet. From the article: In fact, Nvidia's software and partner ecosystem may be the hardest part for the competition to match. The competition is making moves too, however. Some competitors may challenge Nvidia on economics, others on performance.
Protecting humanity. CNBC explored the work of Professor Nick Bostrom of the University of Oxford’s Future of Humanity Institute to research A.I. that doesn’t harm people. From the article: Asked if he is more or less worried about the arrival of superintelligent machines than he was when his book was published in 2014, Bostrom says the timelines have contracted. “I think progress has been faster than expected over the last six years with the whole deep learning revolution and everything,” he says.
Automation at half-speed. The New York Times highlighted research from computer scientist Ben Shneiderman of the University of Maryland about the potential impact of automation on workers and humankind. Shneiderman is one of several experts urging software engineers building powerful machine-learning technologies to keep humans “in the loop” in their overall design. From the article: Dr. Shneiderman has challenged the engineering community to rethink the way it approaches artificial intelligence-based automation. Until now, machine autonomy has been described as a one-dimensional scale ranging from machines that are manually controlled to systems that run without human intervention.
Open-sourcing equality. LinkedIn’s software for measuring inequality across its products is now available for free as open source. For a deep look at the testing tools LinkedIn uses to gauge how the company’s products could potentially negatively impact minority groups, check out Jeremy Kahn’s article from March.
EYE ON A.I. TALENT
Google hired Matt Pancino to be the search giant’s Asia Pacific director of industry solutions, reported tech publication IT News. Pancino was previously the chief technology officer of the Commonwealth Bank of Australia.
The city of Los Angeles picked Jeanne Holm to be its chief data officer, Techwire reported. Holm was previously the senior technology advisor to L.A.’s mayor Eric Garcetti.
Creighton AI hired Ryan Wilson as the company’s partner and head of North America, reported financial publication Hedgeweek. Wilson was previously the CEO of Lateef Investment Management.
EYE ON A.I. RESEARCH
A.I. and the future of healthcare. Medical journal The Lancet published a paper about A.I. and the future of healthcare authored by researchers from the Heilbrunn Department of Population and Family Health, Columbia Mailman School of Public Health in New York; Spark Street Advisors; and the Department of International Health, Johns Hopkins Bloomberg School of Public Health.
The paper surveys various peer-reviewed A.I.-related healthcare papers to assess how cutting-edge machine learning is projected to impact the delivery of healthcare services, particularly to low- and middle-income communities, or LMICs. As the authors write, “AI does not need to be held to a higher standard of research; however, its unique complexities, including the requisite use of large datasets and the opaque nature of some AI algorithms, will require approaches specifically tailored to interventions and consideration of how efficacy and effectiveness are assessed.”
From the paper: Although most AI investigators report necessary approvals by institutional review boards, indicating that the studies were all done ethically, only a few described how the research teams addressed issues of informed consent or ethical research design in tools that used large datasets and electronic health records. Reporting on ethical considerations would help future researchers to address these complex yet essential issues.
Similarly, only a few studies reported on the usability or acceptability of AI tools from the provider or patients’ perspective, despite acknowledging that usability is an important factor for AI interventions, particularly in LMICs.
FORTUNE ON A.I.
Facebook makes a bigger push into shopping with new online storefronts for businesses—By Jeremy Kahn
Uber’s new plan: Slim down to get ahead—By Danielle Abril
The boss in your bedroom: As workplace surveillance spreads, what are your rights?—By Jeff John Roberts
Amazon was built for the pandemic—and will likely emerge from it stronger than ever—By Brian Dumaine
Grading A.I. Artificial intelligence expert Kai-Fu Lee has written an essay in Wired about A.I.’s potential impact on healthcare and grades how A.I. has been doing during the coronavirus pandemic.
From the article:
Truth be told, AI has not had a particularly successful four months in the battle of the pandemic. I would give it a B-minus at best. We have seen how vulnerable our health care systems are: insufficient and imprecise alert responses, inadequately distributed medical supplies, overloaded and fatigued medical staff, not enough hospital beds, and no timely treatments or cures.
A.I. is so far a B-minus student that has made some impact in a few areas, like spotting initial coronavirus outbreaks. I wish Lee had explained why the hyped technology was not able to address the problems he laid out after grading the tech. It appears he’s laying the blame on the inadequacies of our current health care system rather than on any particular shortcoming of current deep learning technologies. Ultimately, the optimistic Lee believes that “AI will help ensure we will be better prepared for the next pandemic,” and says he can “see a clear roadmap of how AI, accelerated by the pandemic, will be infused into health care.”
Update: Tuesday 12:45 PM PT. Essay updated with statement from Cruise spokesperson about its layoffs and clarifies GM's role.