
Self-driving cars are returning to work too

July 7, 2020, 3:14 PM UTC

Shelter-in-place orders have forced many companies developing self-driving cars to stop testing their vehicles. That’s bad news for the nascent industry, because the machine-learning systems the cars rely on learn best from real-world road testing.

Consider ride-hailing company Lyft, among the many businesses experimenting with self-driving cars. Last week, the company said it would resume testing its autonomous vehicles on a test track in Palo Alto, Calif., after a three-month pause due to the pandemic.

During those three months, Lyft didn’t leave its self-driving car project stuck in idle. Instead of driving on pavement, it trained its machine-learning systems on simulated roads intended to mimic the real world.

The idea of using simulated driving to train autonomous vehicles isn’t new. But the coronavirus pandemic has led Lyft and others, like Alphabet’s Waymo and GM’s Cruise self-driving car subsidiary, to beef up their simulation technology.

Sameer Qureshi, a Lyft self-driving car director, told Fortune that the next few weeks are important because Lyft will be able to compare its simulated testing with real driving and, in the process, learn how to make its simulations more like the physical world.

A typical simulation may require an autonomous vehicle to react properly to a pedestrian crossing the street (i.e., it must recognize the individual as a human and then stop). Even this relatively simple scenario involves numerous variables that, if the car fails to take them into account, could cause a collision.

For instance, a real-life self-driving car’s brakes may be subjected to “wear and tear” after repeated use, affecting how that car stops, Qureshi said. Driving simulators have difficulty accounting for such a nuance, but it’s a crucial one that can mean the difference between a self-driving car braking too hard (causing passengers to spill their coffee) and too gently (ending in a collision).

“Sometimes simulation does not 100% match what the cars do in real life,” Qureshi said.
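
Lyft hasn’t published how its simulator models such variables, but a toy calculation makes Qureshi’s point concrete. The sketch below, using made-up numbers and an assumed linear brake-wear model, runs the pedestrian-stop scenario at three levels of wear; a detection distance that is safe with fresh brakes ends in a collision with worn ones.

```python
# A toy calculation (not Lyft's simulator): how one under-modeled variable,
# brake wear, can flip the outcome of the pedestrian-stop scenario.
# All numbers and the linear wear model are illustrative assumptions.

def stopping_distance(speed_mps: float, brake_wear: float) -> float:
    """Meters needed to stop from speed_mps, with brake_wear in [0, 1]."""
    max_decel = 8.0 * (1.0 - 0.3 * brake_wear)  # worn brakes decelerate less
    return speed_mps ** 2 / (2.0 * max_decel)   # v^2 / (2a) kinematics

pedestrian_distance = 18.0  # meters ahead when the pedestrian is detected
speed = 15.0                # ~54 km/h

for wear in (0.0, 0.5, 1.0):
    d = stopping_distance(speed, wear)
    outcome = "stops in time" if d < pedestrian_distance else "collision"
    print(f"brake wear {wear:.0%}: needs {d:.1f} m -> {outcome}")
```

With fresh brakes the car stops in about 14 meters; at full wear it needs about 20, overshooting the pedestrian. A simulator that omits the wear term would call all three runs safe.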

Returning to the road is a big deal for Lyft, whose self-driving car technology lags behind that of companies like Waymo and Cruise, according to many analysts. Qureshi acknowledged that “self-driving cars are hard,” but he argued that Lyft has some advantages that may be overlooked.

One involves the data the company collects from its ride-hailing service, which it can use to fine-tune its autonomous driving technology, he said. For example, it knows which streets are the busiest in certain cities and how rainstorms affect traffic patterns.

“So I can’t argue that we are way ahead of everyone else,” Qureshi said. “But we made some significant amounts of progress in the last three years we’ve been around.”

Still, don’t expect Lyft, or any other company, to put its self-driving taxis into commercial service anytime soon—the technology still needs further development. As Qureshi says, “It will be a long time before autonomous vehicles completely replace human drivers.”

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

Stop “recognizing” faces in Canada. Controversial startup Clearview AI will no longer sell its facial-recognition technology in Canada, the Office of the Privacy Commissioner of Canada said in a statement. The commissioner said that while an investigation into Clearview AI by “privacy protection authorities for Canada, Alberta, British Columbia and Quebec” is still open, the startup has been cooperating with the government, and its contract with the Royal Canadian Mounted Police, Clearview’s “last remaining client in Canada,” is now indefinitely suspended. Clearview AI has come under fire from civil rights activists and some lawmakers who are concerned that the company’s technology, fueled in part by a massive database of people’s faces scraped from the Internet, poses substantial privacy risks.

Machine learning goes wireless. A team of researchers from the University of Southern California and the University of California, Berkeley has won a three-year grant from the National Science Foundation and Intel to develop machine-learning systems that can work with next-generation wireless networks, IEEE Spectrum reported. The researchers’ project involves federated learning, which, as my Eye on A.I. colleague Jeremy Kahn explained, is a “privacy-preserving machine learning technique” that “functions as a network, where each node retains all its own data locally and uses that data to train a local A.I. model.” The IEEE Spectrum article said that Intel and the NSF’s research program has so far awarded $9 million to 15 research teams exploring machine learning and wireless networks.
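
To make that description concrete, here is a minimal sketch of federated averaging, a standard way to implement the idea Kahn describes. The toy data, the one-weight linear model, and all hyperparameters are illustrative assumptions, not details from the funded project.

```python
# Minimal federated averaging: each node keeps its data local, trains its
# own model, and shares only model weights, which a server averages.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, x, y, lr=0.01, steps=100):
    """Gradient descent on one node's private data; the data never leaves."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean squared error
        w -= lr * grad
    return w

# Three nodes, each holding private data drawn from y = 3x + noise.
nodes = []
for _ in range(3):
    x = rng.normal(size=50)
    nodes.append((x, 3 * x + rng.normal(scale=0.1, size=50)))

w_global = 0.0
for rnd in range(5):  # communication rounds
    local_ws = [local_train(w_global, x, y) for x, y in nodes]
    w_global = float(np.mean(local_ws))  # the server sees weights, not data
    print(f"round {rnd}: w = {w_global:.3f}")  # converges toward 3
```

The privacy-preserving property comes from what crosses the network: raw data stays on each node, and only the trained weights are aggregated.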

We need to slow down. Representatives from two professional radiological organizations, the American College of Radiology and the Radiological Society of North America, have written a letter expressing their concerns about so-called autonomous radiology A.I., which the FDA defines as “software in which AI/ML is being used to automate some portion of the radiological imaging workflow (e.g. detection, diagnosis, reporting).” The letter’s authors wrote:

While we understand the desire among industry and others to swiftly advance autonomous AI, our organizations strongly believe it is premature for the FDA to consider approval or clearance of algorithms that are designed to provide autonomous image interpretation independent of physician expert confirmation and oversight because of the present inability to provide reasonable assurance of safety and effectiveness.

Set sail on the data lake. Charles Chen, the A.I. director for the U.S. Department of State, gave a shout-out during a webcast to the trendy “data lake” technology, which he credited as being “certainly essential for the success of machine learning and AI,” IT publication MeriTalk reported. A data lake generally refers to a massive repository that stores data, often in raw form, from numerous sources so that multiple employees and systems can access it. “Accuracy of machine learning requires very large datasets as a training model,” Chen reportedly said.

EYE ON A.I. TALENT

David's Bridal has hired Danny Luczak to be the retail outfit’s chief technology officer. Luczak was previously the vice president of application development at Public Storage.

Hitachi Vantara Federal has promoted Gary Hix to be the IT firm’s CTO. Hix was previously Hitachi Vantara Federal’s director of engineering and a manager of IBM’s storage services solution team.

EYE ON A.I. RESEARCH

Settle down now, Alexa. Researchers from the Technical University of Darmstadt in Germany, the University of Paris-Saclay, and North Carolina State University published a paper about technology that can help identify words that cause voice-activated digital assistants like Amazon’s Alexa to unintentionally wake up, leading to “unexpected audio transmission.” Some of the words the researchers identified as having a high probability of accidentally activating Alexa during their tests include alexiteric, alissa, alosa, alyssa, barranca, elector, electra, elissa, and elixir.

The researchers wrote, “Based on these findings, it is unsurprising that Alexa-enabled devices are often triggered unintentionally, leading to private conversations and audio being transmitted outside the user’s home.”

VentureBeat notes in an article about the research that the technology “was built on a Raspberry Pi for less than $40,” and “operates by periodically generating audible noises when a user isn’t home and monitoring traffic using a statistical approach that’s applicable to a range of voice-enabled devices.”

Anomaly detection gets really deep. Researchers from the University of Adelaide and the University of Technology Sydney published a paper that is essentially a review of how deep learning can be applied to anomaly detection, a data-science technique that helps researchers better understand data points that deviate from the norm. It’s an area of interest in the field of A.I. because while deep learning is generally good at identifying major patterns in vast amounts of data, it can struggle to spot the outliers.
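
The survey covers deep methods; as a compact stand-in, the sketch below uses a linear autoencoder (equivalent to PCA) to show the principle many of them share: learn to reconstruct “normal” data, then flag points that reconstruct poorly. The toy data and the 99th-percentile threshold are illustrative assumptions.

```python
# Reconstruction-error anomaly detection with a linear autoencoder (PCA):
# fit on normal data only, then flag points the model cannot reconstruct.
import numpy as np

rng = np.random.default_rng(1)

# Normal data lies near a 1-D line inside 2-D space; anomalies do not.
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0]])
normal += rng.normal(scale=0.05, size=normal.shape)
anomalies = rng.uniform(-3, 3, size=(5, 2))

# "Train": fit the top principal direction on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[:1]  # the learned one-dimensional "code"

def reconstruction_error(points):
    """Project onto the learned subspace and measure what is lost."""
    centered = points - mean
    recon = centered @ component.T @ component
    return np.linalg.norm(centered - recon, axis=1)

threshold = np.percentile(reconstruction_error(normal), 99)
print("anomaly errors:", reconstruction_error(anomalies).round(2))
print("flagged:", reconstruction_error(anomalies) > threshold)
```

Deep versions replace the linear projection with a neural encoder-decoder, which lets the notion of “normal” be nonlinear, but the outlier test is the same reconstruction-error comparison.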

The paper’s authors write that “other interesting applications” of fusing deep learning with anomaly detection “include detection of adversarial examples, anti-spoofing in biometric systems, and early detection of rare catastrophic events (e.g., financial crisis and other black swan events).”

FORTUNE ON A.I.

Why shares of Tesla, Nio, and other electric-vehicle makers are skyrocketing—By Aaron Pressman

Civil rights groups urge Microsoft to end NYPD partnership—By Jonathan Vanian

Controversial surveillance startup Anduril gets a $1.9 billion valuation—By Lucinda Shen

Silicon Valley companies are rethinking in-office perks for a post-pandemic era—By Michal Lev-Ram

Samsung made a closet that disinfects your clothes—By Rachel King

BRAIN FOOD

A.I. at the end of your life. As hospitals and healthcare clinics continue to implement A.I. technologies for patient care, new ethical questions are being raised about how to communicate the use of those technologies to patients. For instance, several health care organizations are experimenting with A.I. systems that analyze patient health records in order to predict the likelihood of death for certain patients, health news organization Stat reported in a thought-provoking piece.

The A.I. system then notifies doctors of its predictions, flagging which patients are at high risk of death to encourage physicians to discuss mortality and the logistics of dying with them.

From the article:

Those kinds of questions are increasingly cropping up among clinicians at the handful of hospitals and clinics around the country deploying cutting-edge artificial intelligence models in palliative care. The tools spit out cold actuarial calculations to spur clinicians to ask seriously ill patients some of the most intimate and deeply human questions: What are your most important goals if you get sicker? What abilities are so central to your life that you can’t imagine living without them? And if your health declines, how much are you willing to go through in exchange for the possibility of more time?

One physician who is testing one of the A.I. systems said that doctors are generally reluctant to tell patients about the technology for obvious reasons. “It can seem very awkward, and like a big shock — this ominous force has predicted that you could pass away in the next 12 months,” Stanford physician Samantha Wang told the publication.