
This is not a drill: The coronavirus pandemic is testing A.I.’s ability to handle extreme events

March 24, 2020, 2:14 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

In finance, they are called black swans: rare, extreme events that come along only once a decade, or even once a century, and can send markets reeling. The global coronavirus pandemic is certainly one.

In data science and artificial intelligence circles, those same kinds of events go by different names: edge cases, corner cases, or “out-of-distribution” datapoints. And most A.I. systems do not cope well when confronted with them.

The coronavirus pandemic is providing a real-world test of how robust many companies’ new-fangled A.I. systems really are.

Most of today’s machine learning systems need to be trained on lots of historical data. But what happens when the present suddenly stops looking like the recent past?

Most A.I.-driven trading algorithms, for instance, have only been deployed in the last five years. Their training data might not even include the 2008 financial crisis. It almost certainly doesn’t include anything like the massive demand shock we’re seeing across all industries right now.

So, some A.I.-driven investment strategies that were supposed to do well in all kinds of different market conditions have actually performed much worse than expected in the past few weeks.

Another example: Ocado, a popular online grocery business in the U.K., has seen traffic to its website spike four times higher than any previous peak the company has experienced in its 20-year history. In a conference call with reporters Thursday, Ocado spokesman David Shriver said so many visitors went to its website that the company’s cybersecurity software, which uses machine learning to detect aberrant behavior, assumed the site was experiencing a denial of service cyberattack and moved to block those connections. Luckily, human operations managers intervened to prevent that from happening.

What can a company do to make sure its machine learning models are able to cope with these extremes? Jay Schuren, a data scientist at DataRobot, a Boston startup that helps large corporations create and run machine learning models, has tips.

  • It’s vital that companies monitor their machine learning models in real time. A grocery that normally sells 22 cartons of milk a minute should know immediately if it suddenly starts selling 10 times that amount. Not enough businesses do this today, Schuren says.
  • Businesses need to be proactive about which machine learning models and which input variables within the models are most sensitive to extreme events. Anything that depends on human behavior—from electricity demand to shopping—will probably change because of Covid-19, he says.
  • Businesses need to think about the risks associated with different algorithms. If a system for placing ads goes haywire, that’s not good, but the consequences are a lot less severe than a system dispatching $1 million worth of products to a store that’s now shuttered due to social distancing measures.
  • A company’s data scientists should sit down with the business’s subject-matter experts and stress-test a system in simulation: What items might customers want in a crisis? And what will happen to your supply management algorithm if you do get thousands of people wanting to purchase six months’ worth of toilet paper in a week?
  • Data scientists can rejigger which inputs an A.I. system uses so the software is less thrown off by extreme variations: for instance, a model that uses the percentage change in prices, rather than raw prices, as an input variable will return to normal functioning faster.
  • Companies should look for proxies that might exist in their data: Does this look like what happened during Hurricane Sandy or what happened during the 1973 oil crisis?
  • Finally, data scientists need to think carefully about whether they want the current coronavirus extremes included in future training data. For some systems, doing so might inoculate the software against being caught off guard by a similar crisis. But in many other cases, it might have the opposite effect, leading the system to falsely assume the crisis reflects a “new normal.” All those people stockpiling toilet paper today may have so much on hand that they won’t need to buy more for months, producing a sudden crash in demand in the near future that the A.I. system won’t foresee, even though a human analyst certainly would.
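Schuren’s first tip, monitoring models and their inputs in real time, can be sketched as a rolling-baseline anomaly check. The class name, window size, and spike threshold below are illustrative assumptions, not DataRobot’s actual tooling:

```python
from collections import deque

class SalesMonitor:
    """Rolling-baseline monitor: flags when a metric spikes far above
    its recent average. Window and threshold are illustrative."""

    def __init__(self, window=60, spike_factor=10.0):
        self.history = deque(maxlen=window)  # recent per-minute readings
        self.spike_factor = spike_factor

    def observe(self, value):
        """Record a reading; return True if it exceeds
        spike_factor times the rolling mean of prior readings."""
        if self.history:
            baseline = sum(self.history) / len(self.history)
            is_spike = baseline > 0 and value > self.spike_factor * baseline
        else:
            is_spike = False  # no baseline yet
        self.history.append(value)
        return is_spike

# A store selling a steady 22 cartons of milk a minute, then a sudden surge.
monitor = SalesMonitor(window=60, spike_factor=10.0)
for minute, cartons in enumerate([22] * 60 + [250]):
    if monitor.observe(cartons):
        print(f"minute {minute}: {cartons} cartons/min is anomalous")
```

A real deployment would monitor many metrics at once and feed alerts to the operations team rather than print them, but the core idea — compare the present against a short rolling window of the recent past — is the same.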

Schuren says that companies could benefit from building families of different types of machine learning models for different conditions: one type that is more economically efficient, but more fragile, that they use in normal circumstances, and another that is maybe less efficient, but also less prone to break when confronted with abnormal data, that they can fall back on during extreme events.
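The two-model fallback Schuren describes could be implemented, in its simplest form, as a guard: use the efficient model while inputs stay inside the range seen in training, and switch to a cruder but more robust model otherwise. Both models and the range check below are hypothetical placeholders:

```python
def predict_with_fallback(x, primary, fallback, train_min, train_max):
    """Route a prediction to the efficient model only when the input
    lies inside the range observed during training; otherwise use the
    robust fallback. Both models are placeholders for illustration."""
    if train_min <= x <= train_max:
        return primary(x)   # efficient but fragile model
    return fallback(x)      # simpler, more robust model

# Hypothetical demand models: a fitted curve vs. a flat historical average.
fitted = lambda x: 1.5 * x + 4.0   # stands in for a trained model
average = lambda x: 40.0           # stands in for a conservative baseline

normal = predict_with_fallback(20, fitted, average, train_min=0, train_max=100)
extreme = predict_with_fallback(500, fitted, average, train_min=0, train_max=100)
print(normal, extreme)  # 34.0 40.0
```

In practice the trigger would be a statistical out-of-distribution test across many features rather than a single range check, but the structure — a family of models plus a rule for switching between them — is what the advice describes.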

On that note, here’s the rest of the news in A.I. this week.

Jeremy Kahn


A.I. startups helped spot the COVID-19 pandemic and may enable faster response to future outbreaks. My Fortune colleague Aaron Pressman looks at the A.I. startups whose technology was able to provide an early warning of the emerging COVID-19 outbreak in China back in autumn, predicting how the virus would spread. The companies, which include Toronto-based BlueDot, Boston-based HealthMap and San Francisco startups Kinsa and Metabiota, as well as the giants Google, Facebook and Tencent, use a variety of A.I.-based techniques, including natural language processing and predictive analytics based on flight booking data, to spot the emergence of mysterious diseases.

Intel to release experimental "neuromorphic" chip. Neuromorphic chips try to mimic in silicon the sparse connections found in the human brain, which scientists believe partly account for the brain's power efficiency compared with traditional silicon computer chips. Semiconductor giant Intel is going to release its version of the potentially revolutionary hardware, which it is calling Pohoiki Springs, later this month to a select group that includes academic researchers, government labs and about a dozen companies including Accenture and Airbus, which will be able to access the devices through Intel's cloud-computing network, reports The Wall Street Journal.

Add autonomous vehicles to the list of COVID-19 casualties. The social distancing measures put in place to fight the pandemic have forced autonomous vehicle companies to suspend testing of their self-driving cars on public roads. Tech publication VentureBeat reports that Alphabet's Waymo, GM's Cruise, Uber, Aurora, and Argo AI have all halted their testing programs, largely to limit contact between passengers and safety drivers. In the case of Waymo, which has been running a fully autonomous ride-hailing service in Phoenix, Arizona, no staff members could disinfect cars between customer rides.

Of course, some self-driving companies were in trouble even before coronavirus. The self-driving truck startup Starsky Robotics has shut down, according to a Medium post from its founder, Stefan Seltz-Axmacher. He didn't blame coronavirus for his company's demise. Instead, Seltz-Axmacher said the company struggled with the spiraling costs of perfecting its technology, especially to handle rare edge cases. "Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems," he wrote.

SenseTime pulls IPO plans for now and will seek up to $1 billion in private funding. The Chinese facial recognition startup, which is backed by funders including Alibaba and SoftBank and considered one of the most promising A.I. companies anywhere, was expected to pull off a blockbuster IPO, seeking to raise at least $750 million on Hong Kong's stock exchange later this year. But the COVID-19 pandemic, plus the fact that the U.S. blacklisted SenseTime and eight other Chinese tech companies for their alleged complicity in human rights abuses in China, has forced SenseTime to shelve those plans for now. Instead, the company will try to raise between $500 million and $1 billion in additional funding from its current investors, sources told the Nikkei Asian Review.


  • Axon, a Seattle security technology company, has hired Yasser Ibrahim as its senior vice president of artificial intelligence. Ibrahim is joining Axon from Amazon, where he most recently headed distributed machine learning for Alexa AI. He also worked on the computer vision technology underpinning Amazon's "Amazon Go" cashier-less stores. 


Can A.I. help diagnose COVID-19 from CT scans? Researchers from six different medical centers across China have created a deep-learning algorithm they say can differentiate CT scan images of pneumonia caused by COVID-19 from those of other kinds of pneumonia, with accuracy above 96%. The study was published in the medical journal Radiology.

But some researchers have criticized this and other computer vision studies recently published on COVID-19. Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital in Australia, wrote a whole blog post debunking many of the studies, saying that almost all of them suffer from flawed methodology, including severe selection bias, inadequate control groups and a poor selection of metrics. "That 97% sensitivity report is unfounded and unbelievable," he wrote.  


Privacy could be the next victim of the coronavirus—by David Meyer

Should the government increase surveillance to help fight the coronavirus?—by Robert Hackett

How Samasource’s CEO helped turn a non-profit into a fully sustaining for-profit—by Adam Lashinsky and Aaron Pressman

How A.I. is aiding the coronavirus fight—by Aaron Pressman

Some of these stories require a subscription to access. Thank you for supporting our journalism.


In a break from all the coronavirus-related news, I want to highlight something completely different: the growing use of A.I. to unlock archaeological secrets. Back in October, DeepMind showed that it could train a deep-learning system to decipher and reconstruct passages in ancient Greek inscriptions, many of which are damaged and have letters, words, or whole passages missing. Now, researchers at Southwest University in China have used a similar method to decipher 3,000-year-old texts carved onto ox bones and tortoise shells, known as oracle bones. The system came close to matching the abilities of scholars who have spent their whole careers studying them. In other uses of A.I. in archaeology, researchers have used computer vision systems to try to detect likely excavation sites from aerial imagery and to help classify pottery shards found on digs. A.I. can even help archaeologists figure out how to reassemble those shards into complete vases and sculptures.

What can't A.I. do yet? Well, it can't help with arguably the most difficult part of an archaeologist's job: figuring out exactly how artifacts were used and what role they played in ancient civilizations. What exactly were the customs of those ancient cultures? That, for the moment, still requires human imagination.