
How Instacart fixed its A.I. and keeps up with the coronavirus pandemic

June 9, 2020, 4:11 PM UTC

Like many companies, online grocery delivery service Instacart has spent the past few months overhauling its machine-learning models because the coronavirus pandemic has drastically changed how customers behave.

Starting in mid-March, Instacart’s all-important technology for predicting whether certain products would be available at specific stores became increasingly inaccurate. The accuracy of a metric used to evaluate how many items are found at a store dropped from 93% to 61%, tipping off Instacart’s engineers that they needed to retrain the machine-learning model that predicts an item’s availability at a store. After all, customers could get annoyed being told one thing—that the item they wanted was available—when in fact it wasn’t, resulting in products never being delivered. “A shock to the system” is how Instacart’s machine-learning director Sharath Rao described the problem to Fortune.

The reason is that the data Instacart fed into its machine-learning models about shopping habits failed to account for the new coronavirus reality. Normally, Instacart customers bought products like toilet paper only occasionally. But then, almost overnight, it was as if they were preparing for a months-long camping trip and wiped out store supplies. Many customers stockpiled bathroom tissue, wipes, and hand sanitizer, as well as staple foods like eggs and cheese.

Rao explained several things Instacart did to fix the problem. It’s a lesson for other businesses that use machine learning.

Instead of training a machine-learning model on several weeks of data (in this case, the items that delivery people mark as “found” or “not found” at stores), Instacart now uses up to ten days of data. While weeks of data may provide insights about long-term trends, using data only from recent days produces more accurate results because people’s shopping habits are in flux. As Rao explained, Instacart had to make a tradeoff between the volume of data used to train its model and the “freshness” of that data.
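The freshness-versus-volume tradeoff Rao describes can be sketched as a sliding window over training data. A minimal sketch, assuming a hypothetical event format (the `timestamp` field and function name are illustrative, not Instacart’s actual pipeline):

```python
from datetime import datetime, timedelta

def select_training_window(events, window_days=10, now=None):
    """Keep only the most recent `window_days` of found/not-found events.

    A shorter window trades training-data volume for freshness, which
    matters when shopping behavior is changing week to week.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return [e for e in events if e["timestamp"] >= cutoff]
```

Shrinking `window_days` discards long-term seasonal signal but keeps the model anchored to how people are shopping right now.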

In the last week alone, the nationwide protests over the death of black Minneapolis resident George Floyd while in police custody had a big impact on shopping patterns, whether grocery stores were open, and whether they were fully stocked.

“The world is changing so fast,” Rao said. “Every day looks like a Monday for Instacart,” he added, referring to the fact that every day is now unusually busy.

Instacart also increased the number of times it “scores” its model for predicting the likelihood that a certain product will be in stock. Before, Instacart would typically score its model (based on hundreds of millions of items) every three hours, but it now does so every hour to better account for the fast-changing world. An item like a case of soda that’s marked with a lower score has a lower chance of being at the store when the delivery person arrives, prompting Instacart to suggest that users mark a potential replacement item.

Instacart also performed a wonky task known as “hyper-parameter optimization,” which required its machine learning engineers to adjust certain settings of the model that influence the accuracy of its predictions. Although this task requires machine learning expertise, it can be likened to someone “pushing buttons” to get a device to work properly, Rao explained. Think of an airplane pilot who knows how to “press the right buttons” in a complex aircraft control system to ensure a smooth landing during a sudden storm.
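In its simplest form, hyper-parameter optimization is an automated search over those model settings. A minimal grid-search sketch, where the parameter names and scoring function are placeholder assumptions rather than anything Instacart disclosed:

```python
import itertools

def grid_search(param_grid, evaluate):
    """Try every combination of hyper-parameter settings and return the
    combination with the highest validation score."""
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # e.g. validation accuracy of the model
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In practice, teams often use smarter strategies than an exhaustive grid (random search, Bayesian optimization), but the “pushing buttons” idea is the same: systematically try settings and keep what scores best on held-out data.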

For all of its efforts, Instacart’s technology still isn’t as accurate as it was before the pandemic. The metric used to evaluate Instacart’s model that predicts an item’s availability is now correct about 85% of the time, according to an Instacart graph. Because that metric keeps shifting, Instacart’s team must continually retrain its machine-learning models to keep up.
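Watching for that kind of metric drift can be reduced to a simple threshold check that flags when retraining is due. A minimal sketch, with the baseline and tolerance values chosen loosely from the 93% pre-pandemic figure above (they are illustrative assumptions, not Instacart’s rule):

```python
def needs_retraining(recent_found_rates, baseline=0.93, tolerance=0.05):
    """Flag the availability model for retraining when the rolling
    "found at store" rate drops more than `tolerance` below its
    pre-pandemic baseline."""
    current = sum(recent_found_rates) / len(recent_found_rates)
    return current < baseline - tolerance
```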

Additionally, while Instacart’s machine-learning model performed well on the old data it was trained on, in practice its predictions grew less accurate as the world kept changing.

It’s part of the challenge of fine-tuning a machine learning model amid uncertainty.

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

Story updated at 2:30 PM ET on Tuesday with more information about Instacart’s metrics.

A.I. IN THE NEWS

IBM waves goodbye to facial recognition. IBM said it would no longer sell facial-recognition and related analysis technology. The company announced the decision as part of a letter to Congress in which it proposed policy recommendations "to advance racial equality in our nation." We'll discuss this a bit more in the Brain Food section of this week's Eye on A.I. newsletter.

The limits of self-driving cars. A study by the Insurance Institute for Highway Safety said that autonomous vehicles won’t be able to prevent all auto accidents that are caused by human error, the Associated Press reported. While self-driving cars can prevent accidents involving “sensing and perceiving” errors, which include accidents caused by distracted or intoxicated drivers, they’ll have a tougher time preventing accidents caused by “prediction,” “planning,” and “execution” errors, which include “incorrect evasive maneuvers or other mistakes controlling vehicles,” the article said. From the article: “For example, if a cyclist or another vehicle suddenly veers into the path of an autonomous vehicle, it may not be able to stop fast enough or steer away in time…”

Robot chef gets the axe. Food delivery company Deliveroo has ended a research lab that was developing robotic food preparation technology, including “fryer bots,” The Telegraph reported. The move was part of a plan to cut costs, the report said, and underscores the challenges facing companies attempting to create capable robots that can make pizzas and cappuccinos, among other meals and drinks.

Healthcare A.I. coming to Dubai. Healthcare IT firm Cerner said it would debut an A.I. research center in the United Arab Emirates over the next few months, in partnership with American Hospital Dubai. Cerner said the research center will focus on creating healthcare technology for people at home and improving care related to the Covid-19 pandemic. From Cerner: The AI research center will also allow American Hospital Dubai to leverage big data analytics to better understand the healthcare needs of the UAE population.

Government wants to get cloudy. Federal lawmakers have debuted legislation intended to spearhead the creation of a “national cloud computing infrastructure” for A.I. research, according to a report by Nextgov. The goal is to ensure that the U.S. leads China in A.I. efforts, with the lawmakers warning that “China could overtake the United States in R&D spending within the next decade.” It’s unclear from the report whether any specific cloud computing vendors would be involved with the proposed project.  

EYE ON A.I. TALENT

WarnerMedia hired Richard Tom to be the media giant’s chief technology officer. Tom was previously the CTO of digital entertainment at Verizon and CTO and senior vice president at Hulu.

Cerner Corp. picked Jerome Labat to be the health care IT firm’s CTO. Labat was previously the CTO of IT company Micro Focus and CTO of Hewlett Packard Enterprise’s software unit. 

The International Research Centre for AI under the auspices of UNESCO appointed Mark Minevich to chair its AI Policy Committee. Minevich is an advisor to Boston Consulting Group and the chief AI officer and venture advisor for Canadian Growth Investments.

EYE ON A.I. RESEARCH

Deep learning comes to architecture. Researchers from the University of Victoria in British Columbia, Canada, published a paper about using deep learning to identify symbols (drawings that represent objects like kitchen sinks, entry doors, and ovens) in architectural floor plans. One of the deep learning system’s highlights is that it can still accurately identify a symbol even when it is covered or cluttered by other markings in the floor plans, the paper’s authors wrote. The researchers also said they are “currently in the process of securing permissions from various architectural firms to release a public dataset of real-world architectural plans.” The paper was accepted to the upcoming CVPR2020 Workshop on Text and Documents in the Deep Learning Era online conference.

A.I. to help you stand up straight. Researchers from Northwestern Polytechnical University in Xi’an, China and The City College of the City University of New York published a paper in the Computer Vision and Image Understanding journal that surveys the various deep learning systems used to measure people’s posture in photos and videos. The paper’s authors said measuring posture in images and videos can benefit a number of tasks, including improving the quality of human motion-capturing systems used in movies and animations; aiding physicians involved with rehabilitation and physical training; and creating more capable self-driving car systems that “can respond more appropriately to pedestrians and offer more comprehensive interaction with traffic coordinators.”

FORTUNE ON A.I.

How do you move a technology conference with 30,000 attendees online?—By Jeremy Kahn

After the tragedy of George Floyd’s death, where do A.I. and policing go from here?—By Jonathan Vanian

The digital transformation is no joke—By Adam Lashinsky

Facebook to label state-owned media outlets’ posts, pages, and ads—By Danielle Abril

BRAIN FOOD

A.I. in the aftermath of George Floyd. The recent protests related to George Floyd, who died after being pinned to the ground by a white police officer, have put the spotlight on A.I., which researchers like Joy Buolamwini and Timnit Gebru have shown over the years can be prone to bias, specifically facial-recognition technology that works better on white men than on women and people of color. Now, as my colleague David Meyer reports, IBM has ended the practice of selling facial-recognition technology—“the boldest move yet by a Big Tech firm to repudiate the discriminatory use of the technology.”

In a letter to lawmakers, IBM CEO Arvind Krishna said that the company “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.”

Krishna also supported parts of a new bill proposed by lawmakers that would “reform police rules, in order to combat misconduct and racial discrimination,” in the aftermath of George Floyd’s death.

As CNBC notes, "IBM’s facial recognition business did not generate significant revenue for the company," so making the decision to end the practice of selling tech was likely easier to do considering it wasn't a huge business. But it's still a big deal, considering the business of selling facial recognition tech could eventually become a big money-maker if more companies adopt the technology.

For an interesting look at the history of how technology can perpetuate racism, read this piece in MIT Tech Review by New York University professor Charlton McIlwain.

From the article:

If we don’t want our technology to be used to perpetuate racism, then we must make sure that we don’t conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate.