A.I. will be crucial to companies outside of Silicon Valley—and they need a new playbook for it

April 26, 2020, 4:00 PM UTC
A worker assembles mobile phones at an Indian Lava phone manufacturer factory in Noida on August 22, 2019. Sajjad Hussain—AFP/Getty Images

While artificial intelligence has become a ubiquitous topic in the business world, there is still important work to do to translate the promising experiments we see in the news into valuable, practical implementations. Large consumer Internet companies pioneered practical A.I. deployments, but their processes do not necessarily apply in other industries, where A.I. projects face unique challenges.

As a result, we frequently see non-digital companies struggle with A.I. deployment. Manufacturing, for instance, is primed for A.I. transformation, but only 5% of more than 200 manufacturers surveyed by the MAPI Foundation say they have a clearly defined strategy for A.I. In a separate Accenture report that surveyed 1,500 C-suite executives in 16 industries, 76% of respondents said they struggle with how to scale the technology. This stands in contrast to the consumer Internet industry, where large A.I. systems already power everything from producing search results to language translation to targeted advertising.

For A.I. to reach its full potential, those implementing the technology must develop new techniques to enable its deployment across all industries. (My company, Landing AI, helps companies with A.I. adoption.) In particular, companies outside Silicon Valley need to overcome three challenges to increase their odds of success.

First, they must learn to harness small data. The tech giants use vast volumes of data collected from billions of users to train A.I. models. Techniques developed for these big data settings need to be adapted to the much smaller datasets that most other industries have. 

Take the challenge of building an A.I.-powered system for a factory to detect scratches on smartphones. No smartphone manufacturer has a million scratched phones lying around from which it can capture pictures of scratches. Thus, many manufacturers do not have enough data to power conventional A.I. models. Manufacturing A.I. application builders often need to get by with 100 or fewer images.

Fortunately, new small data technologies are starting to make this possible. For example, a new data generation technique may be able to take 10 images of a rare defect and synthesize an additional 1,000 images that an A.I. system can then learn from. Using another method, an A.I. model might first learn to find dents from a large dataset of 10,000 pictures of dents collected from different products and data sources. Having learned about dents in general, it can then transfer this knowledge to detect dents in a specific novel product with only a few pictures of dents. 
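The multiplication step can be pictured with a minimal sketch. The example below models images as small 2D grids of pixel values and generates variants through simple geometric transforms (flips and rotations); the function names and toy data are illustrative, and real synthesis systems typically use learned generative models rather than hand-coded transforms, but the principle — turning a few examples into many — is the same.

```python
# Minimal sketch of small-data augmentation: multiply a few defect
# images into many training examples via geometric transforms.
# Images are modeled as 2D lists of pixel intensities for illustration.

def flip_horizontal(img):
    # Mirror each row left-to-right.
    return [list(reversed(row)) for row in img]

def flip_vertical(img):
    # Mirror the rows top-to-bottom.
    return list(reversed(img))

def rotate_90(img):
    # Rotate the grid 90 degrees clockwise.
    return [list(row) for row in zip(*reversed(img))]

def augment(img):
    """Generate distinct variants of one seed image."""
    variants = [img, flip_horizontal(img), flip_vertical(img)]
    rotated = img
    for _ in range(3):  # 90, 180, and 270 degree rotations
        rotated = rotate_90(rotated)
        variants.append(rotated)
    return variants

# Ten seed images of a rare defect...
seeds = [[[i, 0], [0, i]] for i in range(1, 11)]
# ...become a several-times-larger synthetic training set.
augmented = [v for s in seeds for v in augment(s)]
print(len(seeds), "->", len(augmented))  # 10 -> 60
```

In practice the same idea scales up: each seed image can be cropped, rotated, recolored, or fed to a generative model to produce hundreds of plausible variants for the A.I. system to learn from.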

Such advanced small data techniques may enable A.I. to finally break into traditional industries like manufacturing, agriculture, and health care.

Second, A.I. models serving non-digital firms must bridge the gap between research settings and the real world. Many A.I. systems that achieve high accuracy in a research paper or proof of concept do not perform as well when deployed. 

For example, many research groups have published articles reporting that A.I. can diagnose conditions from x-rays or other medical images with accuracy comparable to, or better than, that of radiologists. So why is the technology still so rarely used in practice?

One reason is that many of these studies are carried out in well-controlled settings where the A.I. learns from and is tested on consistently high-quality data. Doing well in such a setting leads to a successful proof of concept or publication. However, if the same A.I. system is deployed in a hospital where x-ray images are slightly blurrier or the protocol for collecting images is slightly different, it fails to adapt.

One solution is to start by only using A.I. to analyze images on which it has high confidence, while relying on a human radiologist for all other cases. The A.I. then learns from the radiologist and is gradually able to take on more responsibility. 
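This staged rollout can be sketched as a simple routing rule; the function name, threshold value, and case data below are hypothetical, not part of any particular deployed system.

```python
# Sketch of confidence-gated deployment: the model only acts on
# predictions it is highly confident about; everything else is
# routed to a human radiologist, whose labels the model can then
# learn from. The 0.95 threshold is illustrative.

def route_case(prediction, confidence, threshold=0.95):
    """Return (who handles the case, label used)."""
    if confidence >= threshold:
        return ("ai", prediction)
    return ("radiologist", None)  # the human supplies the label

# As the model improves on radiologist feedback, the threshold can
# be lowered gradually so the A.I. takes on more of the caseload.
cases = [("pneumonia", 0.99), ("normal", 0.72), ("fracture", 0.97)]
routed = [route_case(pred, conf) for pred, conf in cases]
ai_handled = sum(1 for who, _ in routed if who == "ai")
print(ai_handled, "of", len(cases), "cases handled by A.I.")  # 2 of 3
```

The design choice here is that the system fails safe: when the model is unsure, responsibility stays with the human expert, and the A.I. earns a larger role only as its demonstrated accuracy grows.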

Third, non-tech companies deploying A.I. must be aware of its potential to disrupt employees, customers, and other stakeholders in the business, and appropriately manage the change the technology brings.

For instance, an A.I. system that helps doctors triage patients in an emergency room affects many—from doctors and intake nurses to insurance underwriters. To keep projects on track, people must be brought on board with A.I. implementation, and their workflow must be adjusted to take advantage of the technology.

I have seen many A.I. teams underestimate the human side of organizational change management. Overcoming this challenge is not easy, but there are steps businesses can take to mitigate disruption.

For one, organizations have to identify all the stakeholders that will be involved with the change process. Managers should either communicate with them directly or find ways to have their colleagues talk to them about what is coming. Many teams make decisions by consensus, so it is important to minimize the odds of any stakeholder blocking or slowing down implementation. 

Next, companies need to budget enough time to implement A.I. properly. They must take the time to understand stakeholders’ roles and beliefs, assess how many of those roles will change, and explain to people what the A.I. will actually do and how it may benefit them.

It’s crucial that the company reassure stakeholders during A.I. implementation. Many people still harbor significant fear, uncertainty, and doubt about A.I. Providing a basic education about the technology eases these conversations. Organizations can also reassure people by rigorously testing and auditing the technology, and showing the results to stakeholders so they’re convinced it works safely.

Organizations should consider beginning A.I. deployment with a pilot that affects a relatively small number of stakeholders. A quick success can then be used as a showcase to get buy-in from a larger group.

PwC estimates that A.I. will generate $15.7 trillion globally by 2030. Much of this value will come from outside Silicon Valley. A.I. is on its way to transforming every industry; the process will be a lot easier if businesses take the right actions along the way.

Andrew Ng is founder and CEO of Landing AI.
