
To get the most from your company’s A.I. investment, consider the uniqueness of A.I.’s risks and benefits

December 2, 2022, 10:30 AM UTC
A John Deere combine harvester, which uses GPS, artificial intelligence and sensor technology to help improve crop yields, on display at the Consumer Electronics Show in Las Vegas.
Robert Lever—AFP/Getty Images

How can executives leverage the unique learning capabilities of A.I. to create new and possibly unexpected value for their business? John Deere, an agricultural equipment maker, provides an example. 

The company initially used A.I. to automate its farming equipment. Over time, however, new benefits emerged: the A.I. was learning how to improve equipment performance while also enabling customers to learn how best to use the equipment and make better decisions about their crops. This created a learning loop between users and A.I., enabling the company to redesign its business model as a precision agriculture solutions provider. 

On the other hand, what happens when an A.I. algorithm learns the wrong model and evolves in unpredictable directions, or adjusts too slowly to a fast-moving environment? Microsoft's Tay, the chatbot that was radicalized by users and shut down less than 16 hours after its launch, offers a bleak and well-known illustration. More recently, Zillow, the online real estate company, found a costly answer to this question: within two years, its A.I. models contributed to a loss of more than $500 million as they failed to adjust to major pandemic-driven changes in the housing market. 

In all these situations, the outcome of the A.I. investments played out differently than planned, largely because of A.I.'s unique capability to learn, successfully or not, and to support the decision-making and learning of its users. These cases underscore why businesses should reconsider their A.I. investment processes with these unique features in mind; otherwise they risk unexpected losses and may pass up good investments whose potential benefits go unnoticed.  

Understanding the new risks and benefits that come with the learning capabilities of A.I.—which, among other things, enable predictions and support decision-making—is key. We describe the two main types of each as a starting point. Because learning happens over the lifetime of an A.I. system, these risks and benefits may only materialize well after the technology is adopted. 

A.I. benefits to consider

First, in terms of benefits, A.I. learning capabilities can lead to innovative business opportunities based on new offerings, as the John Deere example shows; on creative insights; or on algorithms able to learn from users’ or consumers’ behavior. 

Unilever illustrates the surprising insights one can gain from data and A.I. For years, the company has used A.I. to synthesize customer insights and understand customers' online reactions to its products, enabling Unilever's marketing team to unearth unexpected patterns. For example, the team found around 50 songs whose lyrics mentioned "ice cream and breakfast," revealing an opportunity to develop a breakfast-flavored ice cream.

Similarly, while Netflix’s cutting-edge A.I.-based content recommendation systems had the short-term impact of increasing time spent on its platform by existing customers, after some time, the data from those systems that Netflix amassed and analyzed enabled the company’s foray into original content production. It took the company six years to collect sufficient viewer data to produce House of Cards, its first original series.

Second, over time, the ability of A.I. to make predictions and support decisions, much like a team player, can have a significant impact on culture. Potential impacts include improvements in organizational learning, collaboration, clarity of roles, and team morale.

For example, KLM launched several A.I. use cases as part of its effort to digitalize airline operations. One of them used A.I. to predict which passengers were most likely to miss their flights, which in turn could help the airline reduce delayed departures. The direct financial impact of this use case might seem relatively low compared with the company's other A.I. initiatives, such as fleet management. However, it improved the company's culture by helping crew and maintenance teams align and coordinate throughout the departure process. Based on A.I.'s predictions of which passengers were most likely to arrive at the airport yet miss their flights, KLM's crew members could place a red tag on those passengers' luggage so that baggage handlers would load it onto the plane last, allowing them to speedily unload it if the passengers did miss their flight. In effect, A.I.'s predictive capability enabled the teams to work together seamlessly to achieve on-time departures and improve coordination and effectiveness.

A.I. risks to ponder

Turning to A.I. risks: the fact that an A.I. solution is technically feasible doesn't mean society will find its use morally and ethically acceptable, and, more importantly, it doesn't mean it will stay that way as the A.I. learns and evolves. A.I. solutions may face public backlash as they evolve in unpredictable directions, especially once they reach large-scale usage. 

Consider, for instance, the A.I. algorithm used by a drugstore chain to identify how it should redeploy its store coverage to optimize sales. The A.I. was able to predict the profits or losses of stores by using a large number of micro-segments, many of which may not have been considered before. But as a consequence, it started to consistently recommend store closures in neighborhoods with large minority and low-income populations, creating serious discrimination risks that ran counter to the company's values and exposed it to reputational damage.  

A key risk to consider is the learning limitations of A.I. models: the environment in which these models operate also evolves, and sometimes the algorithms don't adapt fast enough. The Zillow Offers program is a reminder of that. Beginning in 2019, Zillow used A.I. trained on more than 100 million home valuations to predict home prices and automate the buying, fixing up, and selling of homes. The program generated up to $2.7 billion in revenue in the first year after its launch. 

Unfortunately, as COVID-19 shook up the housing market in unprecedented ways, the algorithm started to deliver large price-prediction errors, leading Zillow to buy more than 7,000 overpriced houses. Failing to react quickly, the company shut down the program by the end of 2021; it cut a quarter of its staff and wrote down losses of more than $500 million on its remaining homes. 

Interestingly, the financial sector, where models have been used for many years, offers lessons about this risk. Changes in market conditions, or models whose parameters were not properly determined from data, are key aspects of what the industry calls model risk, which is analogous to the risks of A.I. models. Banks' financial losses due to model risk are a good reminder for companies in every sector leveraging A.I. 

A change in approach

Considering A.I.-specific risks and benefits when investing in A.I. is necessary, but doing so may be challenging and may require changes, including new practices and roles. For example, as in most banks, the chief risk officer (ideally also the head of an A.I. committee) should be actively involved in investment decision processes and sign off on every A.I. project. Most banks have created joint decision-making processes involving executives from both the business and risk functions; they evaluate every risky business opportunity and "red flag" decisions that must be escalated. 

Similarly, an A.I. board must foster links with the entire organization, not just the company's board and top management team, to detect and mitigate new, and possibly unanticipated, A.I. risks as well as to consider benefits. These factors may eventually determine the success or failure of an investment in these technologies. 

Read other Fortune columns by François Candelon

François Candelon is a managing director and senior partner at BCG and global director of the BCG Henderson Institute.

Theodoros Evgeniou is professor at INSEAD, BCG Henderson Institute Adviser, member of the OECD Network of Experts on A.I., former World Economic Forum Partner on A.I., and cofounder and chief innovation officer of Tremau.

Maxime Courtaux is a project leader at BCG and ambassador at the BCG Henderson Institute.

Some companies featured in this column are past or current clients of BCG. 
