Learning to love the bot: Managers need to understand A.I. logic before using it as a business tool

September 26, 2019, 10:30 AM UTC

Hong Kong-based investment firm Deep Knowledge Ventures made headlines in 2014 by appointing a computer algorithm to its corporate board. The firm, which has about 100 million euros under management, wanted a way to enforce a data-driven approach to investing, rather than relying on human intuition and personal interactions with founders. Managing partner Dmitry Kaminskiy says the algorithm served mostly as a veto mechanism—if it spotted red flags, Deep Knowledge wouldn’t invest.

In the five years since Deep Knowledge’s A.I. got its board seat, there hasn’t exactly been a stampede of companies following suit. In fact, Deep Knowledge itself shifted focus and no longer uses the algorithm. “Today, big strategy decisions are based on intuition”—that is to say, made by humans—“because we have a data shortage,” says Brian Uzzi, a professor at Northwestern University’s Kellogg School of Management. Firms simply don’t make enough of these major decisions to train an algorithm effectively.

However, as more data is gathered, or as models that can compensate for scarce data gain commercial traction, it is only a matter of time before A.I. takes on more strategic roles: providing insights on which M&A deals to pursue, which geographies to enter, or whether to match a competitor’s product offering.

This is creating a backlash of sorts. Some management gurus and theorists have rushed to get ahead of the A.I. trend by, counterintuitively, trying to slow it down: flatly telling corporate boards not to abandon human intuition and common sense.

Earlier this year, Dirk Lindebaum, from Cardiff University in the U.K., and Mikko Vesa and Frank den Hond, both from the Hanken School of Economics in Helsinki, penned a provocative essay warning corporate directors against becoming too infatuated with A.I. To make their point, the trio drew parallels with the classic E.M. Forster science fiction story “The Machine Stops,” in which humans become so dependent on an all-powerful machine that they lose the ability to think and act independently. “We give away more and more autonomy,” Lindebaum tells Fortune. “Eventually, you come to a situation where you effectively hit the end of choice. You just follow the algorithm blindly.”

Commentators including Lindebaum have pointed to the fatal crashes of two Boeing 737 Max 8 airplanes, which followed problems with the planes’ automated flight-control software, as a wake-up call. The pilots didn’t fully understand the system and had no easy way of determining that it was making decisions based on faulty sensor data. Such “automation surprise” is of particular concern in A.I. because many of today’s powerful machine-learning algorithms are black boxes. Why they predict a certain outcome is opaque, even to those who write the code.

In life-or-death applications such as airliner autopilots or autonomous vehicles, Lindebaum’s advice would appear sage. But in the realm of business strategy, could managers be missing out on golden insights if they don’t learn to trust the algorithms they employ?

Consider the 2016 contest between Lee Sedol, a world-class player of the ancient strategy game Go, and the AlphaGo algorithm created by Alphabet-owned A.I. firm DeepMind. In the 37th move of the second game, AlphaGo did something so unusual that, at first, Go experts commenting on the match assumed the person responsible for physically placing AlphaGo’s stones on the board had made a mistake. AlphaGo itself estimated the odds that a human player would make the same move in that situation as one in 10,000. And yet, for reasons puzzling even to DeepMind’s researchers, it also saw the move as a clear winner. (And AlphaGo did, indeed, go on to win.)

In the face of such “alien” insight, the emerging management shibboleth—don’t abandon common sense or human experience—seems wholly inadequate. Following it would mean never playing move 37. On the other hand, how will a board tell the difference between move 37 and a 737 crash?

Resolving this tension won’t be easy. Robert Seamans, a business professor at New York University who teaches courses on A.I., says this is exactly why it is critical that managers be able to understand the basis on which an A.I. system works. It isn’t just about making the “right” strategic decision; it is also about execution. Grasping the logic behind a decision, he says, is essential for getting the buy-in—from employees and investors—needed to implement it. “It is not about just spitting out a probabilistic outcome,” he says. “You will need everyone on board with the course of action, and if you can’t explain the rationale, you won’t be able to do that.” Leadership is about more than decision-making; it’s also about persuading others to follow.

A version of this article appears in the October 2019 issue of Fortune with the headline “Playing the Surprise Move.”
