Representatives from the U.S. Marines and Boston Dynamics with Spot, a four-legged robot designed for indoor and outdoor operation.
Photograph by Chip Somodevilla, Getty Images

Non-human "employees" won't make leadership obsolete.

By Anne Fisher
July 23, 2016

Dear Annie: I’ve been following Fortune’s recent articles about artificial intelligence, and how it will transform the business world, with great interest. I’m in my first management job, after going back to school for an MBA four years ago. Since I’m 34 now, I figure my career probably has at least three decades to go, and I’m wondering if the skills I learned in B-school will be obsolete before too long. For example, does emotional intelligence (EQ) still count if your “employees” have no emotions? What skills will managers need to develop instead? — Curious in Cupertino

Dear C.C.: Interesting questions, and all the more so because, at this moment, no one can be entirely sure of the answers. It seems, though, that human leadership skills won’t fade into obsolescence anytime soon—if ever. “Organizations will probably evolve into a two-part structure,” says Thomas Davenport. “People with high EQ will still lead the remaining humans, while other people with expertise in artificial intelligence will ‘manage’ the machines.”

Davenport, a professor of information technology and management at Babson College, teaches analytics and data science in executive programs at Harvard Business School and MIT’s Sloan School. He also co-wrote (with Julia Kirby) a fascinating book you might want to check out, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines.

Even when they have more people than robots reporting to them, Davenport says, managers will increasingly have to oversee some smart machines and make decisions about how to deploy them. So you’ll need some knowledge and understanding of how artificial intelligence works, especially what it can and can’t do. Beyond that, Only Humans Need Apply identifies seven skills that managers can start developing now:

—Design and create machine “thinking.” Despite rapid leaps forward in automated programming—like the new Google system that is even “learning” how to recognize tones of voice and swear words—humans will continue to drive and oversee the creation process, Davenport notes.

—Provide “big picture” perspective. Humans are adept at skills like seeing how a particular solution fits into the whole, knowing that the world has changed in significant ways, and comparing multiple solutions to the same problem. “We also know (usually) whether something ‘makes sense’ or not,” Davenport writes. “Because this type of thinking is not very structured, computers aren’t good at it.”

—Synthesize information across multiple systems. “We humans know that any one system or decision approach is not likely to provide the only possible answer,” says Davenport, adding that some machines can now be programmed to try out multiple methods of solving a problem to see which works best. “But humans do it more often,” he says. So far, we’re better at it, too.

—Monitor how well a machine is working. Davenport points out that A.I. systems are designed to work best in particular contexts and, when circumstances change, are likely to work less well. They don’t know that, however, and “even if they do, they don’t retire themselves from the job,” he writes. Only humans can tell when a system needs to be updated or replaced.

—Know a machine’s weaknesses and strengths. Many cognitive systems, like people, are better at some tasks than others. “An automated commercial underwriting system in an insurance company, for example, might do a great job on florists but be weak in assessing the risks of beauty salons,” Davenport writes. In that case, making sure policies are priced correctly will still be an underwriting manager’s job.

—Gather the information the system needs. Using the example of automated financial planning systems, Davenport notes that some tasks, like mapping someone’s retirement, require data inputs that can only come from human clients, and coaxing them to provide it can be tricky. “A human financial advisor can help,” he writes. “And there are many other settings in which humans can play the same type of role.”

—Persuade fellow humans to take action. Computers can be programmed to make good decisions, but getting people to act on them is something else again. For instance, Davenport writes, A.I. can come up with a perfectly accurate medical diagnosis and treatment plan—but understanding it, explaining it to the patient, and encouraging the patient to, say, lose 20 pounds will still require high-EQ humans.

If you don’t already have one or more of these skills, you’ve probably got time to learn. “No one can afford to be complacent, of course, but it always takes a while for machines to replace people,” says Davenport. He notes that, despite the proliferation of ATMs, the U.S. still has about 500,000 human bank tellers, or roughly the same number as in 1980 (albeit fewer per capita). “There are lots of automated checkout lanes in stores, but there are still plenty of checkout people too,” he adds. “And when we get those automated voicemail systems on the phone, we still punch ‘O’ and yell ‘Representative!’”

The single most crucial ability that managers will need to cultivate in an ever more automated world, Davenport believes, is one you probably tackled in B-school: Making smart judgment calls, in response to what’s happening in the business and the larger world. That includes being able to evaluate how well artificial-intelligence systems are working for you, or even whether it makes sense to use them at all. “If the financial collapse of 2008 taught companies anything,” Davenport observes, “it’s that there are few faster ways to lose a whole lot of money than by relying on a bad algorithm.”

Talkback: If you “manage” an automated system (or more than one), what new skills have you had to pick up or develop? Leave a comment below.
