Why do so few businesses see financial gains from using A.I.?

A new report from Boston Consulting Group and MIT's Sloan Management Review says companies must create virtuous circles of learning between A.I. and humans in order to realize financial gains from the technology.

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Is artificial intelligence giving a big bottom-line boost to most companies? No is the resounding answer, according to a new study out today from the Boston Consulting Group and MIT Sloan Management Review. It found that only 10% of businesses have seen “significant financial benefits”* from increased revenue or cost savings.

But more and more companies are adopting A.I. Of the 3,000 companies surveyed across 29 industries and 112 countries in the spring of 2020, 57% had A.I. pilots underway or were using A.I. in full-scale deployments, up from 44% in 2018. And 59% of these companies say they now have an “A.I. strategy,” up from 43% last year.

So why are so few businesses seeing any real gains from the technology? The answer lies in how companies are configuring their organizations to use A.I. The most successful A.I. adopters, the report’s authors say, create a virtuous circle of learning between human workers and A.I. systems, in which each provides valuable insights to the other.

In one example, at the carmaker Porsche, an engineer had a Eureka moment while getting coffee from an automated machine. He suddenly realized that the machine made a different sort of sound when it was making a good cappuccino versus a watery one. This provided the inspiration for Porsche to create an A.I.-based system to detect anomalies in car parts by having the software listen for subtle differences in the parts’ sounds. The A.I. itself could never have suggested that Porsche look at acoustic data—that required human imagination. But A.I. was perfect for implementing the system.
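
For the technically curious, here's a minimal sketch of how that kind of acoustic anomaly detection can work. To be clear, this is an illustration under assumed details, not Porsche's actual implementation: it summarizes each recording as coarse spectral features and flags clips that deviate from the sound profile of known-good parts.

```python
# Minimal, hypothetical sketch of acoustic anomaly detection.
# Real systems would use richer features (e.g., mel spectrograms)
# and far more data than this toy example.
import numpy as np
from sklearn.ensemble import IsolationForest

def spectral_features(clip, n_bands=32):
    """Summarize an audio clip as average energy in coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

# Stand-in training data: recordings of parts known to be good.
rng = np.random.default_rng(0)
good_clips = [rng.normal(size=4096) for _ in range(200)]
X_train = np.stack([spectral_features(c) for c in good_clips])

# Learn the "normal" acoustic profile; anything far from it is suspect.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

def sounds_anomalous(clip):
    """True if a part's sound deviates from the learned 'good' profile."""
    return detector.predict(spectral_features(clip)[None, :])[0] == -1
```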

The report identifies five “modes” of human-A.I. interaction: the A.I. decides and implements a decision on its own; the A.I. makes a decision that a human implements; the A.I. makes a recommendation to a human, who retains control of the decision; the A.I. generates insights from data that inform a human’s decision-making; and humans make decisions that an A.I. system evaluates only after the fact.

The most successful companies were more likely to use multiple modes of interaction, with almost a third of them using all five modes and an additional third using three or four different modes. Those businesses that used all five modes were six times more likely to see financial gains from A.I. than those that relied on just one kind of interaction.

Critically, the businesses that saw the biggest gains from A.I. knew when to alter these modes to suit different kinds of situations. Walmart uses an A.I.-based system to present stocking recommendations to store managers. The managers can agree or disagree with them and provide feedback on what they would change. During the COVID-19 pandemic, when consumers’ purchasing behavior suddenly shifted, managers rejected many more of the A.I. system’s recommendations. But this, along with the managers’ feedback, provided new training data for the software. After retraining the A.I., Walmart found that its managers could rely more on the A.I.’s recommendations again.
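
The pattern Walmart describes, in which human overrides become fresh training data, is straightforward to sketch in outline. The code below is a hypothetical illustration of that loop, not Walmart's actual system; all names and structure here are assumptions.

```python
# Hypothetical sketch of a human-in-the-loop retraining cycle: every
# manager decision (accept or override) becomes a labeled example, and
# the model is periodically refit on the accumulated history.
class FeedbackLoop:
    def __init__(self, model):
        self.model = model       # anything with fit(X, y) / predict(X)
        self.X, self.y = [], []  # history of features and final decisions

    def recommend(self, features):
        """The A.I.'s stocking suggestion for one store/item."""
        return self.model.predict([features])[0]

    def record_decision(self, features, manager_choice):
        # The manager's final choice, not the A.I.'s suggestion, is the
        # ground truth the model trains on next time.
        self.X.append(features)
        self.y.append(manager_choice)

    def retrain(self):
        # Refit on everything seen so far, overrides included. During a
        # shock like COVID-19, the overrides dominate and pull the model
        # toward the new purchasing behavior.
        if self.X:
            self.model.fit(self.X, self.y)
```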

But configuring a business to take advantage of A.I. requires deep organizational change. BCG and Sloan Management Review found that companies that made extensive changes to many business processes were five times more likely to reap financial rewards from using A.I. than those that made only small changes to organizational structure and processes.

Shervin Khodabandeh, a BCG senior partner and one of the report’s authors, says that too many companies draw a false analogy between the adoption of A.I. and another technology overhaul that had a big impact on business processes: enterprise resource planning (ERP) systems in the 1990s. There’s a big difference, he says. ERP systems tended to force all businesses to adopt similar processes for key administrative functions.

“There is no generalized, standardized A.I. process,” he says. “An A.I. solution for one company is not generalizable to another company even in the same sector.”

BCG likes to say A.I. success is a “10-20-70 problem,” he says: 10% of the effort is designing the right algorithms; 20% is building all the underlying technology to gather the data and run the A.I. system; 70% is getting the organizational structure and processes right.

Few companies have seen big financial gains from A.I. because the structural changes required are so extensive. But Khodabandeh says the fact that 10% of companies have seen large gains from using A.I. shows that it is not impossible.

*What’s a “significant financial benefit,” by the way? The BCG and Sloan authors defined it on a sliding scale: at least $100 million in additional revenue or cost savings for companies with more than $10 billion in annual revenues; at least $20 million in gains for those with yearly revenues between $500 million and $10 billion; at least $10 million for those with revenues between $100 million and $500 million; and at least $5 million for those with less than $100 million in annual revenues.
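
Expressed as code, the scale is a simple lookup (figures in millions of dollars; how the report treats companies sitting exactly on a boundary is our assumption).

```python
def significant_benefit_threshold(annual_revenue_m):
    """Minimum gain in $ millions that the report counts as 'significant',
    given a company's annual revenues in $ millions."""
    if annual_revenue_m > 10_000:   # more than $10 billion in revenues
        return 100
    if annual_revenue_m >= 500:     # $500 million to $10 billion
        return 20
    if annual_revenue_m >= 100:     # $100 million to $500 million
        return 10
    return 5                        # less than $100 million
```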

***

Several months ago for this newsletter, I interviewed Tom Siebel, the outspoken billionaire and CEO of C3.ai. To hear more of Siebel’s sharp takes on the development of artificial intelligence, check out this week’s edition of Fortune’s Leadership Next podcast, hosted by Fortune’s own CEO Alan Murray and senior editor Ellen McGirt. Siebel shares why he thinks A.I. should be regulated, why he won’t do business with China, and why, although C3.ai does work for the U.S. government, it won’t help the Pentagon build autonomous weapons. You can tune in on either Spotify or Apple Podcasts.

With that, here’s the rest of this week’s news in A.I.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Nonprofit drops out of partnership over Big Tech's unwillingness to commit to concrete action. Access Now, a nonprofit dedicated to preserving individuals' digital and human rights, has dropped out of the A.I. ethics consortium Partnership on A.I., writing in an open letter that “while we support dialogue between stakeholders, we did not find that PAI influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis.” Access Now was frustrated in particular that it made little headway in getting PAI to endorse calls for a ban on facial recognition and other biometric technology that can be used for mass surveillance, according to a story in the tech publication The Verge.

Google announces a host of A.I.-powered enhancements to its search tools. The improvements include a better spell-checker and better ranking of the relevancy of not only pages, but also passages of text within a page. The company says the passage-ranking change will improve 7% of search queries across all languages, according to a story in The Next Web. Another A.I.-based feature automatically tags key moments in videos, allowing a user to flick through them like chapters in a book. Google Maps will also now show how busy a particular location is, which Google says is a feature to improve social distancing during the COVID-19 pandemic. And, in the most buzzed-about innovation, it will let a user search for a song just by humming, whistling, or singing a few bars—like Shazam, but even better. My Fortune colleague Danielle Abril has more on the name-that-tune search function here.

Cruise begins testing autonomous cars without safety drivers in San Francisco. The GM-backed self-driving car company has been given approval to operate its vehicles without any driver in the car on the streets of San Francisco, the company announced. Cruise plans to start testing the completely driverless vehicles on roads by the end of the year. Dan Ammann, the company's CEO, said, "While it would be easier to do this in the suburbs, where driving is 30–40 times less complex, our cities are ground zero for the world’s transportation crisis. This is where accidents, pollution, congestion, and lack of accessibility collide. Often quite literally." Cruise's rival Waymo has been offering a completely driverless ride-hailing service to select people in Phoenix, Arizona, since last year, and earlier this month expanded its program to a larger group of initial customers.

Arm debuts another new A.I. chip. U.K. semiconductor design firm Arm, which is in the midst of being acquired by Nvidia, has added to its lineup of computer chips designed to accelerate A.I. applications on "edge" devices such as phones. The new Ethos-U65 is designed to work alongside either Arm's popular Cortex-A processors, the brains found in most of the world's smartphones, or its Cortex-M processors, found in many power-constrained devices such as wearables and smart speakers. The chip is what's known as a micro neural processing unit (microNPU), designed to accelerate the functioning of neural networks. Most NPUs have been large chips that run in data centers, but Arm has begun miniaturizing these designs so A.I. applications can run without the device needing to be in constant communication with a data center. The Ethos-U65 can handle real-time object recognition and facial recognition as well as tasks such as speech recognition, gesture detection, and biometric recognition, according to a company blog post.

EYE ON A.I. TALENT

Data security company Avast has hired Shane McNamee as its chief privacy officer. McNamee had previously held data protection roles in the Irish government's finance department and with the Data Protection Commission of Ireland, the data privacy regulator of record for many of the world's largest technology companies in Europe.

Data-driven consulting company Kantar has appointed Alexis Nasard as its chief executive officer. Nasard, who assumes the job on December 30, was previously CEO of shoe company Bata. Before that, he held senior executive roles at Heineken and Procter & Gamble.

Private equity and venture capital firm EQT has named Marc Brown as head of a new EQT Growth unit that will invest in fast-growing companies. Brown was previously vice president for corporate development at Microsoft, where he oversaw the tech giant's M&A activity, including more than 185 acquisitions and 80 strategic equity investments. 

EYE ON A.I. RESEARCH

European Space Agency uses A.I. on a small satellite to detect clouds. The ESA says it has launched the first-ever satellite that can perform A.I. inference in space without the need to transmit data back to a server on Earth for processing. The space agency worked with companies including Intel, Ubotica Technologies, and camera-maker cosine to develop a small cube satellite, about the size of a desktop computer, called PhiSat-1. The satellite, which was launched aboard a rocket on September 3 from French Guiana, is now orbiting Earth at an altitude of 329 miles. It photographs the planet and sends images back to the ground, and an onboard A.I. system has been trained to identify when a photographed area is obscured by clouds and to discard those images before transmission. Doing this saves about a third of the satellite's precious bandwidth and also helps preserve its power for longer, the ESA says. Here's more on the satellite from Intel, which built the computer chips that perform the A.I. inference.
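
The bandwidth-saving logic is conceptually simple. Here's a hypothetical sketch of the filtering step (the real PhiSat-1 software and its cloud-detection network are more involved, and the cutoff value below is an assumption): classify each captured image onboard and queue only the sufficiently clear ones for downlink.

```python
def select_for_downlink(images, cloud_probability, threshold=0.7):
    """Keep only images the onboard model judges clear enough to send.

    `cloud_probability` stands in for PhiSat-1's trained classifier: it
    maps an image to the estimated fraction obscured by cloud. Discarding
    cloudy images onboard is what saves downlink bandwidth and power.
    """
    return [img for img in images if cloud_probability(img) < threshold]
```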

FORTUNE ON A.I.

The polls are wrong. The U.S. presidential race is a near dead heat, this A.I. ‘sentiment analysis’ tool says—by Jeremy Kahn

This ace engineer powered Amazon through the COVID crisis—by Aaron Pressman

Facebook A.I. researchers push for a breakthrough in renewable energy storage—by Jeremy Kahn

GV, formerly known as Google Ventures, elevates its first Black female investing partner—by Lucinda Shen

BRAIN FOOD

A few weeks ago, the parties in the U.K. legal case Tyndaris v VWM Ltd. reached an out-of-court settlement. The settlement disappointed a lot of legal eagles who had been watching the case, because Tyndaris had been poised to set an important precedent around who, exactly, is liable when an A.I. system doesn't perform as expected and someone is harmed as a result.

The hedge fund Tyndaris said it used an A.I. system called K1 to discern shifting market sentiment from earnings reports and social media and then automatically place trades, with no human intervention. The asset management firm VWM invested in Tyndaris's A.I.-driven strategy. VWM's A.I.-managed account then suffered a large trading loss—about $22 million—and VWM refused to pay Tyndaris's fees. Tyndaris sued for its missing fees and VWM counter-sued for its losses, saying Tyndaris had misrepresented the A.I. system's capabilities.

Tom Whittaker, a lawyer with the firm Burges Salmon who has been tracking global jurisprudence around A.I., says that the law has not caught up with the ways in which artificial intelligence differs from traditional software. The closest a decision has come to addressing the issue is a case from Singapore, B2C2 v. Quoine, over the misfire of an algorithmic trading system for cryptocurrency. The court ruled that algorithms "only do what they have been programmed to do. They have no mind of their own. They operate when called upon to do so in a pre-ordained manner."

B2C2 seemed to rule out the idea, which some have proposed, that A.I. itself can bear legal responsibility. On this point, Whittaker says, a growing body of regulatory and legal rulings from around the world (including the U.S. Patent Office rulings mentioned here before) say that A.I. does not have, and probably never will have, legal personhood.

But, as Whittaker points out, it is not clear that most A.I. systems actually behave in the way the Singaporean court says. With A.I., it may be very difficult for even the creator of such a system to know exactly how it will behave. Whittaker notes that one of the judges in the B2C2 case dissented, saying that it was unreasonable to assume the programmer of an A.I. system could anticipate every possible market circumstance that might occur and have tested the system on it. 

For all the talk about taking humans out of the loop through the use of A.I., from a legal standpoint, Whittaker says liability rests firmly with humans and human judgments. "Even if you say there is no human in the loop, they can bring up the argument that someone must have done something at some point to set the system up in a certain way, and that liability rests with that person," he says. He also says that the law has yet to make a determination about which party—the company that builds an A.I. system or the company that deploys it—should be liable when A.I. goes wrong. And the law has not yet weighed in on when a harmed party would be able to claim the A.I. developer had been negligent. 

"The nature of A.I. is novel and complex and there may need to be changes to the law," Whittaker says. On that, even most lawyers can probably agree.

