Bridging the gap between execs and engineers is the secret to avoiding A.I. nightmares, Peak CEO says

Richard Potter, CEO of A.I. software company Peak, says a disconnect between corporate boards and senior executives and the engineers tasked with building A.I. solutions often accounts for the failure of those projects.
Photo courtesy of Peak

In a recent Eye on A.I. newsletter, I discussed the abysmal success rate of many corporate A.I. efforts. The vast majority of these attempts fail to deliver the business impact companies expect. In that newsletter, Arijit Sengupta of A.I. company Aible argued that many data scientists and engineers tasked with delivering A.I. projects don’t fully understand the business problem the technology is trying to solve.

But Richard Potter, the chief executive officer of Peak, an A.I. software company based in Manchester, England, thinks he knows another reason many A.I. projects fail. And this time the fault is not with the data scientists and engineers. Instead, he argues, there’s often a disconnect between the expectations of top executives and corporate boards and what machine learning engineers can actually accomplish with the data a company has available.

“Often, line-of-business executives think the technology can do more than it really can,” Potter tells me. He says he has often been in meetings where an executive, discussing a thorny business challenge, will say, “We’ll just throw the question at the A.I. and get the A.I. to do that.” Potter says he always responds, “How can it do that? And what exactly is the A.I. you are talking about?” That usually stumps them, he says.

There is a fundamental misunderstanding among a lot of executives about what A.I. is and how it works, according to Potter. For instance, many executives believe that A.I. systems, once properly trained, are infallible. “Algorithms won’t get it right 100% of the time,” he says. “Then again, your team doesn’t get it right all the time either but you probably don’t measure their performance the same way and hold them to the same standard.”

He says one approach many businesses have had success with is using A.I. systems that can report a measure of their own confidence in the predictions they make. Where the system is less confident, that’s exactly where a human should be reviewing the A.I.’s predictions, he says. (Especially dangerous, of course, are situations in which an A.I. system is highly confident but nonetheless wrong, and Potter says businesses should stay vigilant for those.)
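In code, the gating logic Potter describes is simple. Here’s a minimal sketch, assuming a model that emits a self-reported confidence score; the threshold, data shape, and routing labels are hypothetical illustrations, not Peak’s actual implementation.

```python
# Minimal sketch of confidence-gated review: act automatically on confident
# predictions, queue the rest for a human. The threshold and the Prediction
# shape are assumptions for illustration, not any vendor's real API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # hypothetical; tune against validation data

@dataclass
class Prediction:
    label: str         # e.g. "approve_reorder"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(pred: Prediction) -> str:
    """Return who should act on this prediction: the system or a human."""
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"          # confident enough to act without review
    return "human_review"      # low confidence: send to a reviewer

# Caveat Potter raises: models can be confidently wrong, so a threshold
# alone is not a safety guarantee; spot-check high-confidence output too.
for p in (Prediction("approve_reorder", 0.97), Prediction("approve_reorder", 0.62)):
    print(p.confidence, "->", route(p))
```

The routing itself is trivial; the real engineering work is in making the confidence score trustworthy in the first place, which is why Potter’s warning about confidently wrong predictions matters.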

Much of the value of A.I. for companies, according to Potter, is not simply accuracy but consistency. A.I. systems can make the same decision with equal precision around the clock, all year long, without a break. Humans can’t do that. And it’s this consistency, even more than absolute performance, that makes possible things such as real-time dynamic pricing, as opposed to a human merchandising team that can only update prices every week or every month. “That can lead to higher margins and higher inventory efficiency,” he says.

The key to bridging the gap between management’s expectations of A.I. and what those creating A.I. systems know to be possible, in Potter’s view, is better education. “It should definitely be part of MBA syllabuses,” he says of A.I. This is starting to happen. But in the meantime, he says, companies such as Peak, which help businesses run A.I. systems, have a duty to educate people. And, he says, many CEOs are now learning from talking to other CEOs about what works and what doesn’t. (By the way, if you’re an executive who wants to participate in this kind of sharing of experience, consider applying to attend Fortune’s Brainstorm A.I. conference in San Francisco this coming December.)

While most executives will never understand A.I. with the code-level expertise of an engineer, Potter says that achieving a basic understanding of how A.I. systems work, and being able to speak the same language as the engineers who build those systems, has important benefits for companies. That’s because he frequently encounters another disconnect: engineers who are so deep into the weeds of the tech that they can’t understand the business purpose of the systems they are trying to build and can’t think creatively about how A.I. can be applied to drive value.

One of the most important skills for anyone working with A.I., Potter says, is not simply understanding what data the business has and how it can be used today, but imagining what data the business might be able to gather in the future, and how that data can be used to create strategic advantage for the company.

And with that, here’s this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Capitol Records forced to drop its A.I.-generated rapper. Well, that was fast. In last week's Eye on A.I., my colleague Alexei Oreskovic detailed Capitol Records' attempt to spin some marketing gold from signing a rapper called FN Meka, a project of advertising firm Factory New. Meka's songs were all composed by A.I. software, and, while they were voiced by a real human singer, the rapper's only visual presence was a digital avatar. Capitol got so much backlash—including claims that FN Meka's persona was based on racist stereotypes of Black musicians and criticism that Meka's A.I. composer had been trained on actual rap songs without giving the creators of that music any compensation for use of their intellectual property—that the music label had to drop the rapper and cancel its deal with Factory New in less than a week. The story may be a warning about the ethical and legal thicket that many creative industries are diving straight into as they rush to embrace A.I. as a content-generating tool. My Fortune colleague Alice Hearing has more on Capitol's decision to drop Meka here.

World Economic Forum cites lack of gender diversity among those building A.I. The Swiss economic group published a report raising alarm at the lack of women among the teams building A.I. software. The WEF said that women made up only 22% of the A.I. workforce in industry, 14% of the authors on machine learning research papers, and 18% of the authors whose papers are presented at top A.I. conferences. The group said that this lack of representation makes it harder to detect biases in data and A.I. algorithms that can exacerbate existing gender inequality. It called on governments to commit more to recruiting women into science and technology fields generally, and machine learning specifically, and also said they should do more to close the pay gap between men and women working in A.I.    

French tax authorities use A.I. to find undeclared swimming pools. The French government has said it collected more than 10 million euros in additional tax revenue last year after using A.I. software that can analyze aerial imagery to spot swimming pools, the BBC reports. Having a swimming pool is assumed to increase the value of a property, leading to higher real estate taxes, and many people try to avoid mentioning it to the taxman. An average pool is taxed at about 200 euros per year, according to reports in the French media. The government is also trialing the technology to spot large home extensions for the same reason. 

EYE ON A.I. TALENT

Monumental Sports & Entertainment, the parent company of the Washington Wizards NBA team, has hired Charles Myers as its chief technology officer, according to a story in SportTechie, a website that is part of trade publication Sports Business Journal. Myers was previously vice president for media development and engineering at satellite radio company Sirius XM.

Cerebras Systems, maker of A.I.-specific computer chips, has hired Lakshmi Ramachandran as head of engineering and India site lead for its new India office in Bangalore, the company said in a press release. Ramachandran was previously senior director at Intel's Data Center and AI group.

EYE ON A.I. RESEARCH

Can a neural network be taught to learn symbols? Neural networks have been responsible for most of the recent advances in A.I., and many of those advances have come in the realm of perception: identifying and categorizing things. But one of the biggest limitations of neural network-based A.I. is that it cannot reliably perform some symbolic manipulation tasks (including elements of mathematics, grammatical parsing, and common-sense reasoning) as well as systems that have been given a set of hard-coded rules can. Many A.I. researchers have proposed remedying this with a hybrid A.I. that combines elements of neural networks with elements of that older, rule-based form of A.I., known as symbolic A.I. Now a group of researchers from the University of Trento and the Bruno Kessler Institute in Italy have proposed a clever take on this kind of hybrid system, in which a neural network is used both for perception (i.e. categorizing data into discrete symbolic groups) and, at the same time, to learn the rules for manipulating those symbols. While the researchers only tested the system on a relatively simple character identification task, approaches like this could have broad applications in commercial A.I. systems, especially because the way the A.I. makes its decisions once it has learned the symbolic manipulation rules is inherently more interpretable than the workings of a pure neural network system. You can read their research paper on the non-peer-reviewed research repository arxiv.org.
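For readers who want a concrete feel for the idea, here is a heavily simplified sketch of the general neurosymbolic pattern: one network maps raw inputs to discrete symbols while a second network learns how those symbols combine, with only the final answer supervised. The digit-addition task, network sizes, and PyTorch framing below are illustrative assumptions, not the paper's actual architecture.

```python
# Toy neurosymbolic sketch (illustrative only, not the paper's method):
# a perception net maps images to a distribution over symbols while a rule
# net learns an unknown operation over those symbols -- here, addition of
# two handwritten digits -- trained jointly from the final answer alone.
import torch
import torch.nn as nn

N_SYMBOLS = 10  # assumed symbol vocabulary: digit classes 0-9

class Perception(nn.Module):
    """Raw input (flattened 28x28 image) -> soft distribution over symbols."""
    def __init__(self, in_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, N_SYMBOLS)
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

class RuleLearner(nn.Module):
    """Two symbol distributions -> logits over possible results (sums 0-18)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_SYMBOLS, 64), nn.ReLU(),
            nn.Linear(64, 2 * N_SYMBOLS - 1)
        )

    def forward(self, s1, s2):
        return self.net(torch.cat([s1, s2], dim=-1))

perception, rules = Perception(), RuleLearner()
opt = torch.optim.Adam(
    list(perception.parameters()) + list(rules.parameters()), lr=1e-3
)

def train_step(img1, img2, target_sum):
    """One joint update: only the sum is labeled, so the perception net must
    discover useful symbols and the rule net how to combine them, together."""
    loss = nn.functional.cross_entropy(
        rules(perception(img1), perception(img2)), target_sum
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test with random tensors standing in for real character images.
x1, x2 = torch.randn(32, 784), torch.randn(32, 784)
y = torch.randint(0, 2 * N_SYMBOLS - 1, (32,))
print(train_step(x1, x2, y))
```

The interpretability payoff the researchers point to shows up here: once trained, the rule net operates over explicit symbol categories, which is far easier to inspect than a single monolithic end-to-end network.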

FORTUNE ON A.I.

The Great Resignation forced U.S. companies to order a record number of robots—by Tristan Bove

Auto insurance watchdog warns that one of the most important features of self-driving cars can fail to spot pedestrians at night—by Tristan Bove

California winemakers are using A.I. to combat climate change challenges—by Stephanie Cain

At financial software giant Intuit, A.I. is becoming ‘foundational’ to how the company operates—by Jeremy Kahn

BRAINFOOD

How close are we to AGI? That's artificial general intelligence, the kind of A.I. you know from science fiction films. Although there are debates about how exactly to define AGI, one good working definition is A.I. that can learn to perform a wide range of disparate cognitive tasks as well as or better than most humans. So far, we're not there. The big question, though, is how close are we? For a long time, there were only a handful of commercial companies dedicated to the creation of AGI, with DeepMind, which is owned by Alphabet, and OpenAI, which has a close partnership with Microsoft but remains an independent "capped-profit" company, being the two most notable. But in the past six months, a whole host of startups have emerged that are essentially dedicated to creating "universal digital assistants" or other kinds of software that sound a bit like AGI. And these companies are getting real venture capital backing. The most recent example is Keen Technologies, a startup founded by longtime game developer and former Oculus CTO John Carmack, which received an initial $20 million from Sequoia Capital and several prominent tech angels.

Meanwhile, Ajeya Cotra, a senior research analyst at the charity Open Philanthropy, which funds a lot of projects around "A.I. safety" (how we might prevent A.I. from either purposefully or unintentionally harming humanity), recently caught many people's attention by moving up her own predictions for when AGI might arrive. Cotra wrote an influential report on possible timelines for AGI development in September 2020. She now says there is a 35% probability of AGI arriving by 2036, three times more likely than she predicted just two years ago. And her median prediction is now 2040, a decade earlier than her previous median.

But while almost everyone acknowledges that A.I. is becoming more capable in some domains (such as image generation, sound generation, and some natural language processing tasks), many look at what today's A.I. still can't do (learn from very few examples, perform as well as most humans on common-sense reasoning tasks, learn to do something just from reading a set of instructions, or drive a car safely in most weather and lighting conditions after just a handful of lessons) and think AGI may still be much further off. What do you think?
