
Could “Mindful A.I.” be the key to successful A.I.?

September 22, 2020, 2:26 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Back in April, when the pandemic was at its peak in many parts of the world, I spoke to Ahmer Inam, the chief A.I. officer at Pactera Edge, a technology consulting firm in Redmond, Washington. At the time, Inam was focused on how the pandemic was wreaking havoc with A.I. models trained from historical data.

Last week, I caught up with Inam again. Lately, he’s been thinking a lot about why A.I. projects so often fail, especially in large organizations. To Inam, the answer to this problem—and to many others surrounding the technology—is something called “Mindful A.I.”

“Being mindful is about being intentional,” Inam says. “Mindful A.I. is about being aware and purposeful about the intention of, and emotions we hope to evoke through, an artificially intelligent experience.”

OK, I admit that when he said that, I thought, it sounds kinda out there, like maybe Inam should lay off the edibles for a week or two—and Mindful A.I. has the ring of a gimmicky catchphrase. But the more Inam explained what he meant, the more I began to think he was on to something. (And just to be clear, Inam did not coin the term Mindful A.I. Credit should primarily go to Orvetta Sampson, the principal creative director at Microsoft, and De Kai, a professor at the University of California at Berkeley.)

Inam is arguing for a first-principles approach to A.I. He says that too often organizations go wrong because they adopt A.I. for all the wrong reasons: because the C-suite wrongly believes it’s some sort of technological silver bullet that will fix a fundamental problem in the business, or because the company is desperate to cut costs, or because they’ve heard competitors are using A.I. and they are afraid of being left behind. None of these are, in and of themselves, good reasons to adopt the technology, Inam says.

Instead, according to Inam, three fundamental pillars should undergird any use of A.I.

  • First, it should be “human-centric.” That means thinking hard about what human challenge the technology is meant to be solving and also thinking very hard about what the impact of the technology will be, both on those who will use it—for instance, the company’s employees—and those who will be affected by the output of any software, such as customers.
  • Second, A.I. must be trustworthy. This pillar encompasses ideas like explainability and interpretability—but it goes further, looking at whether all stakeholders in a business are going to believe that the system is arriving at good outputs.
  • Third, A.I. must be ethical. This means scrutinizing where the data used to train an A.I. system comes from and what biases exist in that data. But it also means thinking hard about how that technology will be used: even a perfect facial recognition algorithm, for instance, might not be ethical if it is going to be used to reinforce a biased policing strategy. “It means being mindful and aware of our own human histories and biases that are intended or unintended,” Inam says.

A mindful approach to A.I. tends to lead businesses away from adopting off-the-shelf solutions and pre-trained A.I. models that many technology providers offer. With pre-trained A.I. models, it’s simply too difficult to get enough insight into critical elements of such systems—exactly what data was used, where it came from, and what biases or ethical issues it might present. Just as important, it can be difficult for a business to find out exactly where and how that A.I. model might fail.

My favorite example of this is IBM’s “Diversity in Faces” dataset. The intention was a good one: Too many public datasets of faces being used to build facial-recognition systems didn’t have enough images of Black or Latino individuals. And too often the annotations found in these datasets can reinforce racial and gender stereotypes. In an effort to solve this problem, in January 2019, IBM released an open-source dataset of 1 million human faces that was supposed to be far more diverse and to have much less problematic labels.

All sounds good, right? What company wouldn’t want to use this more diverse dataset to train its facial-recognition system? Well, there was just one problem: IBM had created the dataset by scraping images from people’s Flickr accounts without their permission. So users who blindly adopted the new dataset were unwittingly trading one A.I. ethics problem for another.

Another consequence of Inam’s three pillars is that A.I. projects can’t be rushed. Running a human-centric design process and thinking through all the potential issues around trustworthiness and ethics takes time. But the good news, Inam says, is that the resulting system is far more likely to actually meet its goals than one that is sped into production.

To meet all three pillars, Inam says it is essential to involve people with diverse perspectives, both in terms of race, gender and personal backgrounds, but also in terms of roles within the organization. “It has to be an interdisciplinary group of people,” he says.

Too often, the teams building A.I. software sorely lack such diversity. Instead, engineering departments are simply told by management to build an A.I. tool that fulfills some business purpose, with little input during the conceptualization and testing phases from other parts of the company. Without diverse teams, it can be hard to figure out what questions to ask—whether on algorithmic bias or legal and regulatory issues—let alone whether you’ve got good answers.

As Inam was speaking, I was reminded of that old adage, “War is too important to be left to the generals.” Well, it turns out, A.I. is too important to be left to the engineers.

And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn


JOIN US: How the Pandemic is Revealing the Power of A.I.

As companies and governments race to adjust to a world transformed by COVID-19, human-A.I. collaboration is playing an increasingly critical role. Fortune deputy editor Brian O’Keefe will moderate an in-depth discussion exploring the critical role of collaboration during crisis, how ethical human-A.I. collaborations promise a better future, and how rapidly evolving A.I. technologies can accelerate business recovery and safe, responsible community re-openings. Panelists include:

  • Tatsiana Maskalevich, Director of Data Science, Stitch Fix
  • Laura Major, Chief Technology Officer, Motional 
  • Dr. Joelle Pineau, Co-managing Director, Facebook A.I. Research; Associate Professor, Computer Science, McGill University
  • Lan Guan, Managing Director, Accenture Global Industry Applied Intelligence Network

September 23, 2020 at 11:00 AM – 12:00 PM EDT. REGISTER HERE.


Portland passes one of the strictest facial-recognition laws yet. The Oregon city has passed perhaps the most stringent restrictions of any U.S. city on the use of the controversial technology. Cameras that use facial recognition will not be allowed in any public or private spaces, including retail outlets, banks and train stations. The new regulations came in the form of two separate city ordinances passed in early September, according to a story in Wired.

A tool powered by OpenAI's GPT-3 can write emails. Startup OthersideAI, based in Melville, Long Island, has launched a product powered by OpenAI's GPT-3 software that promises to compose complete emails based on just a few bullet points, the company said last week. It is one of several business applications being powered by the huge language model, which OpenAI unveiled this past summer. The software can compose many paragraphs of coherent text based on the content and style of a small prompt written by a human.  

A former chess champion is using DeepMind's AlphaZero to make the game more exciting. Vladimir Kramnik, a former world chess champion, has used AlphaZero, the powerful deep-learning algorithm created by London-based DeepMind, to explore variations on the traditional game of chess, including versions in which pawns can move more squares and in which castling is not allowed. AlphaZero can learn to play a number of turn-based strategy board games—including go, chess, checkers and shogi—at superhuman levels. Kramnik told Wired that playing chess variations against AlphaZero made playing chess exciting for him once again. “After three moves you simply don’t know what to do,” he says. “It's a nice feeling, like you're a child.” 

An autonomous ship will sail the Atlantic to commemorate the Mayflower's voyage. The Mayflower Autonomous Ship, a solar-powered, 50-foot-long unmanned trimaran that has been built by marine research charity Promare with help from IBM and the University of Plymouth, was officially launched last week in Plymouth, England, and is now undergoing sea trials. If these tests are successful, the ship, which is designed to be completely autonomous, will sail across the Atlantic next year. The ship was to have made the crossing this year to mark the 400th anniversary of the original Mayflower's journey, but the pandemic delayed the project. 

Indy 500 plans A.I. car race next year. Self-driving race cars created by 31 university research teams will line up on the grid of the Indianapolis Motor Speedway next year to compete for more than $1 million in prize money in the first Indy Autonomous Challenge, my Fortune colleague Jonathan Vanian reports. To win, the Dallara IL-15 race cars, modified with sensors and software to allow them to be fully autonomous, will have to complete 20 laps—which equates to a little less than 50 miles in distance—and cross the finish line first in 25 minutes or less.


Uber has hired Sukumar Rathnam to be the company's new chief technology officer, The Information reports. Rathnam has been a vice president at Amazon for the past nine years.

Huawei has appointed Li Shi head of its Cloud and AI Business Group in the Middle East, the company announced. Shi, a 15-year veteran of the Chinese tech company, was most recently CEO of Huawei's United Arab Emirates business.


Hardware matters. As Nvidia's deal to buy semiconductor design firm Arm shows, when it comes to A.I., hardware matters.

But too often, A.I. researchers don't pay enough attention to the role hardware plays in fundamental advances in the field, writes Sara Hooker, a researcher at Google Brain. In a recently published paper, Hooker looks back on the history of computing and argues that ideas often win out not because they are necessarily better than other research avenues being pursued, but because they are better suited to the current prevailing hardware paradigm.

She coins the term "hardware lottery" to encapsulate this idea. Her key datapoint is that all the algorithmic ideas that make deep neural networks work had already been invented by the late 1980s, but that the prevailing computer infrastructure of the time—the central processing unit (CPU)—was poorly suited to running them. Instead, CPUs were very good at following a linear set of instructions, which is one reason symbolic A.I. approaches and expert systems seemed to outperform deep learning approaches at the time. It would take the advent of graphics processing units (GPUs), which could perform many operations in parallel, and some clever tweaking of neural networks to figure out how best to run them on GPUs, to produce a sudden leap in neural networks' performance.

But at least both CPUs and GPUs were more or less general computing architectures. The problem now, Hooker writes, is that the hardware landscape is rapidly fragmenting, with many different kinds of chips being designed for specific purposes. Many of these specialized chips are designed to run existing deep-learning algorithms well, but they may not be suitable for other intelligent systems, such as spiking neural networks or other forms of brain-inspired software. Hooker says that maybe, in the future, A.I. systems will themselves be able to recommend which hardware configuration is best to use. 


What’s your biological age? A new app promises to reveal it—and help you slow the aging process—by Jeremy Kahn

How to make A.I. smarter—by Jonathan Vanian

One country is now paying citizens to exercise with their Apple Watch—by Naomi Xu Elegant

How data helped keep Red Bull’s F1 team on track during the pandemic—by Jeremy Kahn

Democracy depends on Washington improving its tech—by Adam Lashinsky


One of the strange things about A.I. software is that while it often excels at tasks that humans find difficult, it frequently stumbles at those which people find fairly simple.

Take reading a spreadsheet. As my colleague Jonathan Vanian reports in the October issue of Fortune, neural networks, the brain-inspired software technique that undergirds today's most cutting-edge A.I., are really bad at processing tabular data. And for businesses hoping to capitalize on A.I.'s potential, that's a huge problem. As Jonathan writes, most of the data that businesses really care about—from sales forecasts to customer records and requisition orders—is structured data, usually held in tables and spreadsheets.

Peter Bailis, a Stanford professor and CEO of a Silicon Valley startup called Sisu Data that builds analytical tools for businesses, told Jonathan:

"The deep networks that are so cool can really do amazing things for our cars and for understanding sentiment from tweets online, but they don’t help us with understanding things like risk or customer satisfaction if our data is stored in tables.

Jonathan notes that advances in natural language processing, in particular programs that learn word embeddings, are one method people are using to overcome these limitations. He's got some interesting examples from companies like Genentech, Goldman Sachs and Instacart. Check out his story here.
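For readers curious about the mechanics, here is a minimal sketch of the embedding idea—my own illustration, not code from any of the companies mentioned. The category names and vector size are made up for the example; in a real system the table of vectors would be learned during training rather than randomly initialized:

```python
import numpy as np

# A hypothetical categorical column from a spreadsheet.
categories = ["electronics", "apparel", "grocery", "apparel"]

# Map each distinct category to an integer index, preserving first-seen order.
vocab = {c: i for i, c in enumerate(dict.fromkeys(categories))}

# An embedding table: one 4-dimensional vector per distinct category.
# In practice these vectors are tuned by gradient descent so that similar
# categories end up with similar vectors; here they are just randomly
# initialized to show the lookup mechanics.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 4))

# Replace each cell of the column with its dense vector—the form a
# neural network can actually digest.
dense_column = np.stack([embedding_table[vocab[c]] for c in categories])

print(dense_column.shape)
```

Note that the two "apparel" rows map to the identical vector, which is the point: the network learns one representation per category and reuses it everywhere that value appears.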