With A.I., business leaders must prioritize safety over speed
Two years ago, before Apple’s launch of the Apple Card, there was much discussion about how the no-fee credit card would enable the tech giant to storm into the financial services business. When people discuss the Apple Card today, however, it’s in part because of the glitches in Apple’s artificial intelligence algorithms that determine would-be cardholders’ credit limits.
In November 2019, a Danish entrepreneur tweeted that although he and his wife had both applied for the Apple Card with the same financial information, he was awarded a credit limit 20 times higher than hers—even though, as he admitted, his wife had a higher credit score. Adding fuel to the fire, Apple cofounder Steve Wozniak claimed that the same thing had happened to his wife. The card had launched in August 2019, and an estimated 3.1 million people in the U.S. held an Apple Card at the beginning of 2020, so this issue may well have affected tens of thousands of women. A spate of complaints resulted in a New York Department of Financial Services investigation, which recently cleared Apple of gender-based discrimination, but only after the digital giant quietly raised wives’ credit limits to match those of their husbands.
As business sets about deploying A.I. at scale, the focus is increasingly shifting from the use of the technology to create and capture value to the inherent risks that A.I.-based systems entail. Watchdog bodies such as the Artificial Intelligence Incident Database have already documented hundreds of cases of A.I.-related complaints, ranging from the questionable scoring of students’ exams to the inappropriate use of algorithms in recruiting and the differential treatment of patients by health care systems. As a result, companies will soon have to comply with regulations in several countries that aim to ensure that A.I.-based systems are trustworthy, safe, robust, and fair. Once again, the European Union is leading the way, outlining a framework last year in its White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, as well as its proposal for a legal framework in April 2021.
Companies must learn to tackle A.I. risks not only because it will be a regulatory requirement, but because stakeholders will expect them to do so. As many as 60% of executives reported that their organizations decided against working with A.I. service providers last year due to responsibility-related concerns, according to a recent Economist Intelligence Unit study. To manage A.I. effectively, businesses must grasp the implications of regulations and social expectations for its use while keeping in mind the technology’s unique characteristics, which we’ve discussed at length in a recent Harvard Business Review article. Indeed, figuring out how to balance the rewards of using A.I. against its risks could well prove to be a new, and sustainable, source of competitive advantage.
To learn, or not to learn?
At the outset, consider A.I.’s much-vaunted ability to continuously become better by learning from the data it studies—a characteristic that makes A.I. a unique technology. The virtuous cycle can lead to A.I. behavior that cannot always be anticipated, as the example of Microsoft’s chatbot, Tay, showed in 2016, or to outcomes that may raise concerns of fairness, as Amazon’s use of A.I. to screen résumés vividly demonstrated. An A.I. system can make one decision one day, and, learning from the data it is subsequently fed, could arrive at a vastly different decision the very next day. That’s why U.S. regulators, such as the Food and Drug Administration, approve only algorithms that don’t evolve during their use.
Similarly, companies will need to decide whether to allow their A.I. systems to learn in real time. In some cases, disallowing continuous learning will mean forgoing one of A.I.’s key benefits: its ability to perform better over time. In others, businesses will need to weigh risk levels against algorithmic accuracy, which suffers when continuous learning is switched off.
Ever-evolving A.I. systems also generate operational complexities, because the same A.I.-embedded product or service will work differently in each country. These operational challenges will be compounded by subtle variations in regulations and social expectations from nation to nation. Companies will have to train their A.I. systems on local data and manage them according to local regulations. That is bound to limit A.I.’s ability to scale.
In addition, companies will have to treat their A.I. as a portfolio of applications that needs careful management. They will have to develop sentinel processes to monitor the portfolio, continuously ensuring its fair, safe, and robust functioning. Organizations will have to frequently test the output of A.I. systems, which will add to costs. For example, a 2017 New York City law mandated the creation of a task force to provide recommendations on how information on automated decision systems should be shared with the public, and how public agencies should address instances where people could be harmed by automated decision systems.
Taking responsibility for A.I.’s decisions
Another key differentiator is A.I.’s ability to make complex decisions, such as which ads to serve up online to whom or whether to grant facial recognition–based access. Responsibility comes hand in hand with the ability to make decisions. So far, companies and other organizations acting according to the principles of Responsible A.I. have focused on ensuring that A.I.-based decisions treat all stakeholders—consumers, employees, and shareholders—fairly. If A.I. algorithms treat people unfairly, companies will face legal and reputational risks, as Apple did. They need to understand the possible impact that their algorithms can have on humans, and even choose not to use A.I. in some contexts. These concerns will be exacerbated as A.I. systems scale: an algorithm may be fair on average but still unfair in specific geographical contexts, because local consumer behavior and attitudes may not correspond to the average and thus may not be reflected in the algorithm’s training.
Companies have no option but to develop processes, roles, and functions to ensure that A.I. systems are fair and responsible. Some, like the Federal Home Loan Mortgage Corporation (Freddie Mac), have already taken small steps in the right direction, appointing A.I. ethics officers and setting up A.I. governance structures and processes—such as traceability protocols and diversity training—to tackle this challenge. In addition, the pioneers are setting up auditing processes and developing monitoring tools to ensure the fair functioning of A.I. systems.
Accountability requires companies to explain why their algorithms make decisions the way they do. This idea of “explainability” will force companies to make tradeoffs. Easier-to-explain algorithms are usually less accurate than so-called black box algorithms, so if companies use only the former, it will limit the A.I.’s abilities and quality. Because executives will have to make tradeoffs between explainability and accuracy, it’s bound to create an unequal playing field across the globe since market regulations and social expectations will differ across nations.
By way of illustration: Ant Financial combines thousands of inputs from data sources in the Alibaba ecosystem to develop credit ratings for borrowers in China. The process makes it difficult for anyone, even regulators, to understand how the algorithms make decisions. While Ant’s systems allow the company to approve loans within minutes, it may not be able to use the same system outside China, especially in economies whose regulations and expectations demand a higher degree of explainability. Consequently, A.I. regulations will limit the markets that A.I.-driven companies can target, which has major strategy implications. In fact, a few companies, such as game developer Uber Entertainment, chose to stay away from the EU after the General Data Protection Regulation took effect in 2018.
As more governments unveil rules about the use of A.I., companies will need to consider some key questions before deploying A.I. They must ask themselves:
* To what extent should we differentiate our product or service offering to follow local differences in A.I. regulations and market expectations?
* Should we still serve all these markets worldwide after accounting for the new regulatory landscape?
* If decentralizing A.I. operations is essential, should we set up a central organization to lead, or at least connect, the sharing of data, algorithms, insights, and best practices?
* Given A.I. regulations and market expectations, what are the new roles and organizational capabilities that we will need to ensure that our strategy and execution are aligned? How will we hire, or reskill, talent to acquire these capabilities?
* Is our strategy horizon appropriate to combine the short-run responses to a constantly changing technology and regulatory environment with our long-term A.I. vision?
As the use of A.I. in companies’ internal and external processes becomes more pervasive, and the expectations of stakeholders about fair, safe, and trustworthy A.I. rise, companies are bound to run headlong into man vs. machine clashes. The sooner CEOs come to grips with the value-risk tradeoffs of using A.I.-driven systems, the better they will be able to cope with both regulations and expectations in an A.I.-driven world.
François Candelon (Candelon.Francois@bcg.com) is a managing director and senior partner at the Boston Consulting Group and the global director of the BCG Henderson Institute. Theodoros Evgeniou (firstname.lastname@example.org) is a professor of decision sciences and technology management at INSEAD working on A.I. and data analytics for business.