
How IBM is preparing for a new era of A.I. ethics

December 13, 2021, 4:00 PM UTC

For the past six years, Francesca Rossi has led the development of an ethical framework for IBM’s artificial intelligence technology—but she doesn’t like to use the term “ethical A.I.” Sitting at a well-appointed desk in her home office in Mount Kisco, N.Y., a pink orchid bowing over her shoulder, the research scientist explains the term’s limitations. 

“Technology is not ethical or unethical, it’s the whole ecosystem around it,” she says, referring to the ethics guiding its multiple stakeholders, from researchers and developers to economists, policymakers, and consumers. “The goal is obvious—to take the best out of A.I., to make it as beneficial as possible, and to avoid the negative impacts.”

As companies around the world expand their use of A.I.—more than half of companies have accelerated their A.I. adoption plans—they are taking a careful look at ethics and responsible innovation. And global spending on A.I. systems is expected to more than double, from $85.3 billion in 2021 to over $204 billion in 2025, according to market research firm IDC.

When Rossi joined IBM in 2015 as its A.I. ethics global leader, she gathered 40 colleagues to start the process. The company eventually established an internal A.I. ethics board to guide the ethical development and deployment of A.I. systems, then trained IBM’s more than 345,000 employees, working in over 175 countries, in “ethics by design,” a methodology governed by several principles and values.

Rossi says that IBM’s researchers built their algorithms guided by these principles of trust and worked with a diverse array of clients and partners around the world—including the UN Environment Programme, Citibank, the Urban Institute, California’s Sonoma County, Lufthansa, Allianz, and L’Oréal. Its products include IBM Watson Studio, which improves oversight of and compliance with ethical A.I. standards, and IBM Cloud Pak for Data, which helps monitor and manage models so they operate as trusted A.I.

IBM is well positioned to adopt ethical guidelines in its development of A.I., given its long history as a technology company and its decision in 2003 to redefine its company values, prioritizing trust and personal responsibility in all its relationships. In the nearly two decades since, it has refined those values, based upon the changing expectations of society and the increasing capabilities of technology, into pillars that guide the development of what it calls trustworthy A.I.—privacy, fairness, explainability, robustness, and transparency.

“Of course there are other [pillars] that are important as well, for example, the issue of accountability; that you shall have human agency; the impact of A.I. on the workforce; as well as the more general issues of a technology that has a significant impact on society because it is so fast and so pervasive,” says Rossi. 

Francesca Rossi, IBM’s A.I. ethics global leader.
Courtesy of IBM

Since she joined the company, IBM has accelerated those efforts—launching the Science for Social Good program in 2016; publishing one of the first papers on detecting and mitigating A.I. bias in 2017; and releasing the open-source toolkits AI Fairness 360 and AI Explainability 360. For example, AI Fairness 360 was recently used by the Ad Council to detect unwanted bias in TV and online advertising.
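
For readers curious what such a bias check looks like in practice, here is a minimal sketch using AI Fairness 360’s open-source Python package, aif360; the toy data, column names, and groups are hypothetical, not drawn from the Ad Council’s project.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: "label" is the favorable outcome (1 = ad shown),
# and "sex" is the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [34, 29, 45, 52, 31, 40, 27, 38],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates between the
# unprivileged and privileged groups; 1.0 means parity, and values
# below roughly 0.8 are a common red flag.
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference is the gap in those rates; 0.0 is parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data the disparate impact works out to about 0.33, well below parity, which is exactly the kind of signal that would prompt a closer look at how an ad-targeting or screening model treats different groups.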

There are many reasons why companies are driven to devote time and resources to ethical A.I., says Rossi. They may want to adhere to values important to them; they may want to contribute in a positive way to issues like social justice, equity, and sustainability; and they may be concerned about maintaining their reputation, “because no company wants to be involved in deploying a technology” that discriminates or has unintended negative consequences. 

For example, a chatbot in France that was intended to facilitate telehealth services was halted after it started recommending suicide to patients. And Uber’s early self-driving car tests, conducted without the approval of the state of California, occasionally went awry, with cars running red lights in San Francisco. In addition, recent years have brought far more media scrutiny of algorithms, from facial recognition technology that discriminates to the development of more lethal military weapons.

The companies that are able to deliver products developed with ethical A.I. will have a competitive advantage as ethical A.I. becomes a prevailing concern, notes Rossi. While 66% of companies surveyed by Deloitte view A.I. as critical to their success, they know that they still have a long way to go: Only 38% believe that their current use of A.I. differentiates them from competitors, and only a third say they have adopted “leading operational practices” for A.I. so far.

Technology is not ethical or unethical, it’s the whole ecosystem around it.

Francesca Rossi, IBM A.I. ethics global leader

In recent years, the use of artificial intelligence has accelerated across all industries, from transportation and energy to health care and finance. And more companies are anchoring their efforts in ethical A.I.: Microsoft has created a mandatory introduction to responsible A.I. course for all of its employees, while Salesforce’s ethical A.I. practice team makes sure that its data and machine learning models are fair and used equitably.

IBM has created a number of toolkits tied to each of its five key ethical A.I. pillars that help guide companies through the process, says Rossi. As part of an interdisciplinary consortium with many other companies, she helped develop a tool that enables firms to assess the technology they’re buying from vendors. Some of the toolkits are open source—IBM has donated code, data sets, and tutorials to the Linux Foundation—and are free to developers around the globe. Others are proprietary, as part of Watson Studio.
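
The toolkits themselves are beyond the scope of this article, but the explainability pillar is easy to illustrate with a generic stand-in: permutation importance, as implemented in scikit-learn, estimates how heavily a model relies on each input feature. This is not IBM’s toolkit API, and the model and data below are hypothetical.

```python
# pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical data standing in for, say, a loan-approval model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

A report like this, attached to a vendor’s model, is one simple way a buyer could begin to assess what an otherwise opaque system is actually paying attention to.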

When it comes to financial inclusivity, IBM relies on its Science for Social Good program, which tries to align the use of A.I. with one or more of the 17 UN Sustainable Development Goals, from no poverty to climate action.

“We aim to deliver a UI [user interface] that can help in one of those goals,” she says. Part of the process involves training developers to understand the societal impact of the technologies and to define what they mean by inclusivity—for example, making sure that the data used to train the A.I. adequately represents all the categories of customers they are trying to reach. In September, IBM announced that financial institutions across Africa—including Ecobank and the Co-operative Bank of Kenya—are using hybrid cloud and A.I. capabilities from IBM to help broaden access to financial services on the continent, bringing banking to the unbanked.
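
As a rough illustration of that representativeness check (the data, groups, and the 30% shortfall threshold below are hypothetical, not IBM’s methodology), one can compare each group’s share of the training data with its share of the customer population the system is meant to serve.

```python
# pip install pandas
import pandas as pd

# Hypothetical training data for a credit-scoring model.
train = pd.DataFrame({
    "region":   ["urban", "urban", "urban", "urban", "urban", "rural"],
    "approved": [1, 0, 1, 1, 0, 0],
})

# Hypothetical share of each group in the customer population the
# bank actually wants to reach.
population_share = {"urban": 0.55, "rural": 0.45}

train_share = train["region"].value_counts(normalize=True)
for group, target in population_share.items():
    actual = train_share.get(group, 0.0)
    # Flag any group whose training share falls more than 30% short
    # of its population share.
    status = "UNDERREPRESENTED" if actual < 0.7 * target else "ok"
    print(f"{group}: train={actual:.0%}, target={target:.0%} -> {status}")
```

Here the rural group makes up about 17% of the training rows against a 45% target, so it would be flagged, prompting more data collection before the model is trusted to serve those customers.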

International A.I. standards

Until recently, the development of ethical A.I. was largely an academic and corporate effort, with some researchers concerned that government regulation would stifle innovation. But international standards and regulations, which can vary widely across the globe, are increasingly being formulated. In April, the European Commission released the first-ever legal framework on A.I., which focuses on the risks posed by different A.I. systems and directly challenges Silicon Valley’s hands-off approach to emerging technology. The proposal outright bans certain uses of A.I. and regulates others based on the level of risk they pose.

And in late November, the 193 member states of the UN Educational, Scientific and Cultural Organization (UNESCO) adopted a series of A.I. recommendations that focus on transparency, data protection, the environment, accountability, and more. They also prioritize human rights, noting that A.I. “should not be used for social scoring or mass surveillance purposes.”

Companies that deliver A.I. in Europe will be subject to those regulations, says Rossi. “And so companies need to be prepared and educated and more mature about what it means to build or use their technology.” 

In the U.S., by contrast, most regulatory activity is happening at the state level, where it varies widely in focus, while the federal government has so far offered only a big-picture strategy.

Given the differences in regulatory environments and the fact that rules are still being developed in many cases, Rossi suggests that companies take an approach grounded in trustworthy A.I., in a way that goes beyond compliance requirements: “Even if regulation allows for something, but that thing would damage the trust that your clients have in you, it is not good for the company to do it just because the regulation allows it.”

Rossi, noting that our conversation could go on for days given all the aspects of the issue, insists that she’s confident about the future of ethical A.I., despite widespread concerns over the misuse of algorithms. “I am optimistic,” she says. “Because I’ve seen in these last five or six years how much we have done so far and how much progress we’ve made.”

This story is part of The Path to Zero, a series of special reports on how business can lead the fight against climate change. This quarter’s report highlights how governments and private industry are approaching the biggest challenges and opportunities in the sustainability space.