CEOs of America’s biggest companies detail how to achieve ‘responsible A.I.’

January 26, 2022, 7:00 AM UTC

An influential lobbying group composed of the chief executives of large U.S. corporations is wading into the thorny topic of artificial intelligence, making sweeping recommendations for how businesses and government should deal with the technology.

In two documents to be published Wednesday, the Business Roundtable spells out how businesses should responsibly deploy artificial intelligence to avoid harms ranging from racist chatbots to facial recognition algorithms that have led to the false imprisonment of Black suspects. The group also issued recommendations for how government should regulate A.I. at a time when Congress is considering a broad crackdown on the tech industry and the industry’s critics are raising alarm bells over businesses using A.I. to create a dystopian future of mass surveillance and limited human autonomy.

For companies, the Roundtable’s roadmap lays out 10 core principles, among them the need to have diverse teams building A.I. products, the need to mitigate unfair bias, and the need to design A.I. systems whose actions are as explainable and as transparent as possible. The group also spelled out 10 principles for government, such as that regulation should be industry-specific, that government should apply and adapt existing rules whenever possible rather than enacting new ones, and that it should provide support to businesses that want to train or retrain workers.

The documents, the first of their kind to have been developed through an entirely CEO-led process, could serve as important guideposts for companies of all sizes that are hoping to develop and deploy A.I. They also show how companies are trying to influence U.S. lawmakers and regulators to implement more business-friendly rules that will not impede companies hoping to use the increasingly pervasive technology to cut costs and increase revenue.

Julie Sweet, Accenture’s CEO and leader of the Business Roundtable technology committee that developed the two documents, told Fortune that the group’s work coincides with companies starting to widely use A.I. across their business. The organization, she said, felt it needed to “act now to ensure that as we deploy at scale, we’re able to do so for the benefit of all, responsibly.”

Sweet added that these principles for “responsible A.I.” must be built into the increasingly autonomous systems companies are creating from the beginning. “You cannot reverse engineer responsible A.I.,” she said.

Sweet said that industry-specific A.I. regulation, and adapting existing laws whenever possible, was important to avoid creating a confusing patchwork of overlapping rules that would slow innovation and add cost. “We really believe, based on our experience, that it’s going to be important that we update and embed [A.I.] in existing regulation, because otherwise you’re going to have ambiguity,” she said. “You’re going to have complications, you’ll have increased cost.”

This differs from the approach that the European Union has taken. Last year, the EU proposed a new Artificial Intelligence Act that would govern the use of the technology across different industries. That law is currently making its way through the bloc’s rule-making process. The U.K. has also considered laws that would govern the use of A.I. and algorithms across different industries. In the U.S., lawmakers proposed a similar federal Algorithmic Accountability Act in 2019, but the proposed law never made it out of various Congressional committees.

Sweet said that the Roundtable did not think such an overarching approach would work well in the U.S., which already has many competing, industry-specific rules at the federal and state levels. At the same time, the Roundtable is calling on U.S. regulators to engage internationally to try, as much as possible, to harmonize the U.S. approach to governing the technology with what is happening elsewhere.

While many other A.I. ethics frameworks stop at this sort of broad principle, the Roundtable document is notable for including specific guidance on how to implement each of its points. On the other hand, except in a few instances, it is not as prescriptive as some A.I. ethics advocates may have wished.

Sweet said the group had wanted to give companies concrete recommendations that C-suite executives could use, while acknowledging that A.I. remains a new and rapidly changing area and that different industries and use cases may require different methods. “We started with a very simple tenet, which is that, regardless of industry, there are certain key principles that are above industry and those are all about safety, effectiveness and trust,” she said.

For instance, on how to make A.I. systems transparent and explainable, the document says that companies should always tell “end users” when they are interacting with a piece of A.I. software, such as a chatbot, rather than a human. The group says companies should try, as much as possible, to explain how an A.I. arrives at its output (something that can be challenging in modern A.I. software) but does not specify an exact technique for doing so. It also says that different end users may require different levels and types of explanations.

Sweet said the Roundtable’s responsible A.I. frameworks came together quickly, with discussions among CEOs only starting in July 2021. Sixty-one CEOs from 18 different industries were directly involved in crafting the documents, Sweet said, while views were solicited through surveys and workshops from the more than 230 companies that are Roundtable members. The final documents represent the consensus view, she said.
