Trust it or not, A.I. isn’t going away. For regulators, the challenge is making sure the new tech is a force for good

April 28, 2023, 4:25 PM UTC
Lionel Bonaventure—AFP/Getty

Artificial intelligence might never be smart enough to truly challenge humanity’s supremacy, but even at its current level of competence, A.I. programs are already poised to disrupt society and established industries.

“A.I. is going to impact every product across every company,” Alphabet CEO Sundar Pichai told 60 Minutes last week, warning that “we need to adapt as a society for it.” 

But, to some ethicists, the greatest concern over the propagation of A.I. is that, as with every new tech, power will consolidate in the hands of a few players—Big Tech companies, like Alphabet.

“If you think about trust, that starts with transparency,” says Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and senior director of A.I. research at NVIDIA. “Unfortunately, as more and more of these models get behind closed walls, with companies leasing API access, there becomes very little discussion about how those models were trained or tested.”

Critics recently lambasted OpenAI, the creator of the popular chatbot ChatGPT that kickstarted the current round of hyperfixation on A.I., for deciding not to share details on how the company trained the latest version of its chatbot. OpenAI said, in defense of its newfound secrecy, that the competitive market drove the company to abandon its founding principles of open access, as the Microsoft-backed group looks to earn a profit.

A.I.’s retreat towards privacy reminds me of the blockchain industry. Blockchain, likewise, was a new tech with supposedly revolutionary potential to upend digital industries by decentralizing control. But even in blockchain (and its chiefly financial derivative products), power consolidated around a few key players, such as OpenSea, which facilitates the majority of NFT trades, or Bitmain, the preeminent producer of mining rigs.

Consolidation in A.I. poses a bigger challenge because of how quickly algorithmic biases are replicated as the technology scales. Without transparency around how tech companies train their A.I. products, the biases those systems reproduce won’t become apparent until they’ve already caused harm, by which point it’s too late.

“We are essentially now reimagining the whole ecosystem. We already see with social media, the impact of fast information propagation. So deciding what should be automated and how it should be done is a tricky question,” Anandkumar says.

Anandkumar doesn’t know who should be in charge of setting those guardrails but says she doesn’t think it should be the same social media CEOs currently at the forefront of deploying generative A.I. systems, like Google’s Bard chatbot, or Snapchat’s My AI. Neither, however, should governments be left to regulate the tech alone.

“It’s good for the government to think about what are the right guardrails and right regulation but you really have to work with experts across all these areas together to figure that out,” Anandkumar says.

However A.I. guardrails are decided, they should be set quickly. Some governments are still cleaning up the mess left by the “move fast and break things” era of Silicon Valley, and A.I. is only moving faster.

Eamon Barrett


A.I. knows 
Users of Snapchat’s in-app A.I. chatbot assistant, My AI, have discovered the bot isn’t exactly honest about what access it has to user data. When asked directly whether it has access to a user’s location, My AI will reply no. But if asked where the nearest McDonald’s is, for example, My AI is able to give an answer. A.I. ethicists note My AI isn’t “lying” as such when it misleads users this way, because chatbots have no concept of deceit. But it appears the bot has been programmed to mislead users about their privacy settings.

Stakeholder trust
Capitalism is approaching a precipice and, to survive, business leaders need to direct the economic system on a change of course, embracing stakeholder principles without sacrificing the needs of shareholders. Practicing legitimacy, transparency, accountability, and responsibility will “go a long way to restoring trust and confidence” in institutions, writes the VP for public affairs and sustainability at Pirelli Tire North America in a Fortune commentary.

Reality check
Twitter continues to erode trust in its own verified-accounts system—the blue check mark. After finally following through on Elon Musk’s promise to remove legacy blue checks and roll out a subscription service for verification, Twitter is now reportedly reinstating blue checks on some celebrity accounts without the account owners’ permission. Some of those celebs are pretty upset about it. 

Job hop
Is frequent job hopping a red flag? The internet has reignited that debate after a recruitment poster went viral for demanding that prospective employees have worked no more than three jobs in the last 10 years. Seemingly, employers think job hopping—which can easily earn an employee more money than staying put—is an omen of an untrustworthy employee. But job hopping has “unfairly become synonymous with disloyalty, indifference, and chasing money,” writes Fortune’s Orianna Rosa Royle.


What unites the 100 companies ranked on Fortune’s 100 Best Places to Work list, created in partnership with the Great Place to Work initiative? Well, here’s the answer from Michael Bush, CEO of Great Place to Work:

“What are they doing that you should too? Make a high-trust culture your No. 1 priority. Go all in. When your people win, you win. When everyone—no matter who they are or what they do—feels cared for, they’ll take care of you. It’s a virtuous cycle of care and support.”

For more on what that means, read his introduction to the list here.

Learn how to navigate and strengthen trust in your business with The Trust Factor, a weekly newsletter examining what leaders need to succeed. Sign up here.
