The A.I. meteor is coming, and we are not prepared.
That’s my strong sense as I write this letter, on a plane home from the World Economic Forum in Davos, Switzerland. Artificial intelligence was a hot topic among the gathered leaders, and views ranged from optimistic to fearful.
First, the hopeful takes. For Adena Friedman, CEO of Nasdaq, A.I. is a potential solution to some of the toughest business problems, from the labor shortage to supply-chain inefficiencies. For Illumina CEO Francis deSouza, it’s making possible the early-stage detection of 50 different cancers, using a new blood test from Illumina’s Grail subsidiary. For others, it’s the key to hitting net-zero goals.
Still, anxious scenarios loomed in every conversation, sparked by the buzz around OpenAI and its so-called generative A.I. platform, ChatGPT. The chatbot can write just about anything in response to plain-language human commands—in a way that’s disturbingly convincing. And it’s creating a frenzy among venture capitalists, high school students, and anybody who does creative work.
When it comes to concerns about how such A.I. should be regulated, how to prevent it from cranking out false information (a phenomenon called “hallucination”), or how to keep criminals and terrorists from misusing it, no one has solutions. But there isn’t much time to answer critical questions. The tech has already been largely built; it just hasn’t been widely deployed.
“ChatGPT is child’s play compared to what big tech companies are sitting on,” one tech executive told me. He’s probably right. The “T” in ChatGPT, after all, stands for Transformer, a neural-network architecture that Google researchers invented. Do we really think Alphabet doesn’t have the same chatbot capabilities—and much more?
If Big Tech is sitting on more robust A.I., why isn’t it being rolled out? One possible reason is the fear of business-model disruption. Generative A.I. could eliminate the need for Google searches, for example; with 60% of Google revenue currently coming from search, the company may not want to threaten its own golden goose.
Fear of government and regulatory interference is another factor. There’s also the fear of what unleashing A.I. would mean for society. Not enough questions were asked about the impact that social media would have on the world. Big Tech doesn’t want to make the same or even bigger mistakes with A.I.
Even Sam Altman, who cofounded OpenAI, sees the future he is shaping as both exciting and alarming. “I think the best case is so good that it’s hard to imagine,” Altman recently said. “I think the worst case is lights-out for all of us.”
For more on ChatGPT and why its success could be A.I.’s “Netscape Navigator moment,” read our cover story by Jeremy Kahn.
On a less dystopian note, the 25th edition of the Fortune World’s Most Admired Companies list publishes online on Feb. 1. Find it then at fortune.com/ranking.
This article appears in the February/March 2023 issue of Fortune with the headline, “The A.I. meteor is coming, and we are not prepared.”