So let’s get this straight: OpenAI is now taking a 10% stake in AMD, while Nvidia is investing $100 billion in OpenAI; and OpenAI also counts Microsoft as one of its major shareholders, but Microsoft is also a major customer of AI cloud computing company CoreWeave, which is another company in which Nvidia holds a significant equity stake; and by the way, Microsoft accounted for almost 20% of Nvidia’s revenue on an annualized basis, as of Nvidia’s 2025 fiscal fourth quarter. In less than three years, OpenAI has gone from a parlor game to a pillar of the global economy.
You cannot help but ask, “Is this like the Wild West, where anything goes to get the deal done?” One company grants its chip supplier equity in exchange for financing data-center buildouts, then takes an ownership stake in a rival manufacturer tied to purchases of a similar product. It is hard to imagine that Jensen Huang outsmarted his equally talented cousin, AMD CEO Lisa Su. The lines between revenue and equity are blurring among a small group of highly influential technology companies, to the tune of hundreds of billions of dollars.
Su has defended AMD’s deal with OpenAI, asserting that market bears are “thinking too small.” The chief executive describes AI’s potential as sparking a new decade-long “Supercycle” that will “transform industries, from finance to healthcare and research.”
We have seen this story before, back during the “cable cowboy” days. Programmers paid the distributors. Or wait—distributors paid the programmers.
And of course, there are the frequent comparisons to the run-up to the dot-com bubble. When a dramatic technological change occurs, people are often unsure exactly what to do, but they frequently act as if they confidently know the best path forward.
Major players in the industry have begun to call out the AI euphoria. Just last Friday, three of them spoke out, hedging hard on what’s to come. Goldman Sachs CEO David Solomon said he expects there to be “a lot of capital that was deployed that [doesn’t] deliver returns.” Amazon founder and executive chairman Jeff Bezos called the current environment “kind of an industrial bubble.” Sam Altman, CEO of OpenAI, warned that “people will overinvest and lose money” during this phase of the AI boom.
Pockets of concern
At our Yale Chief Executive Leadership Institute CEO Summit in June, we heard similar admonitions from over 150 top CEOs, including seasoned venture capitalists, renowned technology founders, and global partners from leading consulting firms.
While the commercial outlook for AI among business leaders was enthusiastic, there were significant pockets of concern, extending beyond safety fears to question the frenzied pace of investment. Sure, 60% of CEOs polled did not believe that AI hype had led to overinvestment; the other 40%, however, raised significant concerns about the direction of AI exuberance, believing a correction to be imminent.
Reports estimate that AI-related capital expenditures surpassed U.S. consumer spending as the primary driver of economic growth in the first half of 2025, contributing 1.1 percentage points to GDP growth. JP Morgan Asset Management’s Michael Cembalest notes that “AI-related stocks have accounted for 75% of S&P 500 returns, 80% of earnings growth and 90% of capital spending growth since ChatGPT launched in November 2022.”
More concerning, RBC’s Kelly Bogdanova points out that after the massive earnings growth of 2023 and 2024, growth rates between the “Magnificent Seven” and the rest of the S&P 500 are expected to converge next year. Notably, she recognizes that “the gap between the Tech sector’s share of market cap and net income has widened significantly” since late 2022.
At our June CEO Summit, David Siegel, a computer scientist and an early student of AI at MIT, and later a Co-Founder of quantitative hedge fund Two Sigma, candidly advised, “[AI technologies are] transforming business … but I also believe that the current wave of AI hype continues to mix fact with speculation freely.” Siegel continued, “Rarely does anyone speak about the limitations of current AI technologies.”
The renowned investor and technologist has long held these beliefs but was emboldened by the groundbreaking report from Apple suggesting that the reasoning capabilities of AI models may not be as sophisticated as many assume. Siegel explained in simple language what the findings may mean:
“AI researchers have long worried that the impressive benchmarking results [of AI models] may be due to data contamination, where the AI training data contains the answers to the problems used in benchmarking. It’s like giving a student the answers to a test before they take the exam. That would lead to exaggerations in the models’ abilities to learn and generalize.”
A recent study from MIT, released after the June CEO Summit, backed Siegel’s claims. The researchers found that despite $30 billion to $40 billion in spending on GenAI across more than 300 initiatives, 95% of the 52 organizations studied had achieved zero return on investment.
With uncommon public candor from the consulting world, AlixPartners Co-CEO Rob Hornby, recognized among business leaders for his expertise in the technology industry, shared a similar view, telling the CEO Summit group, “I don’t think [AI models are] ready for sustaining long chains of activity in ways that displace people … AGI is just not close … I don’t think artificial intelligence and humans have that much in common, right now.”
Hornby’s comments contrast starkly with the mass-layoff reckoning that the founders of leading AI companies predict is coming for the labor force. Anthropic CEO Dario Amodei made headlines recently after telling Axios, “AI could wipe out half of all entry-level white-collar jobs—and spike unemployment to 10%-20% in the next one to five years.”
Another candid consulting leader, Asutosh Padhi, Senior Partner and Global Leader of Firm Strategy at McKinsey & Company, took a more balanced view of AI in the workforce. He framed the technology as a source of enhanced productivity, not necessarily a mass replacement for people. McKinsey will continue to “hire extraordinary people, [with AI] helping them be even better at what they do,” said Padhi.
Greycroft founder Alan Patricof, the venture capital pioneer with over 60 years of experience, extended a more nuanced view of AI to the investment community: “The AI revolution is a true revolution … [but] I am cautious about valuations and what people think can be accomplished in the short term … A lot of people have run into this field, and just because ‘AI’ is attached to the name, or they incorporate it into their business plan … [it] gets a lot of people excited.”
PitchBook reported that nearly two-thirds of U.S. venture deal value went to AI and machine-learning startups in the first half of 2025, up from 23% in 2023. The meteoric rise is largely explained by the growing focus of venture firms such as Andreessen Horowitz and Y Combinator on AI startups, and by the mind-boggling valuations of those emerging companies.
Under such exuberant conditions, Patricof reflected, “There will be winners and losers, and the losses will be pretty significant.”
The warnings of exuberance may be mounting, but how the bubble pops remains an open question. The possibilities are endless, but three scenarios stand out as the most likely.
1 – Concentration leads to contagion
A small group of companies is securing most of the major deals. News about multibillion-dollar investments from familiar companies such as OpenAI, Nvidia, CoreWeave, Microsoft, Google, and a few others is reported almost daily. Should the bold promises of AI fall short, the interdependence among these major AI players could trigger a devastating chain reaction, causing a widespread collapse akin to the 2008 global financial crisis.
Worse yet, the ambitions are numerous and compounding, with large energy and grid infrastructure buildouts, highly advanced agentic capabilities, and widespread commercial and consumer adoption all anticipated over the next five years.
Take one example. OpenAI has committed to buying $300 billion in computing power from Oracle over the next five years, an average of $60 billion per year. OpenAI is already losing billions of dollars annually, and its projected 2025 revenues of $13 billion fall far short of that commitment, meaning it will need to raise ever larger sums to cover future shortfalls. The announcement of the deal caused Oracle shares to soar by over 40%, adding nearly one-third of a trillion dollars to the company’s market value in a single day. OpenAI’s valuation has almost doubled from $300 billion to $500 billion in less than a year.
Notably, recent reporting by CNBC suggests that the deal may prove costly for Oracle, with the company expecting to “lose considerable sums of money” on the data centers it rents out, primarily to OpenAI, and already incurring a $100 million loss in the most recent quarter. Another report suggests the loss may be a simple timing issue.
2 – Governance conflict exposes AI shortcomings
Not long ago, Sam Bankman-Fried promised to revolutionize financial market operations with cryptocurrency exchange FTX and trading firm Alameda Research. However, poor governance and limited regulatory oversight proved disastrous for Bankman-Fried and his backers when his fraudulent activities were exposed. Revelations that nefarious actors had used Binance for money laundering, coming shortly after the collapse of Alameda Research, set the industry further back. Blockchain technologies do offer promising advances for antiquated sectors and practices, but they must be in the right hands and have the proper guardrails.
AI is in a similar position to the cryptocurrency exchanges of the early 2020s, with much to offer but disparate governance practices and minimal regulatory oversight. But back then, the cryptocurrency market was still relatively small and viewed as risky by the average investor, which limited the fallout. The perceived value of AI is exponentially larger, and the potential damage from bad or even questionable actors is, therefore, much greater.
Anthropic CEO Dario Amodei, Google CEO Sundar Pichai, and xAI CEO Elon Musk have each raised concerns about the “probability of doom” from AI misuse. Amodei estimates there is a 25% chance that AI will go “really, really badly.” Ironically, Musk’s Grok, xAI’s large language model, recently provided an example of what happens when tampering with the inner workings of an AI model goes awry. It is not difficult to imagine a major, publicly available AI model going rogue and inflicting significant damage on financial markets or national security systems. Such an event could force a national moratorium on comparable AI models until the damage is contained and the risk mitigated.
3 – Emerging innovators’ disruptive substitutions
In a powerful Washington Post op-ed, Bethany McLean poignantly recalls the overbuilding of fiber-optic cable infrastructure during the 1990s dot-com bubble. Part of the problem was circular financial engineering; the other was “a [technological] breakthrough that made each line exponentially more powerful, multiplying existing capacity,” rendering much of the infrastructure unnecessary for decades.
For AI, a comparable breakthrough in semiconductor chip design or a major advance in quantum computing, arriving just as hundreds of billions of dollars in data-center infrastructure are being deployed, could render much of that investment obsolete. That is not to say the spare “compute” will never be needed, but as McLean notes, just as with the fiber-optic buildout, it could be years before those data-center investments start generating a return for their backers.
In the business classic Extraordinary Popular Delusions and the Madness of Crowds, Charles Mackay examined the psychology of crowd behavior and mass hysteria throughout history, from the Dutch Tulip Mania of the 1630s to humanity’s historical obsession with transmuting base metals into gold. While Mackay wrote his book in 1841, the AI mania continues to validate his conclusion: “Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, one by one.”
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.