AI isn’t a bubble—but it’s showing warning signs

By Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

Photo illustration: balloons shaped like the letters “AI,” about to be popped by someone with a pin. Is the AI bubble about to burst? (Getty Images)

Hello and welcome to Eye on AI. In this edition: Why AI isn’t a bubble quite yet…ChatGPT gets chattier…Microsoft connects U.S. datacenters into the first “AI superfactory”…and “shadow” AI systems are causing problems for organizations.

Hello, Beatrice Nolan here, filling in for Sharon Goldman while she’s on vacation this week. Lately, there’s one question investors can’t seem to stop asking: Has the AI boom crossed into bubble territory?

One analyst thinks he has an answer, and a way to keep track of whether the AI industry is in a boom or a bust phase: a framework that measures key industry stressors on a scale of safe, cautious, or dangerous.

The framework was created by Azeem Azhar, a renowned analyst and author, who says the data shows that the AI industry is not in a bubble—at least not yet.

What’s the difference between a healthy boom and a dangerous bubble? According to Azhar, the two are very similar, but a bubble is “a phase marked by a rapid escalation in prices and investment, where valuations drift materially away from the underlying prospects and realistic earnings power of the assets involved.” In a boom, by contrast, the fundamentals eventually catch up.

“Booms can still overshoot, but they consolidate into durable industries and lasting economic value,” Azhar writes.

Azhar’s framework for determining which situation we’re in relies on five indicators—economic strain, industry strain, revenue momentum, valuation heat, and funding quality—which have been tested against past boom-and-bust cycles and converted into a live dashboard.

According to this dashboard, if no more than one gauge sits in the dangerous or “red” zone, the AI industry is still in a boom; two reds mean caution; and three or more mean imminent trouble and definite bubble territory. Since Azhar launched the dashboard in September, just one of the gauges has slipped into the red zone.
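For readers who want the verdict rule in one place, here is a minimal sketch of that counting logic in Python. The gauge names and statuses below are illustrative, and this is a reader’s reconstruction of the rule as described, not Azhar’s actual dashboard code.

# Illustrative reconstruction of the dashboard's counting rule described above.
# Gauge names and statuses are examples, not Azhar's data or code.

def classify(gauges: dict) -> str:
    """Map the number of gauges in the 'dangerous' (red) zone to a verdict."""
    reds = sum(1 for status in gauges.values() if status == "dangerous")
    if reds <= 1:
        return "boom"
    if reds == 2:
        return "caution"
    return "bubble territory"

# Roughly the picture described above: one red gauge still reads as a boom.
print(classify({
    "economic strain": "safe",
    "industry strain": "dangerous",
    "revenue momentum": "safe",
    "valuation heat": "cautious",
    "funding quality": "cautious",
}))  # -> boom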

Perhaps unsurprisingly, that gauge is “industry strain,” which tracks whether AI industry revenues are keeping pace with the massive capital investment flowing into infrastructure and model development. Capital expenditure from Big Tech and hyperscalers is being funneled into data centers, GPUs, and chips at a much faster rate than the revenues generated from AI products and services. While AI revenue is rising, it still only covers about one-sixth of total industry investment.

(It’s worth noting that the gauge’s flip to red was also partly attributed to a methodological update. Earlier estimates included forward projections for 2025 revenue. The new model now measures both revenue and investment based on trailing 12-month actual data, rather than forecasts.)
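As a back-of-the-envelope illustration of what the industry-strain gauge is tracking, here is how a trailing-12-month coverage ratio could be computed. The monthly figures are hypothetical, chosen only to land near the roughly one-sixth coverage mentioned above; they are not Azhar’s inputs or methodology.

# Hypothetical sketch of a trailing-12-month revenue-to-capex coverage ratio.
# The figures are invented for illustration, not actual industry data.

def coverage_ratio(monthly_revenue, monthly_capex):
    """Trailing-12-month AI revenue divided by trailing-12-month capital spend."""
    return sum(monthly_revenue[-12:]) / sum(monthly_capex[-12:])

# About $5B of revenue per month against $30B of capex per month
# works out to roughly one-sixth coverage.
print(coverage_ratio([5.0] * 12, [30.0] * 12))  # ~0.17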

Funding conditions and valuation heat have also veered into cautious and worsening territory, largely because of questions about the stability of financing, including riskier deals such as Oracle’s $38 billion debt raise for new data centers and Nvidia’s backing of xAI’s $20 billion round. Financing big data center buildouts is becoming more complicated and slightly riskier, even as the companies involved continue to report solid financials and steady cash flow.

The gap between investor optimism and “earnings reality” is also widening, with industry price-to-earnings multiples rising, though they remain well below dot-com-era peaks. Revenue momentum and economic strain are still in the “safe” green zone, but both are worsening.

Taken together, all of this suggests we are in an AI boom, at least for now. Other analysts agree, including those at Goldman Sachs, which said in a note earlier this week that although AI-related equities are highly valued, the U.S. market isn’t yet displaying the broad macroeconomic distortions typical of past asset bubbles like the late-1990s tech boom.

While there’s reason to stay cautious—and no shortage of froth—it still might be too early to call this a bubble.

And with that, here’s the rest of the AI news.

Beatrice Nolan
bea.nolan@fortune.com
@beafreyanolan

FORTUNE ON AI

The rise of Yann LeCun, the 65-year-old NYU professor who is planning to leave Mark Zuckerberg’s highly paid team at Meta to launch his own AI startup — by Dave Smith

Exclusive: Beside, an AI voice startup, raises $32 million to build an AI receptionist for small businesses — Beatrice Nolan

Why Land O’Lakes is piloting a new AI tool called ‘Oz’ in bid to help boost profits on cost-pressured American farms — John Kell

OpenAI says it plans to report stunning annual losses through 2028—and then turn wildly profitable just two years later — Dave Smith

CoreWeave’s earnings report highlights $56 billion in contracted revenue, but its guidance and share price tick down amid AI infrastructure bubble fears — Amanda Gerut

AI IN THE NEWS

ChatGPT gets chattier with GPT-5.1. OpenAI has rolled out GPT-5.1, which the company is hailing as a smarter and more conversational upgrade to its popular chatbot. The new version is aimed at making the chatbot feel warmer, as well as quicker and better at following directions. Users can now tweak tone and style with presets such as Professional, Quirky, and Candid—or even adjust how “warm” or emoji-filled responses are. GPT-5.1 comes in two modes, Instant and Thinking, which the company says balance speed with deeper reasoning. The update starts rolling out to paid users this week. Read more from OpenAI here.

Anthropic’s $50 billion U.S. AI infrastructure push. AI startup Anthropic plans to spend $50 billion building data centers across the U.S., starting in Texas and New York, in partnership with GPU cloud provider Fluidstack. The build-out aims to support Anthropic’s enterprise growth and research ambitions, creating 800 permanent jobs and 2,000 construction roles, with the first sites live in 2026. The move positions Anthropic as a key U.S. infrastructure player amid growing political focus on domestic AI capacity—and as a rival to OpenAI’s $1.4 trillion infrastructure plans. CEO Dario Amodei said the effort will help power “AI systems that can drive scientific breakthroughs.” Read more from CNBC here.

Microsoft connects U.S. datacenters into first ‘AI superfactory.’ Microsoft has activated a new AI datacenter in Atlanta, linking it to its recently announced Wisconsin facility to form what the company calls its first “AI superfactory.” The connected sites, part of Microsoft’s Fairwater project, use a dedicated fiber-optic network to act as a single distributed system for training advanced AI models at unprecedented speed. The Fairwater design features Nvidia’s new Blackwell GPUs, a two-story layout for higher density, and nearly water-free liquid cooling. Executives say the networked datacenters will power OpenAI, Microsoft’s AI Superintelligence Team, and Copilot tools — enabling breakthroughs in AI research and real-world applications. Read more from The Wall Street Journal here.

Michael Burry says AI giants are inflating profits. The “Big Short” investor Michael Burry—known for calling the 2008 crash—accused major AI and cloud providers of using aggressive accounting to boost reported earnings. In a post on X, Burry alleged that hyperscalers like Oracle and Meta are understating depreciation expenses by extending the estimated life span of costly Nvidia chips and servers, a move he says could inflate industry profits by $176 billion between 2026 and 2028. He claimed Oracle’s and Meta’s earnings could be overstated by as much as 27% and 21%, respectively. Read more from Bloomberg here.
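To see the accounting mechanics Burry is describing, consider a simplified, hypothetical example. The numbers below are invented for illustration and are not Oracle’s or Meta’s actual figures.

# How stretching an asset's assumed useful life lowers annual depreciation
# and lifts reported profit. All numbers here are hypothetical.

def annual_depreciation(cost, useful_life_years):
    """Straight-line depreciation: spread the cost evenly over the asset's life."""
    return cost / useful_life_years

gpu_fleet_cost = 30.0  # $30 billion of chips and servers (hypothetical)

expense_short_life = annual_depreciation(gpu_fleet_cost, 3)  # $10B per year
expense_long_life = annual_depreciation(gpu_fleet_cost, 6)   # $5B per year

# Same hardware and the same cash already spent, but $5B less expense
# recognized each year, which shows up as higher reported earnings.
print(expense_short_life - expense_long_life)  # 5.0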

AI CALENDAR

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

76%

That's the share of organizations that have already faced a security problem with their AI systems. According to a new report from Harness, an AI DevOps platform company, enterprises are struggling to keep track of where and how AI is being used, and it’s creating new security risks. The research found that 62% of security teams can’t identify where large language models (LLMs) are deployed within their companies, while 65% of organizations say they have “shadow AI” systems (AI tools employees use for work without their company’s approval) running outside official oversight. As a result, 76% of these organizations have already suffered prompt-injection incidents, and 65% have experienced jailbreaking attempts. The report warns that traditional security tooling can’t keep up with how quickly AI tools, and employees’ use of them, are evolving. It also notes that developers and security teams are often misaligned, with only a third notifying security before starting AI projects.

“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, Harness’ Field CTO. “Security has to live across the entire software lifecycle — before, during, and after code.”

Fortune Brainstorm AI returns to San Francisco Dec. 8–9 to convene the smartest people we know—technologists, entrepreneurs, Fortune Global 500 executives, investors, policymakers, and the brilliant minds in between—to explore and interrogate the most pressing questions about AI at another pivotal moment. Register here.