“It’ll happen slowly, and then all at once.”
That’s how Jim Morrow, founder and chief investment officer of Callodine Capital, describes the eventual – inevitable – unwinding of what he calls “the most crowded trade in history.”
Of course, he isn’t just paraphrasing Ernest Hemingway—he’s talking about the AI race, and the trillion-dollar deals so overstretched they’re better described as knots than trades. And he’s not alone in sounding alarms.
Michael Burry, the investor of The Big Short fame who predicted the 2008 housing collapse, broke a two-year silence this week to say nearly the same thing: that Big Tech's AI-era profits rest on "one of the most common frauds in the modern era," stretching the depreciation schedule (or, as Burry and others would put it, cheating it).
And it landed with extra weight: earlier this week, Burry quietly deregistered his investing firm, Scion Asset Management, effectively stepping away from managing outside money and from filing public disclosures. Some analysts read the move less as an ominous sign and more, as Bruno Schneller, managing director at Erlen Capital Management, told CNBC, as a step away "from a game he believes is fundamentally rigged."
“On to much better things,” Burry hinted on X, with a new launch expected on November 25.
Freed from the obligations of reporting and client management, Burry returned to X with a message that cut straight through the current AI euphoria. To him, the boom in GPUs, data centers, and trillion-dollar AI bets isn’t evidence of unstoppable growth; it’s evidence of a financial cycle that looks increasingly distorted, increasingly crowded, and increasingly fragile.
Burry put numbers to it. In a post on X, the investor estimated that Big Tech will understate depreciation by $176 billion between 2026 and 2028, inflating reported profits by 26.9% at Oracle and 20.8% at Meta, to name two of his specific targets.
Meta did not respond to a request for comment. Oracle declined to comment.
“He’s spot on,” Morrow tells Fortune. Morrow has been making the argument for months, warning that a “tsunami of depreciation” could quietly flatten Big Tech’s AI profits. Behind the trillion-dollar boom in chips, data centers, and model training, he argues, lies a simple but powerful illusion: companies have quietly stretched the length of time over which they account for their machines, and the semiconductor chips inside them, wearing out and losing value.
“Companies are very aware of it,” Morrow claims. “They’ve gone to great lengths to change their accounting and depreciation schedules to get ahead of it—to effectively avoid all this capex hitting their income statements.”
Burry’s post drew viral attention. Morrow’s been making the case longer, but he thinks the sudden resonance means investors are finally waking up to something fundamental.
“These aren’t small numbers—they’re huge. And the fact that someone like Burry is calling it out tells you people are starting to notice what’s happening between the lines of the balance sheet.”
The great depreciation stretch
Here’s how depreciation works—or doesn’t. When tech giants like Microsoft, Meta, and Oracle build AI data centers, they buy tens of billions of dollars in GPUs, servers, and cooling systems. Normally, those assets lose value fast, cutting into profits. But recently, Burry claims, many companies quietly extended how long they claim those machines last—from roughly three years to as many as six.
That simple change lets them spread out their costs and report fatter earnings now.
“Had they not made those changes,” Morrow says, “their earnings would be dramatically lower.”
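To make the arithmetic concrete, here is a minimal sketch of straight-line depreciation under the two schedules. The $30 billion purchase and the useful lives below are illustrative assumptions, not figures from any company's filings.

```python
# A minimal sketch of straight-line depreciation under two schedules.
# The $30 billion purchase and the useful lives are illustrative only,
# not figures from any company's filings.

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: spread the cost evenly over the asset's life."""
    return cost / useful_life_years

gpu_spend = 30e9  # hypothetical GPU and server purchase

three_year = annual_depreciation(gpu_spend, 3)  # $10B expensed per year
six_year = annual_depreciation(gpu_spend, 6)    # $5B expensed per year

# Stretching the schedule halves the near-term expense, so reported
# operating profit rises by the difference in the early years.
print(f"3-year schedule: ${three_year / 1e9:.1f}B per year")
print(f"6-year schedule: ${six_year / 1e9:.1f}B per year")
print(f"Near-term earnings boost: ${(three_year - six_year) / 1e9:.1f}B per year")
```

The mechanics scale linearly, so the same schedule change applied to hundreds of billions of dollars of capex produces the kind of multibillion-dollar earnings swings Burry and Morrow describe.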
Meta’s filings, for one, at least corroborate the direction of Burry’s and Morrow’s claim. Until 2024, servers and network gear were depreciated over four to five years; effective January 2025, Meta said it would “extend the estimated useful lives” of “certain servers and network assets” to 5.5 years.
“Depreciation expense on property and equipment,” Meta wrote in its annual filing for 2022, was “$8.50 billion, $7.56 billion, and $6.39 billion for the years ended December 31, 2022, 2021, and 2020, respectively.”
In other words: depreciation was a major cost already, and management clearly chose to stretch the timeline over which those costs are recognized. The policy change doesn’t prove Burry’s dollar totals across firms, but it markedly moves reported earnings in the direction he describes by lowering near-term depreciation expense and pushing more of it into later years.
Morrow argues that the timing makes little sense. As the pace of technological change accelerates—Nvidia now releases new chips every 12 to 18 months instead of every two years—the hardware becomes obsolete faster, not slower.
It’s like any old technology. Take laptops: imagine trying to run the newest version of Adobe Premiere Pro on a 2018 MacBook. Sure, it might boot up, but it’s going to overheat, lag, or crash, because it simply wasn’t built for today’s compute demands. Old chips are the same way: they don’t stop working, but they quickly lose their economic value as newer, faster models render them functionally obsolete, Morrow argues.
To be sure, Morrow’s expertise is in value investing and high-dividend companies rather than trendier, high-growth tech stocks; in fact, he says he has no long positions in technology broadly. That means he benefits if Big Tech multiples compress or if markets begin to reprice the costs buried in AI spending. Still, his critique aligns with growing unease among other analysts.
Richard Jarc, an analyst at Uncovered Alpha, has raised similar alarms about the mismatch between AI chip lifecycles and corporate accounting.
He has said that the latest generation of GPUs wears out far faster than companies’ amortization schedules suggest. While some point to the continued use of Nvidia’s H100 chips—released three years ago—as evidence of longer utility, Jarc says that’s misleading.
Demand remains high largely because a handful of companies are subsidizing compute costs for end users, a dynamic that depends on investor cash flow rather than fundamentals, Jarc argues. More importantly, Nvidia has now shifted from releasing new chips every 18–24 months to an annual cadence. In that context, Jarc said, treating GPUs as if they’ll remain valuable for five or six years is unrealistic: their true economic life, he estimates, is closer to one or two.
In September, The Economist called the delayed depreciation “the $4 trillion accounting puzzle at the heart of the AI cloud,” noting that Microsoft, Alphabet, Amazon, Meta, and Oracle have each extended the useful life of their servers even as Nvidia shortens its chip cycle to one year.
By The Economist’s estimates, if those assets were depreciated over three years instead of the longer timelines companies now assume, annual pre-tax profits would fall by $26 billion, roughly an 8% hit. A two-year schedule would double that loss, and if depreciation truly matched Nvidia’s pace, the implied market value hit could reach $4 trillion.
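The same kind of sensitivity math can be sketched in a few lines. Every input below is a placeholder chosen for illustration, not The Economist's model or any company's reported figures.

```python
# A rough sketch of the sensitivity analysis described above: how much
# annual pre-tax profit would fall if the same asset base were depreciated
# over a shorter useful life. Every input here is a placeholder, not
# The Economist's model or any company's reported figures.

def extra_depreciation(asset_base: float, assumed_life: float, shorter_life: float) -> float:
    """Additional annual expense if the shorter useful life were used instead."""
    return asset_base / shorter_life - asset_base / assumed_life

asset_base = 600e9     # hypothetical combined server and GPU asset base
assumed_life = 6.0     # hypothetical blended useful life currently assumed

for shorter_life in (3.0, 2.0, 1.0):
    hit = extra_depreciation(asset_base, assumed_life, shorter_life)
    print(f"{shorter_life:.0f}-year life: pre-tax profit falls by ${hit / 1e9:.0f}B per year")
```

The outputs depend entirely on the placeholder inputs; the point is only that the implied profit hit grows sharply as the assumed useful life shrinks.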
Not everyone buys the doom loop. In a note to clients this week, Bank of America’s semiconductor team argued that the market’s sudden skepticism about AI capex is evidence the trade is far less crowded than critics claim.
The recent selloff in megacap AI names, the team led by Vivek Arya wrote, was driven by “correctable macro factors”—shutdown jitters, weak jobs data, tariff confusion, even misinterpreted OpenAI comments—rather than any real deterioration in AI demand. In fact, the firm pointed to surging ancillary segments like memory and optical (up 14% last week), as well as Nvidia’s disclosure of $500 billion-plus in 2025–26 data-center orders, as signs the underlying spending cycle remains “robust.”
Growth or just capital intensity?
But Morrow is most worried that investors are mistaking raw spending for real growth. The market, he argues, has stopped distinguishing between capital intensity and genuine productivity. The AI boom has pushed valuations from the realm of software multiples into something closer to industrial-scale infrastructure math. Building just one hyperscale, one-gigawatt data center, enough to support a cutting-edge AI model, can run around $50 billion, with the majority devoted to GPUs, followed by the buildings, cooling systems, and power infrastructure.
The reality check: “none of these companies has ever managed a $50 billion project before,” he says. “Now they’re trying to do fifty of them at once.”
The irony, he adds, is that many of those facilities can’t even run yet. Data centers across Santa Clara and Northern Virginia are sitting idle, waiting for grid hookups that could take years.
“Every month a $35 billion stack of GPUs sits without power, that’s a billion dollars of depreciation just burning a hole in the balance sheet,” he says. “So of course they’re panicking — and ordering their own turbines.”
He thinks the result will be a massive power glut by the late 2020s: an overbuilt grid, over-leveraged utilities, and ratepayers stuck with the bill.
“We’ve seen this movie before—in shale, in fiber, in railroads,” he says. “Every capital-spending boom ends the same way: overcapacity, low returns, and a bailout.”
The biggest risk, he says, is that investors have stopped looking at balance sheets entirely. He points to the sheer concentration of today’s market: nearly half of all 401(k) money is now effectively tied to six megacaps.
“This is the most crowded trade in history,” he says. “When it turns, it’s going to turn fast.”
