Nvidia is invincible. Unless it isn't.

Nvidia’s rise seemed unstoppable, but cracks may be appearing in the strategy that built its $4.5 trillion empire

By Shawn Tully, Senior Editor-at-Large

    Shawn Tully is a senior editor-at-large at Fortune, covering the biggest trends in business, aviation, politics, and leadership.

    In late October, Nvidia cofounder and CEO Jensen Huang took the stage at the company’s annual GTC conference to make a typically sweeping declaration. Nvidia, he pronounced, sits “at the epicenter of the largest industrial revolution in human history,” eclipsing the advent of the steam engine and electricity. He went on to unveil a stunning array of partnerships: Nvidia plans to build 100,000 driverless cars alongside Uber and to join Palantir in supplying software and chips that accelerate the transfer of products from warehouses to doorsteps; it has also hatched a blueprint showing “hyperscalers” how to build “AI factories,” giga-scale data centers that run at their most potent and efficient only when deploying, guess what, Nvidia systems. 

    Huang’s speech so pumped Nvidia fans that they sent its market cap soaring 5%, a one-day, $250 billion jump that’s 60% bigger than Boeing’s entire worth. The move made Nvidia the world’s first company to touch a $5 trillion valuation. According to MarketBeat, 46 out of 47 analysts have a “strong buy” or “buy” rating on Nvidia stock. 

    A small chorus of extremely insightful skeptics, however, isn’t so sure everything is quite as rosy as Huang and investors seem to believe. Their worries coalesce around how Nvidia is striving to maintain its supremacy: by assembling something we’ve never seen before on this scale, a complex superstructure of investments and financing for its own customers, designed to boost and perpetuate demand for its own products. The strategy—centered mainly on OpenAI and “neocloud” CoreWeave—is as much about financial engineering as accelerated-compute engineering. 

    As Jay Goldberg of Seaport Global Securities, who’s issued the only Nvidia “sell” on the Street, puts it, “Nvidia is buying demand here.” Lisa Shalett, chief investment officer at Morgan Stanley Wealth Management, says the issue is all about excessive debt, not Nvidia’s but its customers’. “Nvidia is in a position to prop up customers so that it’s able to grow,” avows Shalett. “It’s getting more and more complicated because the ones they’re funding are weaker, and Nvidia’s enabling them to take on borrowing.” 

    Nvidia declined to comment for this story. But to understand how this powerful symbol of the new American economy got here, you have to grasp three things: the power dynamics at play among the big hyperscalers like Amazon, Microsoft, Alphabet, and Meta that are erecting giant AI data centers; the neoclouds rising to challenge them; and Jensen Huang’s most gripping fear. 

    Nvidia’s path to AI dominance

    During most of its 32-year existence, Nvidia focused on making 3D graphics chips for the PC gaming market—and struggled in the early days, at one point running out of cash. In 1999, it unveiled a super-advanced GPU (graphics processing unit) that generated images by handling complex computations at a far higher speed than its predecessors. A series of breakthroughs in the 2000s enabled a kind of AI known as “deep learning”—and allowed the technology to do things it had struggled to do previously—first translation, then image recognition. By around 2012, Nvidia had gone all in on GPUs and other hardware optimized for AI, which it coupled with its custom CUDA software that empowers developers to write applications for its groundbreaking chips. 

    Once OpenAI released ChatGPT in late 2022, the AI boom took off, and Nvidia had brilliantly positioned itself to ride the wave. Its GPUs ranked as the industry’s most powerful and, crucially, most efficient per unit of power consumption in running the stupendous number of mathematical calculations deployed in AI. In fiscal year 2023, it made a modest $4.4 billion in profits. That figure has soared to $86.6 billion over the past four quarters, making Nvidia the fourth-largest tech earner, behind Alphabet ($124.3 billion), Apple ($112 billion), and Microsoft ($104.9 billion), but ahead of Amazon and Meta. 

    Unlike past high-fliers that got heavily touted and collapsed—notably drivers of the telecom bubble of the late 1990s and early 2000s—nobody is questioning that Nvidia makes tons of money. But it’s apparent that the chip colossus is reliant on a few huge customers. In the second quarter of this year, Nvidia collected 52% of its sales from three customers it didn’t disclose, but which analysts identify as Microsoft, Amazon, and Alphabet. Meta and Elon Musk (via xAI and Tesla) are also big buyers of its systems. Diversifying internationally is not easy, especially as the Chinese government cut off imports of Nvidia’s lower-powered chips as a riposte in the trade war, and the Trump administration’s export controls have blocked sales of its most coveted and sophisticated GPUs. Last year, the world’s second-largest economy contributed more than one in eight dollars of sales via such clients as Baidu and Alibaba. That number has already been shrinking and is bound to fall further if not disappear, potentially heightening Nvidia’s reliance on the biggest of big players.

    Huang believes that leaning so heavily on a few, superpowerful buyers is dangerous. According to a source who’s been active in developing AI infrastructure, Nvidia’s prime motive is avoiding the fate of almost all tech hardware suppliers: getting commoditized.

    “Hardware is a low-margin business, and Nvidia knows that you can only maintain a technological edge in hardware for so long,” says this source. “They’re afraid of getting commoditized like the makers of CPUs [the basic “brains” of today’s computers]. They’re afraid their bargaining power will decline over time and that their huge margins will collapse.” Right now, Nvidia’s a wondrous exception in tech hardware. In fiscal year 2025, it registered fantastic gross margins of nearly 80%, easily beating AMD, at just under 50%, not to mention Intel, at 30%. 

    Until now, Nvidia has held the whip hand. “Its GPU is so superior that it has a near monopoly,” says analyst Goldberg of Seaport. As a result, an Amazon or a Meta hasn’t been able to deploy the tremendous negotiating clout their immense size gives them versus their hardware suppliers. The big hyperscalers are striving hard to regain the advantage. 

    How? The major players don’t just want to be Nvidia customers; their long-term goal is to compete with Nvidia by fashioning their own in-house alternatives. Microsoft is working on a version dubbed Maia, and Amazon has deployed two—Trainium and Inferentia. Google has already largely shifted to its own silicon, which it calls a TPU. 

    All this competition is likely to depress prices. “Most AI to date has been built on one chip provider. It’s pricey,” Amazon CEO Andy Jassy wrote in its 2024 annual report. “It doesn’t have to be as expensive as it is today, and it won’t be in the future.” Microsoft’s top executives have also stressed that they want their own silicon for powering their data centers. Gil Luria, head of technology research at D.A. Davidson, puts it simply: “Nvidia wants to diversify away from the Big Four, and the Big Four want to diversify away from them.” (“Big Four” here meaning Microsoft, Amazon, Google, and Meta.)

    But plenty of other players want to diversify into Nvidia’s world. AMD is becoming a big player in inference, where Nvidia dominates; Qualcomm is trying to get into the data center chip game, as are a slew of startup AI chip companies like Groq, which are beginning to get some traction, especially in the Middle East. Even OpenAI, which is forging tight bonds with Nvidia, is simultaneously going all in on making its own silicon as it tries to diversify away from Nvidia. Dylan Patel, who writes the widely respected SemiAnalysis newsletter, wrote in mid-November that OpenAI’s entrant may be so good that Microsoft could choose to use it over its own product, Maia. 

    80%


    Nvidia’s gross margins in FY 2025, beating AMD (50%) and Intel (30%) by a wide margin

    So the relationships are increasingly complicated as these players strike deal after deal. CoreWeave’s biggest customer by miles is Microsoft; it wants to broaden its reach to serve OpenAI and other AI startups. OpenAI meanwhile is a CoreWeave customer, a CoreWeave investor, and soon to be a CoreWeave competitor once the Stargate consortium gets up and running. 

    CoreWeave has found a hardy backer in Nvidia—but analysts worry about its mounting debt.
    Michael Nagle—Bloomberg/Getty Images

    But within this web of relationships, it’s two that Nvidia has developed—with OpenAI and CoreWeave—that have caught the attention of the “what could go wrong” crowd. “Nvidia’s been clever in propping up another ecosystem of their own so that they can compete with the big hyperscalers in data center capacity,” says the person involved in AI infrastructure. 

    To do so, Nvidia is using something akin to “vendor financing.” It’s the long-standing, totally legitimate practice of making loans to customers that spur them to buy your products—it’s how automakers boost sales by providing car loans. It works fine as long as there’s real demand for your products. 

    The Nvidia approach might be characterized as a push for “direct” or “curated” demand. That formula’s downside: The lavish financing can trigger excessive borrowing and push AI computing capacity way beyond the point where demand can catch up. However, it’s important to note that Nvidia’s campaign doesn’t amount to the notorious “roundtripping” that sank such telecoms as Nortel and Lucent, which made big loans to strapped fiber-optic purveyors so they could buy their products, even though those buyers were just piling up unused inventory. 

    So far, Nvidia’s boldest move in challenging the behemoths is its breathtaking deal with OpenAI. Huang considers the still-private inventor of ChatGPT the spearhead of the new paradigm, and by implication, Nvidia’s largest customer going forward. He recently predicted that OpenAI will become the “next multitrillion-dollar hyperscale company.” Prior to the new tie-up, OpenAI was just getting started as a hyperscaler via its participation in Stargate. It didn’t operate data centers on its own and rented capacity from the Big Four, notably Microsoft. The Nvidia partnership aims at making OpenAI a major competitor, in addition to a customer, of the giant incumbents. 

    Under the arrangement, announced in late September, Nvidia would purchase up to $100 billion in OpenAI equity, enabling it to outfit numerous data centers offering an astounding 10 gigawatts of capacity at an estimated total cost of roughly $500 billion. The build-out would happen in tiers, and for each one Nvidia provides $10 billion in financing, OpenAI throws in $40 billion, and an estimated $30 billion gets spent on Nvidia chips.
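    The tier-by-tier math can be sketched in a few lines. This is a rough back-of-the-envelope model using only the figures in the article; the 10-tier count is an inference ($100 billion in total Nvidia equity at $10 billion per tier), not something the deal terms spell out.

```python
# Back-of-the-envelope model of the tiered Nvidia/OpenAI build-out.
# All dollar figures are in billions and come from the article;
# the tier count of 10 is inferred ($100B total equity / $10B per tier).
NVIDIA_EQUITY_PER_TIER = 10   # Nvidia's equity investment per tier
OPENAI_SHARE_PER_TIER = 40    # what OpenAI must raise on its own per tier
CHIP_SPEND_PER_TIER = 30      # estimated spend on Nvidia chips per tier
TIERS = 10                    # inferred number of tiers

total_build_out = (NVIDIA_EQUITY_PER_TIER + OPENAI_SHARE_PER_TIER) * TIERS
openai_must_raise = OPENAI_SHARE_PER_TIER * TIERS
chips_back_to_nvidia = CHIP_SPEND_PER_TIER * TIERS

print(total_build_out, openai_must_raise, chips_back_to_nvidia)
# 500 400 300
```

Under these assumptions, the sketch recovers the article’s figures: a roughly $500 billion build-out, of which OpenAI must raise about $400 billion itself, with some $300 billion flowing back to Nvidia as chip orders.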

    $700 billion
    A conservative forecast of what the four big hyperscalers will spend this year and next on AI infrastructure. To garner even a decent 15% return on those expenditures alone, they’d need to pocket an extra $105 billion a year in AI profits, a number that seems unreachable.

    But Luria cautions that a lot could go wrong. OpenAI needs to find the extra $40 billion required for each installment on its own. That’s a high bar. To this point, OpenAI has succeeded in funding its operations by raising equity. But it’s now entering a new era of big capital investments at the same time it’s bleeding loads of cash; in the first half of this year, it lost $13.5 billion on just $4.3 billion in revenue. A recent report suggests the company will see operating losses of $74 billion in 2028 and not break even till 2030. Hence, it’s likely that OpenAI will need to borrow most of the additional $400 billion.

    OpenAI has now made total commitments of around $1.4 trillion for its AI infrastructure projects, including $300 billion worth to be built by Oracle (which incorporates huge purchases of Nvidia GPUs). Given the frenzy of optimism surrounding OpenAI, it may succeed in borrowing hundreds of billions to amass all those GPUs. But as Nick Del Deo, analyst at MoffettNathanson, notes, “OpenAI is an unproven business model.” Significantly, Nvidia is only an equity investor so far. It hasn’t offered its balance sheet to backstop the immense debt OpenAI will need to push the program forward. Adding one more wrinkle to the competitive landscape: OpenAI doesn’t want to be completely beholden to Nvidia, either; it is talking to other chip providers and has even begun efforts to design its own bespoke chips. 

    Nvidia secured its second marquee partnership with CoreWeave. It’s the largest by far of the “neoclouds” that are rapidly installing AI infrastructure. The former crypto miner, however, is struggling under a mountain of debt that analysts are skeptical it can “scale” out of. So Nvidia is giving the company lots of help to buy its chips—in far larger quantities than would be possible without those billions in support. Nvidia invested $250 million to bolster CoreWeave’s troubled IPO this year and holds an ownership stake of over 6%. CoreWeave, like OpenAI, is so far an “Nvidia-only” house. It already operates 33 data centers, all holding its big backer’s GPUs, and boasts a $56 billion backlog of contracts for AI infrastructure. Nvidia is also using its muscle to help CoreWeave attract promising startups that, without the chipmaker’s backing, couldn’t raise the financing to rent space from the rising hyperscaler. In late September, Nvidia signed an agreement guaranteeing to purchase $6.3 billion in computing capacity if CoreWeave can’t lease or sell it. “It’s like cosigning on a loan,” says Del Deo. According to a source familiar with CoreWeave’s thinking, that support will empower the operator to provide computing capacity to such outfits as Mistral and Cohere that could evolve into big AI success stories.

    “Nvidia is in a position to prop up customers so that it’s able to grow. It’s getting more and more complicated because the ones they’re funding are weaker, and Nvidia’s enabling them to take on borrowing.”
    Lisa Shalett, Chief Investment Officer, Morgan Stanley Wealth Management

    For Nvidia, CoreWeave fits the OpenAI mold: attract purchasers that used Microsoft or Amazon centers—powered by GPUs they bought from Nvidia—to CoreWeave facilities. In an excellent 80-page report on CoreWeave released in March, Del Deo praises its technical expertise and customer service. The potential problem: When CoreWeave signs a contract with a financially sound major hyperscaler that needs lots of capacity beyond what it can handle in-house, those rental payments are “money good.” CoreWeave can raise financing, since it’s certain the client (say, Microsoft) will pay. 

    But as part of growing its own and the new Nvidia complex, CoreWeave is taking on cash-burning newcomers that lack credit ratings. The biggest case in point, in fact, is its arrangement with OpenAI itself—another indication of how deeply the Nvidia deal-grid is intertwined. The terms call for CoreWeave to supply $22.4 billion in AI infrastructure capacity, certain to run on Nvidia GPUs, over about five years. Based on the great expectations for OpenAI, CoreWeave already financed part of the arrangement through bank loans. But the borrowings were expensive at over 8%. “The deal with OpenAI and other non-investment-grade clients raises the risks for CoreWeave,” concludes Del Deo. 

    In addition, CoreWeave, like OpenAI, harbors big cash flow deficits and is borrowing heavily to finance its mushrooming footprint. Its cofounder and CEO, Michael Intrator, claims he’s driving a “race car, not minivan,” and that “debt is the fuel for this company.” But could it also be its downfall?

    A debt debacle?

    To see where the dangers lie, it’s important to grasp who owes all this debt, and to what parties. The brick-and-mortar data center shells are generally built by real estate companies, chiefly REITs. They finance the projects with construction loans provided by, say, banks or private equity firms, and lease the space to the hyperscalers. The biggest among them may use their own cash flow to buy the systems that fill the campuses, but a CoreWeave or OpenAI doesn’t have the resources and must borrow the funds from a variety of lenders.

    The new, self-assembled Nvidia galaxy is heavily populated with OpenAI and other companies that haven’t proved they can make money based on their AI offerings, but are poised to shoulder big borrowings for their AI infrastructure empires.

    For the incumbents, the picture is different: If demand from Microsoft’s enterprise and app customers proves much lower than anticipated, Microsoft will simply keep paying the lease on the building to the REIT, and the interest and principal on the loans for the tech gear to the lenders. No risk there. The new, self-assembled Nvidia galaxy, however, is heavily populated with the likes of OpenAI and startups that haven’t proved they can make money from their AI offerings, but are poised to shoulder big borrowings for creating and expanding AI infrastructure empires.

    CoreWeave and OpenAI are indeed signing leases with their AI customers to rent that forthcoming deluge of computing capacity as it comes online. But to keep renting, those customers need to mint rich profits from what’s advertised as the greatest technological leap and force multiplier for profitability ever. If that doesn’t happen, the market could get flooded with partly vacant data centers, and the GPUs that collateralized the borrowings could get reclaimed by lenders, sending all those unused Nvidia chips back onto the market. In this scenario, the new hyperscalers Nvidia backed won’t be able to pay the rent on the campus buildings, nor the interest and principal on the debt that bought the GPUs. That could wallop lending for data centers and demand for Nvidia’s GPUs.

    That scenario isn’t dissuading true believers. “Nvidia’s stayed ahead time and time again,” says one executive from a large Nvidia customer who was not authorized to speak publicly. “They keep shortening the product cycles. Everyone is playing catch-up with their continued innovation.” 

    But any clearheaded industry watcher has to at least raise an eyebrow about how much money is flooding into AI, and how little payoff there has been so far. All told, the four big players will spend around $330 billion on AI infrastructure in 2025, according to their own forecasts, and all are pledging significant increases next year. Citigroup forecasts that the figure for all hyperscalers next year will reach $490 billion. So let’s take the conservative view that the Big Four will spend $700 billion over the two-year span. To garner even a decent 15% return on those expenditures alone, they’d need to pocket an extra $105 billion a year in AI profits. That additional $100 billion–plus amounts to nearly one-third of the $350 billion in total GAAP net profits that they generated in their past four quarters. 
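    The return-on-capital arithmetic above reduces to two lines. This is a minimal sketch of the article’s own calculation, with the 15% hurdle rate taken as the article’s assumption of a “decent” return, not an industry standard.

```python
# The article's back-of-the-envelope return math, in code.
# All dollar figures are in billions.
two_year_capex = 700       # conservative Big Four AI spend, 2025-2026
hurdle_rate = 0.15         # the article's "decent" annual return assumption

required_annual_ai_profit = two_year_capex * hurdle_rate

big_four_net_income = 350  # Big Four trailing-four-quarter GAAP profits
share_of_profits = required_annual_ai_profit / big_four_net_income

print(required_annual_ai_profit)   # 105.0
print(round(share_of_profits, 2))  # 0.3
```

The extra $105 billion a year in AI profits the hurdle implies would be about 30%, or nearly one-third, of the Big Four’s combined trailing profits, which is the article’s point about how far demand would have to stretch.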

    Notes one executive who’s raised large sums for AI infrastructure in the past: “I share the concern that such massive amounts of capital are going into it without a clear line of sight on profitability. We have a sense for productivity savings and research breakthroughs. But what are the killer apps that are going to drive massive value? We don’t know yet.” This executive asks the question that Huang may ponder when he stops selling and thinks deep thoughts: “If the return on all that capital is disappointing, where does the pain go?” As long as the AI hype narrative continues, Nvidia’s money-spinning machine will keep whirring. But if the story changes, Nvidia will feel the pain—and so will its investors.

    This article appears in the December 2025/January 2026 issue of Fortune with the headline “Nvidia is invincible. Unless it isn’t.”