Two months after Nvidia and OpenAI unveiled their eye-popping plan to deploy at least 10 gigawatts of Nvidia systems—and up to $100 billion in investments—the chipmaker now admits the deal isn’t actually final.
Speaking Tuesday at UBS’s Global Technology and AI Conference in Scottsdale, Ariz., Nvidia EVP and CFO Colette Kress told investors that the much-hyped OpenAI partnership is still at the letter-of-intent stage.
“We still haven’t completed a definitive agreement,” Kress said when asked how much of the 10-gigawatt commitment is actually locked in.
That’s a striking clarification for a deal that Nvidia CEO Jensen Huang once called “the biggest AI infrastructure project in history.” Analysts had estimated that the deal could generate as much as $500 billion in revenue for the AI chipmaker.
When the companies announced the partnership in September, they outlined a plan to deploy millions of Nvidia GPUs over several years, backed by up to 10 gigawatts of data center capacity. Nvidia pledged to invest up to $100 billion in OpenAI as each tranche comes online. The news helped fuel an AI-infrastructure rally, sending Nvidia shares up 4% and reinforcing the narrative that the two companies are joined at the hip.
Kress’s comments suggest something more tentative, even two months after the framework was announced.
A megadeal that isn’t in the numbers—yet
It’s unclear why the deal hasn’t been executed, but Nvidia’s latest 10-Q offers clues. The filing states plainly that “there is no assurance that any investment will be completed on expected terms, if at all,” referring not only to the OpenAI arrangement but also to Nvidia’s planned $10 billion investment in Anthropic and its $5 billion commitment to Intel.
In a lengthy “Risk Factors” section, Nvidia spells out the fragile architecture underpinning megadeals like this one. The company stresses that the story is only as real as the world’s ability to build and power the data centers required to run its systems. Nvidia must order GPUs, HBM memory, networking gear, and other components more than a year in advance, often via non-cancelable, prepaid contracts. If customers scale back, delay financing, or change direction, Nvidia warns it may end up with “excess inventory,” “cancellation penalties,” or “inventory provisions or impairments.” Past mismatches between supply and demand have “significantly harmed our financial results,” the filing notes.
The biggest swing factor seems to be the physical world: Nvidia says the availability of “data center capacity, energy, and capital” is critical for customers to deploy the AI systems they’ve verbally committed to. Power buildout is described as a “multi-year process” that faces “regulatory, technical, and construction challenges.” If customers can’t secure enough electricity or financing, Nvidia warns, it could “delay customer deployments or reduce the scale” of AI adoption.
Nvidia also admits that its own pace of innovation makes planning harder. It has moved to an annual cadence of new architectures—Hopper, Blackwell, Vera Rubin—while still supporting prior generations. It notes that a faster architecture pace “may magnify the challenges” of predicting demand and can lead to “reduced demand for current generation” products.
These admissions nod to the warnings of AI bears like Michael Burry, the investor of “The Big Short” fame, who has alleged that Nvidia and other chipmakers are overstating the useful lives of their chips and that the chips’ eventual depreciation will cause breakdowns in the investment cycle. Huang, for his part, has said that chips from six years ago are still running at full capacity.
The company also nodded explicitly to past boom-bust cycles tied to “trendy” use cases like crypto mining, warning that new AI workloads could create similar spikes and crashes that are hard to forecast and can flood the gray market with secondhand GPUs.
Despite the lack of a deal, Kress stressed that Nvidia’s relationship with OpenAI remains “a very strong partnership,” more than a decade old. OpenAI, she said, considers Nvidia its “preferred partner” for compute. But she added that Nvidia’s current sales outlook does not rely on the new megadeal.
The roughly $500 billion of Blackwell and Vera Rubin system demand Nvidia has guided for 2025–2026 “doesn’t include any of the work we’re doing right now on the next part of the agreement with OpenAI,” she said. For now, OpenAI’s purchases flow indirectly through cloud partners like Microsoft and Oracle rather than through the new direct arrangement laid out in the LOI.
OpenAI “does want to go direct,” Kress said. “But again, we’re still working on a definitive agreement.”
Nvidia insists the moat is intact
On competitive dynamics, Kress was unequivocal. Markets have lately been cheering Google’s TPU—which serves a narrower set of use cases than GPUs but requires less power—as a potential competitor to Nvidia’s GPUs. Asked whether such chips, known as ASICs, are narrowing Nvidia’s lead, she responded: “Absolutely not.”
“Our focus right now is helping all different model builders, but also helping so many enterprises with a full stack,” she said. Nvidia’s defensive moat, she argued, isn’t any individual chip but the entire platform: hardware, CUDA, and a constantly expanding library of industry-specific software. That stack, she said, is why older architectures remain heavily used even as Blackwell becomes the new standard.
“Everybody is on our platform,” Kress said. “All models are on our platform, both in the cloud as well as on-prem.”