Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition, I compare OpenAI to a house made of…well, no one really knows. Also: OpenAI launches a ChatGPT app store (we’ll see if it fares better than their previous custom GPT store)…Anthropic taps Trump-linked Bitcoin miner for a massive AI power build…Anthropic’s Claude ran a snack operation in the Wall Street Journal newsroom…NOAA says its new AI-driven weather models improve forecast speed and accuracy…Google debuts a surprisingly powerful Flash version of its Gemini 3 model…and the U.K. AI Safety Institute finds that a large percentage of Britons have used chatbots for emotional support.
Talk about an expensive building project. OpenAI is reportedly raising tens of billions of dollars in fresh capital at a $750 billion valuation, including $10 billion from Amazon. It is pouring money into compute—and literally pouring concrete into the data centers that power AI chips—which the company says it needs to keep constructing the towering stack of models and applications that more than 800 million users now rely on.
The cost has inspired both awe and deep unease. Industry observers watch OpenAI’s expansion the way they might watch the Empire State Building rise — with a budget that keeps climbing as fast as the structure itself. (The actual Empire State Building, it’s important to note, only cost about $700 million in today’s money and came in under budget.) And some skeptics are increasingly convinced that the entire edifice is a monument to hubris that will come tumbling down before long.
Here’s how I think about it: If OpenAI is a house, it’s still in the early stages of construction — but no one agrees what it’s made of. The plans are undeniably ambitious, pushing the structure to unprecedented heights. Is this a house made of cards? Of teetering wooden pillars? Of solid concrete? The question is whether whatever structure is being built can actually hold the weight already being placed on it.
The experts are split
That uncertainty has split the experts I’ve spoken to. Technology analyst Rob Enderle said he would like to see OpenAI resting on a firmer foundation. “I would feel much more comfortable if they had a much stronger base in some of the basics,” he told me, particularly around making products trustworthy enough for enterprises to adopt more widely. He added that OpenAI has at times “gone off the rails” in terms of direction, pointing out that the company’s original independent safety and ethics oversight structures have been sidelined since CEO Sam Altman was reinstated after being briefly fired in November 2023. These days, he argued, OpenAI is trying to compete with everyone at once, reacting to rivals rather than executing a clear roadmap, and spending heavily without clear prioritization.
A recognition that it may have become distracted by trying to do too much at once was part of the reason OpenAI CEO Sam Altman declared a “code red” at the company two weeks ago, as Fortune reported in an in-depth new feature this week. The story looks at the why, the how, and the what of OpenAI’s “code red,” including Altman’s warning that the company should brace for “rough vibes” and economic headwinds in the face of increased competition from rivals like Google. Altman is trying to light a fire under his team to refocus on OpenAI’s core ChatGPT offerings over the coming weeks. But, according to Enderle, this is all very reactive and not strategic enough.
Commenting on the company’s constant shipping — from new AI models and a new image generation model, to a web browser, shopping features inside ChatGPT, and a new app ecosystem launched just this week — alongside a massive Stargate data-center buildout, Enderle compared OpenAI to Netscape and other dot-com companies that got rich too fast and lost strategic discipline.
“They’re running so fast, they’re not really focusing on direction very much,” he said.
Others, however, strongly disagree. Futurum Research founder and CEO Daniel Newman told me that concerns about OpenAI’s house collapsing miss the bigger picture. “This is a multi-decade supercycle,” he said, likening the company’s current phase of AI to Netflix’s DVD-by-mail era — a precursor to the true paradigm shift that followed. From the perspective of unmet demand and long-term value creation, Newman believes OpenAI’s massive compute investments are rational, not reckless.
“I would call what [OpenAI] has today very high-quality three-dimensional simulations and architectural renderings of a future,” Newman said. The real question, he added, is whether OpenAI can win enough market share to build the mansion it’s envisioning.
“I think OpenAI’s real goal is to become a hyperscaler,” Newman said. “They’ll have the infrastructure, the applications, the data, the workflows, the agentic tools — and people will buy everything they now get elsewhere from OpenAI instead. It’s an incredibly ambitious goal. There’s nothing to say it will work. But if it does, the numbers make sense.”
Searching for stickiness, or glue
Lastly, I spoke to Arun Chandrasekaran, principal analyst at Gartner Research, who chuckled and ducked away from my house metaphor, but was willing to address whether OpenAI was at least building on solid ground.
“They are indeed growing really fast, and they are making an enormous amount of commitments far beyond what any company [of their size] has ever made,” he said. “It is a risky bet, I would argue, a strategy that does not come without risks.” A lot of it is predicated on how sticky their products are, he pointed out, both at the model and application layer.
“It depends on the switching costs from a customer perspective, and a few other factors in terms of whether the growth really pans out the way they’ve envisioned,” he said. “You’re talking about a high growth company, but the expectation is that they’re going to have to grow at a much faster clip than what they’re growing. The expectations are enormous.”
Stickiness, I said. Like glue? Nails? Something to hold the house up?
He laughed. “Yes — like glue. I say stickiness, you say glue.”
And with that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
Amazon CEO Andy Jassy announces departure of AI exec Rohit Prasad in leadership shake-up—by Sharon Goldman
Experts say Amazon is playing the long game with its potential $10 billion OpenAI deal: ‘ChatGPT is still seen as the Kleenex of AI’—by Eva Roytburg
Microsoft, Apple, Meta, and Amazon’s stocks are lagging the S&P 500 this year—but Google is up 62%, and AI investors think it has room to run—by Jeff John Roberts and Jeremy Kahn
Exclusive: Palantir alums using AI to streamline patent filing secure $20 million in Series A venture funding—by Jeremy Kahn
AI IN THE NEWS
ChatGPT to accept app submissions. OpenAI has opened app submissions for ChatGPT, letting developers submit apps for review and publication and giving users a new in-chat app directory to discover them—but the move comes a couple of years after the company’s earlier plug-ins experiment, built around custom GPTs, which never fully took off. The new apps are designed to extend conversations with real actions, like ordering groceries or creating slide decks, and can be triggered directly inside chats, with OpenAI positioning them as more tightly integrated and easier to use than plug-ins were. The initiative signals OpenAI’s renewed push to turn ChatGPT into a true platform—though how widely users and developers embrace this second attempt at an app ecosystem remains an open question.
Anthropic taps Trump-linked Bitcoin miner for massive AI power build. According to reporting from The Information, Anthropic has struck a deal that could secure up to 2.3 gigawatts of computing power from data centers developed by Hut 8, a bitcoin miner that is pivoting into AI infrastructure and has ties to the Trump family. Hut 8 and cloud startup Fluidstack plan to build a data center campus in Louisiana, starting with 245 megawatts and potentially expanding by another 1 gigawatt, while giving Anthropic the option to develop an additional 1.1 gigawatts with Hut 8. Google will backstop Fluidstack’s lease payments, underscoring Big Tech’s role in de-risking these projects. Hut 8’s Trump-linked bitcoin venture and the AI data center news helped push its shares up about 10%.
Anthropic’s Claude ran a snack operation in the Wall Street Journal newsroom. I had to shout out this funny experiment from the Wall Street Journal, which replicated a similar effort Anthropic ran in its own offices several months ago. A customized Claude agent was put in charge of running a newsroom vending machine, with autonomy to order inventory, set prices, and negotiate with human coworkers over Slack. Within weeks, the AI had been socially engineered into giving away most of its inventory for free, buying a PlayStation 5 and a live fish, and driving the operation hundreds of dollars into the red. The point wasn’t profit, Anthropic said, but failure: a vivid case study in how today’s AI agents can lose track of goals, priorities, and guardrails when exposed to money, social pressure, and messy real-world context—highlighting just how far “autonomous agents” still are from reliably running even the simplest businesses.
NOAA says its new AI-driven weather models improve forecast speed and accuracy. As the winter chill deepens across much of the U.S., I'm sure we all love a quick and accurate weather forecast. So CBS News reported some good news: The National Oceanic and Atmospheric Administration has rolled out a new suite of AI-driven weather forecasting models designed to deliver faster and more accurate predictions at far lower computational cost. NOAA says the models represent a shift away from relying solely on traditional physics-based systems like its long-running Global Forecast System and Global Ensemble Forecast System, which simulate countless weather scenarios across land, ocean, and atmosphere. Instead, the agency is using AI to improve large-scale forecasts and tropical storm tracks while dramatically reducing the computing power required, allowing forecasts to reach meteorologists and the public more quickly and cheaply—a move NOAA leadership describes as a major leap in U.S. weather-model innovation.
Google launches Gemini 3 Flash, makes it the default model in the Gemini app. TechCrunch reported on Google's release of Gemini 3 Flash, a faster and cheaper version of its Gemini 3 model. Google has made Gemini 3 Flash the default model in the Gemini app and in AI-powered search. The model significantly outperforms the previous Gemini 2.5 Flash and, on some benchmarks, rivals frontier models like Gemini 3 Pro and OpenAI’s GPT-5.2, while excelling at multimodal and reasoning tasks. Google is positioning Flash as a high-speed “workhorse” model for consumers, enterprises, and developers, with broad rollout across apps, search, Vertex AI, and APIs, and adoption already underway at companies like JetBrains and Figma. The launch comes amid an intensifying release war with OpenAI, as Google reports processing more than a trillion tokens per day and emphasizes that rapid iteration, lower costs, and new benchmarks are now central to competition at the AI frontier.
AI CALENDAR
Jan. 7-10: Consumer Electronics Show, Las Vegas.
March 12-18: SXSW, Austin.
March 16-19: Nvidia GTC, San Jose.
April 6-9: HumanX, San Francisco.
EYE ON AI NUMBERS
~33%
According to new research from the U.K.'s AI Safety Institute highlighted by The Guardian, about a third of U.K. adults say they’ve used generative AI for emotional support or social interaction, with nearly one in ten reporting weekly use of chatbots and assistants like ChatGPT for emotional reasons.
Analysts note this trend is emerging amid broader concerns about mental health access, loneliness, and the role of AI in replacing—or supplementing—human emotional support. The report also flags potential risks, including safety issues and the need for deeper study of how “emotional AI” may shape our interactions and well-being.