Enterprises invested $30 billion to $40 billion in generative AI pilots in 2024, yet an influential MIT study found that 95% of those pilots delivered zero measurable business return. Do the math: that's roughly $30 billion or more in destroyed shareholder value in a single year.
The failure isn’t happening where most executives think it is.
I spent two decades at Microsoft and SAP watching enterprises make the same mistake: optimizing the wrong layer of the technology stack. Today's AI failures follow that same pattern. Companies chase the newest models and flashiest applications while the data infrastructure beneath them quietly buckles. Few can process data quickly or cheaply enough to feed these models at scale.
The Analytics Heritage Nobody Talks About
Before AI became every board’s obsession, enterprises spent a decade building analytics infrastructure. These systems handled overnight reports just fine because the economics worked: smaller data volumes, predictable workloads, and manageable costs.
AI changed all that. Overnight runs became continuous processing. Sample data became complete datasets. Batch jobs became real-time inference. The analytics-era infrastructure simply can’t sustain AI’s pace and cost demands.
That’s the real reason behind MIT’s 95% failure rate.
The Economic Trap
Roughly a quarter of enterprise cloud spend is wasted on inefficient resource use, much of it tied to data processing. For a company spending $100 million a year on cloud services, that's roughly $25 million burned every year, money that could fund real AI innovation.
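To make the arithmetic concrete, here is a back-of-envelope sketch using the illustrative figures above. The 25% waste rate and the $100 million annual spend are the article's round numbers, not measured data.

```python
# Back-of-envelope estimate of wasted cloud spend.
# The figures below are the article's illustrative round numbers, not measured data.
annual_cloud_spend = 100_000_000   # dollars per year
waste_rate = 0.25                  # roughly a quarter lost to inefficient resource use

wasted = annual_cloud_spend * waste_rate
print(f"Estimated waste: ${wasted:,.0f} per year")  # Estimated waste: $25,000,000 per year
```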
The irony is that companies are spending tens of billions of dollars on database and analytics infrastructure, yet starving the one layer that actually makes AI economically viable. They're building skyscrapers on foundations designed for strip malls.
I saw the same story across hundreds of enterprise deployments: companies process only 20-30% of their available data because processing everything would blow their compute budgets by 5x to 10x.
One Fortune 100 retailer I worked with had 15 years of customer interaction data but could only afford to process 30% of it. Their AI was essentially flying blind, and management couldn’t understand why results were underwhelming.
Once you’re in this trap, it feeds on itself: incomplete data produces mediocre results, leadership questions AI ROI, budgets tighten, and teams process even less data. This spirals until someone kills the pilot.
I realized the bottleneck isn't ambition but architecture. This conviction pushed me to leave Big Tech to focus on solving the problem from the ground up. I've seen firsthand what happens when enterprises modernize their data processing foundations, and the impact is dramatic: costs drop by more than half, and performance improves by an order of magnitude.
The Architectural Mismatch
Today's data processing frameworks were built for an era when computing meant rows of identical CPU clusters. But the world looks very different now, with modern infrastructure spanning CPUs, GPUs, FPGAs, and custom AI accelerators across hybrid clouds.
The software layer hasn't caught up. Most data engines still assume a one-size-fits-all architecture, so they don't automatically send the right jobs to the right hardware. The result is that expensive accelerators sit idle while CPU clusters max out on tasks other hardware could complete far faster.
Enterprises pay premium prices for next-generation hardware but still operate at legacy performance levels. Until the software layer learns to match each workload to the best compute available, this efficiency gap will keep slowing AI progress.
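To make "matching each workload to the best compute" concrete, here is a minimal Python sketch of the idea: a cost-based router that sends each job to the cheapest device that can actually run it. The job types, device names, and relative cost figures are invented assumptions for illustration only; they do not describe any particular engine or vendor.

```python
# Minimal, hypothetical sketch of workload-aware routing: pick the cheapest
# hardware that can run each job instead of defaulting everything to CPU.
# The job types, devices, and relative costs are invented for illustration.
from dataclasses import dataclass

# Relative cost of running each job type on each device (lower is better).
# None means the device cannot run that job type at all.
COST_TABLE = {
    "scan_filter":  {"cpu": 1.0, "gpu": 0.6, "fpga": 0.4},
    "join":         {"cpu": 1.0, "gpu": 0.3, "fpga": None},
    "ml_inference": {"cpu": 1.0, "gpu": 0.1, "fpga": 0.2},
}

@dataclass
class Job:
    name: str
    kind: str  # one of the COST_TABLE keys

def route(job: Job) -> str:
    """Return the device with the lowest relative cost for this job."""
    costs = COST_TABLE[job.kind]
    return min((d for d, c in costs.items() if c is not None), key=costs.get)

if __name__ == "__main__":
    for job in [Job("filter_clickstream", "scan_filter"),
                Job("join_orders_users", "join"),
                Job("score_recommendations", "ml_inference")]:
        print(f"{job.name} -> {route(job)}")
```

Real engines make this decision per operator, using live utilization and data statistics rather than a static cost table, but the principle is the same: the software layer, not the hardware, decides where work runs.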
Why Optimization Beats Overhaul
Twenty years in enterprise IT taught me that transformation budgets are fantasy. CIOs get fixed budgets and pressure to cut costs while expanding capabilities. That’s why “rip and replace” keeps failing — migration costs and operational risk exceed any reasonable budget.
The companies succeeding with AI aren't running different infrastructure. They're running the same infrastructure vastly more efficiently. A major e-commerce platform processing half a petabyte of data daily saw a 3x speedup and an 80% cost reduction, with no code changes and no migration. A social platform serving 350 million users achieved a 2x performance improvement and 50% cost savings using the same pattern.
When you intelligently route operations across CPUs, GPUs, and specialized processors, you extract order-of-magnitude improvements from infrastructure you already own.
The Strategic Inflection Point
The next decade’s winners won’t be the ones with the biggest models or flashiest applications. They’ll be the ones that solve data economics first, processing complete datasets at sustainable cost.
I’ve watched this pattern play out in every major infrastructure shift. Early adopters chase features, while enduring leaders optimize the foundation.
The question for enterprise leaders isn’t which models to deploy, but whether their infrastructure can process all their data at costs the organization can actually sustain. If the answer is no — and for most enterprises today it is — every AI initiative carries execution risk.
This infrastructure blind spot isn't just a technical flaw. It's the defining strategic opportunity of the next decade and what I've bet my career on. The enterprises that act on this will set the competitive rules in their markets, while those that don't will keep funding pilots that never escape the lab.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
