Tech layoffs tied to AI are dominating headlines. Coders are being displaced by agents. Software headcount is shrinking. The message from Silicon Valley is that AI is restructuring the workforce in real time—and that the rest of corporate America should brace for the same.
Box CEO Aaron Levie has a message back: not so fast.
“My job these days,” Levie said Monday on a16z’s podcast, “is just bring reality to the valley, and then bring the valley to reality.” It’s a line that sounds glib until you understand what he actually means—and why the gap between AI’s impact in tech versus the broader Fortune 500 may be one of the most misunderstood economic dynamics of the moment.
Two very different worlds
The reason AI is so disruptive in Silicon Valley right now is specific to Silicon Valley: its workers are engineers, its outputs are verifiable, and its tools are flexible. When an AI agent writes code, a human can test whether the code works. When something breaks, an engineer debugs it. The feedback loop is tight, the productivity gains are measurable, and the headcount math changes accordingly.
Walk into a regional bank, a healthcare network, or a 30-year-old manufacturer, and almost none of those conditions apply. Workers are less technical. Data is scattered across legacy systems built over decades. And the consequences of an AI agent making a wrong call aren’t a failed unit test—they’re a botched claim, a miscalculated payment, or a compliance violation. “The workflows are quite different, the users are less technical, the data is much more fragmented, the systems are much more legacy,” Levie said.
That’s not a temporary lag that will resolve itself in a few quarters. It’s a structural difference that could take years to close.
The mandate problem
Making things worse: many large companies are trying to force AI adoption from the top down, with predictably poor results. Boards pressure CEOs. CEOs hire consultants. Centralized AI initiatives launch without buy-in from the people who’d actually use them. Martin Casado, general partner at a16z, described the failure mode with some frustration: “They have some centralized project that — nobody knows how it works. They haven’t aligned their operations, and those things will fail.”
That failure mode has a cultural dimension too. May Habib, CEO of AI platform Writer, recently described Fortune 500 executives as having a “collective panic attack” about AI’s implications—a vivid illustration of the kind of reactive, top-down pressure Casado is describing.
The desperation to show progress has produced some genuinely strange outcomes. Levie recounted being told by an employee at a large company—he didn’t name it—that workers there are being measured on AI adoption by token usage, the computational units that run through AI models. The result: employees have set agents to perform “useless tasks” purely to hit their numbers. It’s a near-perfect illustration of Goodhart’s Law — as soon as a measure becomes a target, it ceases to be a good measure — and of how far some organizations are from meaningful AI transformation.
The wall no model can climb
Even well-run enterprise AI programs collide with the same structural obstacle: integration. Steven Sinofsky, the former top Microsoft executive, now a board partner at a16z, put it plainly. “Any enterprise of a thousand people or more—or that’s older than 10 years—is just a mass of stuff sitting there waiting to be integrated,” he said. “AI actually doesn’t help to integrate anything.”
What that means in practice: AI agents, like any new employee, need access to the right systems and data to do useful work. In most large companies, that access is informal, undocumented, and navigated through relationships. A human worker figures it out by asking a colleague. An AI agent has no colleague to ask. Until companies do the hard, expensive, unsexy work of cleaning up their data and modernizing their access controls, agents will keep hitting walls.
That helps explain why enterprise AI adoption looks wide but shallow: 72% of enterprises have at least one AI workload in production as of Q1 2026, up from 55% in 2024—but only 28% describe their AI adoption as “mature.” Just 38% of employees use generative AI daily, even as 65% of enterprises claim to use gen AI regularly. The gap between what companies say they’re doing with AI and what’s actually happening on the ground is enormous.
A bellwether from Salesforce
One major company is betting that meeting agents where they are—rather than forcing them through legacy human interfaces—is the path forward. Salesforce launched “Headless 360” last month, making its entire platform—data, workflows, and business logic—accessible to AI agents without a browser or human UI. CEO Marc Benioff framed it bluntly at the company’s TDX developer conference: “No browser required. Our API is the UI.”
Levie sees it as a harbinger. If enterprise software is rebuilt to be consumed by agents rather than humans, the addressable market for “users” expands by orders of magnitude—and the integration wall gets lower. But that rebuild is still largely ahead of us, not behind us.
The jobs math, inverted
Here’s where Levie’s argument gets most interesting—and most at odds with the prevailing Silicon Valley narrative on jobs. In the narrow slice of the economy that looks like a tech company, AI-driven displacement is real. But in the broader Fortune 500, Levie says the math actually runs the other way: more AI-generated code means more complex systems, which means more engineers are needed to manage them when things go wrong.
“The funniest concept is that the more code we write, the less we would need engineers,” Levie said. “It would be the opposite, because now your systems are even more complex than before—which means you’re going to be running into even more challenges when you need to do a system upgrade, or when there’s downtime, or when there’s a security incident.”
It’s a historically grounded point. The internet didn’t shrink IT departments—it built them. Cloud computing didn’t displace systems integrators—it created a generation of them. The workers getting squeezed today are concentrated in a particular kind of role, at a particular kind of company, in a particular geography.
Wall Street, notably, isn’t waiting for the debate to resolve. Of 28 tech companies that announced AI-related layoffs this year, 17 saw their stock prices rise on the day of the announcement—a signal that investors are actively rewarding headcount cuts in the sector. (Attributing such cuts to AI is a practice critics call “AI washing,” a phrase Sam Altman himself has repeated.) That dynamic has no analog in the broader Fortune 500, where AI-driven cuts remain rare enough to be newsworthy when they happen.
For everyone reading layoff headlines and wondering when the wave will reach their office: if Levie is right, the answer for most of the Fortune 500 isn’t displacement—it’s a long, painful, expensive technology upgrade. Which is a different problem entirely.
For this story, Fortune journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.