Intel CEO Brian Krzanich said Wednesday that the exponential advances in semiconductor manufacturing that enable faster and cheaper computing and storage every two years will now come closer to a rate of every two and a half years. Had Intel’s CEO said a decade or even a decade and a half ago that the company couldn’t keep cramming more transistors onto a chip at that pace, it would have shocked the tech world.
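A bit of back-of-the-envelope arithmetic shows why the cadence matters: over ten years, a two-year doubling cycle compounds to roughly 32 times the transistor density, while a two-and-a-half-year cycle compounds to roughly 16 times. The short sketch below runs that comparison; the ten-year window and the idealized doubling model are illustrative assumptions on my part, not Intel’s figures.

```python
# Back-of-the-envelope sketch: how a shift from a 2-year to a 2.5-year
# doubling cadence slows the compounding of transistor density.
# The 10-year horizon and the idealized "perfect doubling" model are
# illustrative assumptions, not figures from Intel.

years = 10

gain_two_year = 2 ** (years / 2.0)      # doubling every 2 years   -> 32x
gain_two_and_half = 2 ** (years / 2.5)  # doubling every 2.5 years -> 16x

print(f"Over {years} years:")
print(f"  2-year cadence:   about {gain_two_year:.0f}x the transistors")
print(f"  2.5-year cadence: about {gain_two_and_half:.0f}x the transistors")
```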

But today, in adjusting the formula behind what is known as Moore’s Law (named after Intel co-founder Gordon Moore), Krzanich made an announcement that is significant for Intel shareholders and customers, but far less important for the tech industry overall.

That’s because Moore’s Law is less relevant than it used to be. Putting more transistors on a chip still matters, which is why everyone was so excited about IBM’s announcement last week that it had made a super-dense 7-nanometer chip. But density is no longer the most important attribute of a chip. Cramming in more transistors mattered when all anyone wanted was faster chips that were good at one style of computing. We don’t actually live in that world anymore.

Intel’s style of computing is really good at corporate jobs on desktops and servers, tasks that proceed in a linear, step-by-step fashion: spreadsheets, word processors, and everything else associated with the term Wintel. That remains a big part of computing today. But it’s not great for a growing swath of computing jobs, such as transcoding graphics, networking, climate or seismic simulations, or the Monte Carlo simulations that trading desks run. Those jobs are parallel workloads that are handled much better by the graphics processing units (GPUs) offered by AMD and Nvidia.
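To make that serial-versus-parallel distinction concrete, here is a minimal sketch of the kind of Monte Carlo job described above, written in plain Python with names and sample counts chosen purely for illustration. Each random trial is independent of every other, so the work splits cleanly across however many workers you have, which is exactly the property that lets a GPU with thousands of simple cores outrun a handful of fast serial cores on these simulations.

```python
# Minimal sketch of an "embarrassingly parallel" Monte Carlo job:
# estimating pi by sampling random points. Each trial is independent,
# so the work splits cleanly across workers -- the same property that
# lets GPUs with thousands of simple cores excel at these simulations.
import random
from multiprocessing import Pool

def count_hits(trials: int) -> int:
    """Count random points in the unit square that land inside the quarter circle."""
    hits = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total_trials = 4_000_000
    workers = 4  # on a GPU this would be thousands of threads instead

    with Pool(workers) as pool:
        # Independent chunks: no worker ever needs another worker's results.
        hits = sum(pool.map(count_hits, [total_trials // workers] * workers))

    print("pi is roughly", 4 * hits / total_trials)
```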

This isn’t a new trend. The GPU folks were calling the end of Moore’s Law in 2010, and I was writing about it then too. Concerns about power consumption in the data center also helped drive the adoption of GPUs and research into other types of semiconductors, from novel architectures that never quite caught on to Intel’s own research into chips such as its Larrabee design.

And as mobility became a top concern, Moore’s Law took another blow. Storage inside phones still relies heavily on Moore’s Law, so the cost of memory cards and upgrades matters, although technologies such as stacking and new memory alternatives can help there as well. But when it comes to the brains of the handset, power consumption is the priority. That means the emphasis is on assigning the right task to the right chip, which is why graphics processors and specialized sensor processors such as the M7 inside the iPhone are being popped into handsets.

The emphasis isn’t on a massive, super-dense semiconductor marching from one process node to the next in step with Moore’s Law. And as we move into intensive real-time data processing and artificial intelligence, researchers are looking beyond Intel’s architectures as well. Google is trying out quantum computing, and IBM, HP, the U.S. military and others are spending on processors that mirror the human brain. Still others are trying to make computers that are right only some of the time (instead of all of the time) in an effort to cut down on power usage.

So when Krzanich said of Intel’s efforts to follow Moore’s Law, “The last two technology transitions have signaled that our cadence today is closer to 2½ years than two,” analysts paid attention because they wanted to understand when Intel would reap the benefits of its next-generation chips. But the rest of the computing world glanced back, nodded and kept its eyes on the real prize: taking us beyond the limits of Moore’s Law and the architectures built around it.
