For A.I., one-size-fits-all computer chips are a tight squeeze

November 9, 2021, 7:12 PM UTC
Fortune’s Verne Kopytoff moderates a panel with Rene Haas, then president of the IP products group at Arm, and Andy Hock, VP of product at Cerebras Systems, at Fortune Brainstorm A.I. in Boston, Nov. 8, 2021.
Rebecca Greenfield/Fortune

While computers have helped ensure that innovation is one of the few constants of modern life, the design of computer chips hasn’t changed much in decades. Neither has where they live (yes, inside the computer). But artificial intelligence could upend even those constants.

“I think we’re in the early stages of an evolutionary divergence in the chip architecture space,” said Andy Hock, vice president of product management at Cerebras Systems.

Those changes are being driven by A.I. and the increasing number of industries putting machine learning to use.

“I wouldn’t want to put a percentage on it, but if I go forward 10 years, I would imagine virtually every single chip that we’re shipping has some A.I. component to it,” said Rene Haas, president of the IP products group for semiconductor maker Arm. “There will be some level of A.I. and learning taking place on literally everything in the next 10 years.”

Last year Arm shipped “about 25 billion” chips, he said.

Discussing the amount of memory and communication resources that are needed to drive A.I., Hock described the massive sizing up that Cerebras has done on the design of one of its chips. The new chip is 8.5 inches by 8.5 inches of unbroken silicon.

“The compute demands for large-scale deep learning are growing exponentially such that state-of-the-art models often take days or weeks or sometimes even months of compute time to train to convergence, even on very capable large-scale clusters of processors,” said Hock.

The larger chip will “make processing more efficient,” Hock said, so users can “compute not only at higher performance but unlock that performance with greater efficiency.”

But corporate accounts need not worry that their companies’ existing computers will get the heave-ho to fit the new chips. Cerebras’ system team “took a systems engineering approach” and built new packaging, power delivery technology, and a cooling system, Hock explained, so that the “wafer scale engine” could fit into the standard rack-mountable devices already sitting in a lab’s cluster or a cloud facility.

But supersizing isn’t the only way forward.

Arm has been working with some companies that have been developing low-power edge devices. Haas gave an example of a tiny battery-powered sensor that could not only detect particles in the air but also “do some level of machine learning around the data.”

Even four or five years ago, he added, conversations about that type of device wouldn’t have happened.

“People thought IoT was a receiver or a transmitter and a small computer, but doing machine learning on the edge device? That wasn’t something people talked about,” said Haas. “Now we’re just seeing all kinds of demand for those applications.”
