Nvidia Corp. said it’s offering the company’s first server microprocessors, extending a push into Intel Corp.’s most lucrative market with a chip aimed at handling the most complicated computing work. Intel shares fell more than 2% on the news.
The graphics chipmaker has designed a central processing unit, or CPU, based on technology from Arm Ltd., a company it’s trying to acquire from Japan’s SoftBank Group Corp. The Swiss National Supercomputing Centre and U.S. Department of Energy’s Los Alamos National Laboratory will be the first to use the chips in their computers, Nvidia said Monday at an online event.
Nvidia has focused mainly on graphics processing units, or GPUs, which are used to power video games and data-heavy computing tasks in data centers. CPUs, by contrast, are generalist chips that handle basic tasks like running operating systems. Expanding into this product category opens up more revenue opportunities for Nvidia.
Founder and Chief Executive Officer Jensen Huang has made Nvidia the most valuable U.S. chipmaker by delivering on his promise to give graphics chips a major role in the explosion in cloud computing. Data center revenue contributes about 40% of the company’s sales, up from less than 7% just five years ago. Intel still has more than 90% of the market in server processors, which can sell for more than $10,000 each.
The CPU, named Grace after the late pioneering computer scientist Grace Hopper, is designed to work closely with Nvidia graphics chips to better handle new computing problems, such as artificial intelligence models with a trillion parameters. Systems built around the new chip will be 10 times faster than those currently pairing Nvidia graphics chips with Intel CPUs, the company said. The new product will be available at the beginning of 2023, Nvidia said.
Nvidia is pitching the new CPU to data center owners, so-called hyperscalers such as Amazon.com Inc.'s AWS and Alphabet Inc.'s Google, as a way to harness artificial intelligence software more effectively and make better sense of the flood of data they receive.
Training a program on a trillion data points can currently take as long as a month. Grace will reduce that to three days, according to Ian Buck, an Nvidia vice president. For end users of cloud services, that will lead to computers that understand natural human language and make automated online help much more effective, he said.
The Swiss National Supercomputing Centre, which provides scientific computing services to researchers, chose Grace in part because it will increase the number of machines the center can deploy, CSCS Director Thomas Schulthess said in an interview. Arm technology is widely used in smartphones and other mobile devices, where battery-life constraints require chips to be more power-efficient. Basing its CPU on Arm's know-how will likely make Nvidia's offering attractive to data center owners facing power constraints.
Grace-based systems could help CSCS advance the state of the art in complex calculations such as weather forecasting, Schulthess said. Computers could predict a storm's track more accurately, or even provide season-long climate outlooks giving early warning of a drought, he said.
The new processors are made to work closely with Nvidia graphics chips for the data center. Faster connections, new memory chips and the ability to share memory will help the overall performance of the product, Nvidia said.