
The Next Big Challenge for Chipmakers Is AI

By Stacey Higginbotham
November 12, 2015, 12:50 PM ET

Artificial intelligence is the next big frontier in technology. While we’re still riding the wave of hype wrought by big data and the connected sensors behind constant coverage of the Internet of things, tech giants ranging from IBM to Facebook have been investing in ways to make the coming information overload manageable. IBM calls it cognitive computing. Facebook and Google call it machine learning or artificial intelligence.

No matter what you call it, it is inevitable.

This week Google made headlines by releasing the code behind its neural network software, so anyone with the necessary programming chops can experiment with their own version of AI. The software, called TensorFlow, is on GitHub for the world to see. Perhaps people will build something like Google’s Smart Reply service, which responds to emails for you, or maybe challenge a computer to Go, as Facebook apparently is doing.

Google’s Smart Reply in action. Image courtesy of Google

However, no matter your fancy, one thing is often overlooked in the discussion about artificial intelligence: the hardware it runs on. When a programmer fires up TensorFlow, the best option isn’t a traditional Intel x86 processor but a graphics processor, or GPU. More often than not, those GPUs come from Nvidia.

Nvidia announced two new graphics accelerators Tuesday, aimed at helping large companies like Facebook, Baidu, and Google develop new deep learning models and then deploy those models at massive scale without requiring huge, expensive banks of servers hooked up to their own power plant. The new processors, plus several pieces of software that will make the hardware easier to use, are part of a rush in the semiconductor world to build silicon that can handle what seemingly every company in technology believes is the next big thing on the horizon.

Nvidia’s (NVDA) new processors include the Tesla M40, the powerhouse of the two, which delivers an enormous amount of performance for researchers training their own neural networks. The second is the Tesla M4, Nvidia’s first low-power graphics processor, designed to fit into the servers of companies like Google (GOOG), Facebook (FB), or Amazon (AMZN) that need to deliver super-fast results from an AI. When you use Google’s new Smart Reply feature, for example, the Tesla M4 might eventually be the workhorse chip that scans the words in an email and runs natural language processing algorithms against it so it can suggest a few replies.

The Nvidia M40 processor for training neural networks. Nvidia

The reason this takes an entirely new style of chip, rather than the Intel processors currently whirring away inside hundreds of thousands of servers in data centers the size of football fields, is that GPUs are well equipped to handle computing jobs that can be divided into many smaller tasks. A GPU has hundreds of computing cores, whereas an Intel Xeon processor has up to 18. Training a neural network involves a lot of similar, repetitive steps that are perfect for divvying up across those cores, which makes the GPU the more efficient chip for the job.
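
The appeal of that parallelism is easy to sketch in code. A neural-network layer is essentially a batch of independent dot products, and each output can be computed without waiting on the others, which is exactly the shape of work a many-core GPU soaks up. The plain-Python toy below illustrates only the structure of the work, not real GPU code; the numbers are made up for the example.

```python
# Toy illustration: each output neuron of a layer is an independent
# dot product, so all of them could run in parallel on separate cores.
def dot(row, vec):
    return sum(w * x for w, x in zip(row, vec))

def layer_forward(weights, inputs):
    # Every call to dot() here is independent of the others --
    # a GPU would hand each one (or each chunk) to its own core.
    return [dot(row, inputs) for row in weights]

weights = [[1.0, 2.0],
           [3.0, 4.0],
           [5.0, 6.0]]   # 3 output neurons, 2 inputs
inputs = [1.0, 1.0]

print(layer_forward(weights, inputs))  # [3.0, 7.0, 11.0]
```

An 18-core Xeon can run 18 such dot products at once; a GPU with hundreds of cores can run hundreds, which is why the same training step finishes so much faster there.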

Earlier this year, Yann LeCun, director of AI research at Facebook, explained that GPUs are the preferred chips for training neural networks but said that challenges remained. Namely, he said it was too hard to get more than a few GPUs working together on a single neural network because the software to manage them wasn’t robust enough. Nvidia has addressed this with new software that lets users group more GPU cards together, which should help researchers like LeCun.
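
A common way to make several GPUs cooperate on one network is data parallelism: each card trains a copy of the model on its own slice of the batch, and the per-card gradients are averaged each step. The sketch below simulates that averaging step in plain Python with a hypothetical one-parameter model; the "devices," data, and learning rate are all made up for illustration, not drawn from Nvidia's actual software.

```python
# Hypothetical data-parallel training step for a one-parameter
# linear model y = w * x with squared-error loss. Each "device"
# gets a slice of the batch and computes a local gradient.
def local_gradient(w, batch):
    # d/dw of mean (w*x - y)^2 over this device's slice
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, batches, lr=0.1):
    grads = [local_gradient(w, b) for b in batches]  # one per device
    avg = sum(grads) / len(grads)                    # the "all-reduce"
    return w - lr * avg                              # shared update

# Four simulated GPUs, each holding one (x, y) pair where y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
batches = [data[i:i + 1] for i in range(4)]

w = 0.0
for _ in range(100):
    w = data_parallel_step(w, batches)
print(round(w, 3))  # converges toward the true slope, 2.0
```

The hard part in practice is the averaging line: synchronizing gradients across many physical cards quickly is exactly the management software LeCun said was missing.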

However, Nvidia isn’t the only company trying to make a big splash in AI with dedicated processors or software. IBM has Watson, a form of AI it calls cognitive computing. Watson was originally built to run on IBM’s Power architecture, although it can now run on SoftLayer’s cloud, which uses both x86-based and Power servers. IBM has also announced features in its next-generation Power chips, related to faster networking, that will benefit AI researchers, according to Guruduth Banavar, VP of cognitive computing at IBM Research.

IBM is also tackling another challenge beyond training neural nets and running them at large scale in a data center.

Guru Banavar, VP, Cognitive Computing Research at IBM. IBM

Both IBM (IBM) and Qualcomm (QCOM) are thinking about how to scale the awesome power of AI down to a battery-powered smartphone. Qualcomm’s answer is its Zeroth technology, which essentially runs a neural network on a chip. For example, it offers image recognition using a digital signal processor that is part of a number of chips sold as the brains inside mobile phones. Much like some of the dedicated processors for speech recognition on the iPhone and other handsets, this isn’t as impressive as what IBM is attempting.

IBM’s strategy is more ambitious. It wants to create a chip that mimics the human brain and can perform similar AI-level tasks on a smartphone. It’s using the human brain as a model because the brain is the most efficient computer we know of, running on only about 20 watts of power. In comparison, Nvidia’s new “low-power” GPU consumes between 50 and 75 watts, which would drain your battery faster than you could say neural network.
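
The arithmetic behind that battery claim is simple enough to work out. Assuming a typical smartphone battery of roughly 3,000 mAh at 3.7 volts, about 11 watt-hours (my assumed figures, not numbers from the article), a chip drawing 50 watts on its own would exhaust it in well under fifteen minutes:

```python
# Back-of-the-envelope battery drain, assuming a ~3,000 mAh, 3.7 V
# phone battery (hypothetical typical values, not from the article).
battery_wh = 3.0 * 3.7     # amp-hours * volts = watt-hours (~11.1 Wh)
gpu_watts = 50.0           # low end of the Tesla M4's stated draw

minutes = battery_wh / gpu_watts * 60
print(round(minutes, 1))   # roughly 13 minutes to a dead battery
```

Against the brain's 20-watt budget, even a "low-power" data-center GPU is more than double, which is why a brain-inspired chip is the target for phones.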

So far, the resulting synaptic chip is still at an early stage, but it does exist in silicon, which was a big deal when IBM launched it in 2014. We won’t see it in smartphones anytime soon, though. For now, the heavy lifting of AI on our battery-powered devices will be shuffled off to dedicated silicon trained on one, or maybe two, specific types of neural networks.

With IBM, Nvidia, Qualcomm, and even Micron (which has a new processor called the Automata Processor that does pattern recognition) investing in artificial intelligence and deep learning, where’s the world’s largest chipmaker?

Intel (INTC) recently made headlines with its purchase last month of an AI startup called Saffron, but the chip giant has been remarkably quiet when it comes to discussing its efforts around machine learning. The closest it comes is when justifying its $16.7 billion purchase of Altera, a company that makes programmable chips. At an event in August, Intel said it can use a combination of those programmable chips plus its own Xeon processors to run specialized algorithms such as those used to train neural networks.

This relative silence is troubling, given that silicon advancements require years of planning. Intel has surprised the community before with technologies developed in secret, such as its 3-D transistors in 2011, but in the chip community its lack of machine learning products is seen as a gaping hole in its portfolio. “In chips, we have to plan two to four years out, so you have to figure out, what are the key apps,” said Jim McGregor, principal analyst with semiconductor research firm Tirias Research. “Intel missed out on mobile. It doesn’t want to miss out on this.”

Meanwhile, as the giants in the technology world invest in machine learning, the chip world is trying to give them a silicon platform that will allow them to deliver results both in the data center and on our handsets.
