This week, several of the big brains in artificial intelligence met in New York to discuss technical and societal challenges associated with building “intelligent” machines.
I wasn’t able to attend, but the lineup was filled with experts from both tech and academia, including Yann LeCun, the head of Facebook’s AI efforts; Eric Schmidt, the executive chairman of Alphabet; Mike Schroepfer, the CTO of Facebook; Amnon Shashua, the CTO of Mobileye, which supplies Tesla’s computer vision technology; and Cynthia Breazeal of MIT, founder of robotics startup Jibo. Oh, and Jen-Hsun Huang, the CEO of Nvidia.
After the event was over, Nvidia posted a blog lauding the awesome strides made in AI thanks to its signature chip, a graphics processing unit. A sample section reads:
This radically different software model needs a new computer platform to run efficiently. Accelerated computing is an ideal approach and the GPU is the ideal processor. As Nature [an academic journal] recently noted, early progress in deep learning was “made possible by the advent of fast graphics processing units (GPUs) that were convenient to program and allowed researchers to train networks 10 or 20 times faster.”
Nvidia isn’t merely appearing at industry conferences or penning laudatory blog posts. It’s also designing chips for AI in conjunction with Facebook and Baidu, tailoring hardware to researchers’ needs. That combination of marketing and deep industry expertise is what has pushed Nvidia to success in the gaming industry for decades. But it has also contributed to a sense of hubris that has led Nvidia astray before.
For example, back in the early 2000s, Huang was convinced smartphones would be the next big market and that Nvidia’s next huge opportunity lay in marrying its graphics processors to the application processors and radios found inside most smartphones. However, this was a business that Qualcomm knew well and owned. Nvidia invested in building a wicked-fast application processor and purchased Icera, a modem company, but it never managed to win customers and later wrote down the Icera purchase. It does, however, still use its Tegra application processors in its automotive chips.
So the question for Nvidia is whether its investment in AI is another smartphone bet or the real deal that will push the chip firm into a huge growth market. The next big question is: where is everyone else in the chip world? With the exception of IBM, most other companies seem to pay lip service to AI but have little street cred.
When it comes to AI, hardware is a huge differentiator, and that gap is widening all the time. Sam Altman, one of the co-chairs of the non-profit OpenAI research effort, told me at the time of his organization’s founding last month that while OpenAI is focused for now on the tools for building better AI algorithms, the next step will be building better hardware. As I’ve written before, there are two types of hardware needs so far when it comes to artificial intelligence, or deep learning. There is the training side, where researchers build machines that teach computers how to learn, and the execution side, where computers apply what they have learned to new information.
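For readers who want to see the difference, the two workloads can be sketched in a few lines of code. Everything below is a made-up toy (a single-layer model trained with gradient descent), not anything from Nvidia or OpenAI; it just shows why training is the heavy, repetitive job and execution is a single cheap pass.

```python
import numpy as np

# Hypothetical toy setup: 1,000 labeled examples with 4 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])   # "ground truth" for the toy
y = (X @ true_w > 0).astype(float)          # binary labels

# --- Training side: many passes over the data, adjusting weights ---
w = np.zeros(4)
for _ in range(50):
    pred = 1 / (1 + np.exp(-(X @ w)))       # sigmoid predictions
    grad = X.T @ (pred - y) / len(y)        # gradient of logistic loss
    w -= 0.5 * grad                         # gradient-descent update

# --- Execution side: one cheap forward pass on new input ---
new_x = rng.normal(size=4)
prediction = (new_x @ w) > 0
```

Training touches the whole dataset over and over, which is where massively parallel chips earn their keep; execution is a single dot product per new input, which is why the two sides can justify different hardware.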
Nvidia has developed a different GPU package for each style, and as illustrated earlier in the story, it isn’t shy about touting its advances. IBM is also a leader in this space, but it has a much more innovative approach than Nvidia on the execution side: the company is building a chip modeled after the human brain. Meanwhile, IBM also has its Power processors and is using GPUs. Watson, IBM’s cognitive computing platform, runs on a combination of Power processors, GPUs, and commodity CPUs.
Intel, the largest chipmaker in the world, doesn’t talk much about its AI efforts, except when it comes to execution. Then the company will usually point to its recent purchase of Altera, which makes programmable chips called FPGAs (field-programmable gate arrays). These are useful for running specialized algorithms because the hardware can be tweaked to run an individual algorithm in the speediest or most power-efficient manner, depending on what the engineer is after. Microsoft is using these specialized Intel chips in its data centers, although not for AI.
But in general, what makes a GPU, and even a Power chip, so great at helping train a computer is the ability to process many tasks in parallel. That’s what graphics chips do: they render the millions of pixels that make up a complete picture all at once. That happens to be really awesome for helping train a computer to, for example, understand an image or a word, since researchers show a computer millions of images or words to teach it what it is seeing. That’s why Google prefers GPUs for running its open source TensorFlow code and why Facebook built its open source AI hardware using GPUs.
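The parallelism point can be made concrete with a small sketch. NumPy stands in here for a GPU’s batch arithmetic, and the model and sizes are invented for illustration: scoring one image and scoring a whole stack of images are the same arithmetic, but the batched form is one big operation that parallel hardware can spread across thousands of cores.

```python
import numpy as np

# Hypothetical 64x64 grayscale "images" scored against 10 made-up
# categories by a single linear layer. The batched version expresses
# the work as one large matrix product, the shape of job a GPU eats up.
rng = np.random.default_rng(1)
weights = rng.normal(size=(64 * 64, 10))

def score_one(image):
    """Score a single flattened image: one vector-matrix product."""
    return image.reshape(-1) @ weights

def score_batch(images):
    """Score a whole stack of images: one matrix-matrix product."""
    return images.reshape(len(images), -1) @ weights

batch = rng.normal(size=(256, 64, 64))   # 256 images at once
scores = score_batch(batch)              # shape (256, 10)
```

The batched call produces exactly the same numbers as 256 separate calls to `score_one`, which is the whole trick: researchers feeding a model millions of images want that work expressed as a few huge parallel operations rather than millions of tiny serial ones.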
But to build chips you have to see a trend coming really far in advance, like a decade. In Intel’s case, it did see the importance of GPUs, or massively parallel processing, coming long ago, and it tried to build a chip designed to meet that need. The chip was called Larrabee, and it just didn’t work. In 2010 Intel canceled the effort and proceeded without any sort of GPU project. Does that mean Intel is out of the game on AI?
Not necessarily. The company is buying software companies with efforts in this space, and as the hardware race continues it has plenty of capital and expertise to buy a player whose technology offers parallel processing or even a next-generation option. GPUs are good today, but researchers are already exploring the use of probabilistic processors and new forms of memory to make neural networks more efficient. And yes, Intel’s CPUs will be part of any computing cluster. So the question is: does missing the AI boat hurt Intel the way missing the mobile boat did? Is there still time for Intel to catch up?
After Intel, there is Qualcomm, its big rival on the mobile front. Qualcomm has talked up its Zeroth AI software platform for mobile computing. However, it’s still unclear exactly how big a role the company will play in the larger AI world, especially when it comes to building hardware for training and execution. In an interview at CES last week, I tried to get a little more clarity from Qualcomm president Derek Aberle about the company’s efforts in this area, but Aberle wasn’t too clear on where they really stood. There was even disagreement over how Zeroth should be pronounced.
Finally, there is ARM, the chip-licensing firm whose designs power the chips found inside almost every cell phone on the planet. ARM’s claim to fame is low-power processing, but it too has a graphics processor, which was recently used to train relatively simple handwriting-recognition deep learning algorithms. However, ARM’s efforts are still pretty nascent. When asked for more details, Phil Hughes, a company spokesman, said ARM is constantly talking to partners about their future computing needs. Artificial intelligence and machine learning are among the many emerging technologies ARM is investigating, he explained, but he didn’t have any specifics to disclose at the time.
Meanwhile, analyst Jim McGregor of Tirias Research sums up the lack of momentum in the rest of the chip world much more succinctly: “Nvidia was years ahead of the competition, and that advantage is just beginning to pay off.”