Intel is bolstering its artificial intelligence efforts by acquiring Nervana Systems, a two-year-old startup considered among the leaders in developing machine learning technology.
Nervana has built an extensive machine learning system, which runs the gamut from an open-source software platform all the way down to an upcoming customized computer chip. The platform is used for everything from analyzing seismic data to find promising places to drill for oil to examining plant genomes in search of new hybrids.
Intel (INTC) declined to disclose the purchase price for the deal, which is expected to close in about one month.
After transitioning from mainframes to PCs to servers to cloud-based data centers, computing is about to make another transition, says Intel vice president Jason Waxman, who runs the data center solutions group. “Right now, we’re on the precipice of the next big wave, artificial intelligence,” he tells Fortune. “We’re already seeing real deployments.”
Most of Intel’s efforts so far have focused on adapting its popular line of Xeon general purpose computing chips for use in machine learning and artificial intelligence deployments. Last year, it released a specialized version of the chip, dubbed the Xeon Phi, which had more cores for more processing in parallel. That’s intended to beat out graphics chips from Nvidia (NVDA) and others that also have large-scale parallel processing capabilities.
Nervana brings software expertise that Intel can put to use right away, as well as designs for specialized chips whose development can be accelerated with Intel’s chipmaking know-how.
The startup’s chip design “slots in perfectly with this Intel acquisition,” says CEO Naveen Rao, who worked on developing neural networks inspired by biological brains at Qualcomm (QCOM) before co-founding Nervana in 2014. Customers will be able to try the first Nervana chip designs early next year, he says.
Nervana’s platform had been running on Nvidia chips previously. Rao says that was to get to market quickly with its software before the customized chips were ready.
A higher rate of data flow will be among the key differences between the upcoming specialized chips and both the general purpose and graphics chips widely in use today.
Machine learning systems work more like a brain, which has billions of neurons each linked to thousands of synapses all working in parallel. The systems need to read and write data from storage, sharing that data internally among chips very quickly in order to mimic the parallel operation of the neurons and synapses.
The new Nervana chips will be able to transfer data in and out at 2.4 terabytes per second with very low latency, a rate five to 10 times faster than the fastest input-output interfaces on traditional chips, Rao says.
Initially, the new chips will be aimed at cloud data centers, where big companies are making extensive use of machine learning, Intel’s Waxman says. But eventually the technology will wind its way into smart devices such as self-driving cars and wearables to fuel the growth of the Internet of things, he says.
Still, there’s plenty of competition in the race to offer the fastest and most efficient machine learning platform. The field ranges from giants like IBM (IBM), with its Watson effort, and Google’s (GOOG) DeepMind to startups like Osaro and Skymind. Intel is hoping its acquisition of Nervana will help it lead the way.