Google believes the best way to advance artificial intelligence is through new hardware technology.
The search giant debuted a new microchip on Wednesday tailored for certain types of artificial intelligence projects that require crunching enormous amounts of data. Google CEO Sundar Pichai announced the new chip during the annual Google I/O conference for developers in Mountain View, Calif.
The so-called Tensor Processing Unit, or TPU, is the latest version of a similar chip Google (GOOG) announced at last year’s Google I/O event. Google does not plan to manufacture and sell the chip like Intel (INTC) or AMD (AMD), but instead will let companies rent access to the chip via Google’s cloud computing service.
The search giant has been trying to distinguish itself as the leading cloud computing company when it comes to the red-hot field of artificial intelligence. The advent of AI techniques like deep learning has made it possible for computers to quickly learn to recognize images in photos and translate languages on the fly.
Like Microsoft, Google executives say they are indirectly “democratizing AI” for the general public by selling data-crunching services that they claim speed up computing.
“This is why I joined Google,” said Fei-Fei Li, the chief scientist of artificial intelligence for Google’s cloud unit. She said she wants “to ensure everyone can leverage AI to innovate and stay competitive,” referring to other businesses.
Jeff Dean, a co-founder of Google’s Brain research team for artificial intelligence, said that by tethering multiple TPUs together, Google was able to reduce the time it takes to train one of its language translation systems to “just six hours,” compared with a full day using chips called graphics processing units. These GPUs are typically sold by companies like Nvidia (NVDA) and AMD.
The new chip performs both of the major tasks in artificial intelligence projects: training, in which a system learns from data, and inference, in which it makes sense of new data, Dean said. The older chip could not handle training, a more heavy-duty task for which companies typically use GPUs.
Dean also said that Google would give the “top machine learning researchers” access to 1,000 free TPUs via a new cloud computing service for academics who are researching AI. In order to qualify, researchers must publish their findings and potentially release their software as open source so others can access it for free, he said.
Harvard Medical School has signed up for the new research project and will use Google’s technology to help it discover “treatments they can’t do now,” Dean said. The school did not elaborate further.
Businesses must apply to use Google’s new chips on a test basis before Google makes them publicly available at an unspecified time later this year. Google did not say how much it will charge for the chips once they officially debut.
To squeeze the best performance from the new chips, organizations will have to use TensorFlow, Google’s AI software tool kit for deep learning projects, which may be difficult for companies using competing AI software backed by Facebook, Amazon, and Microsoft. But Dean said there is “nothing stopping” those companies from making their respective AI software work well with Google’s chips.