The race to build talking and listening software continued this week as Google said it has made a piece of its technology available to any developer who wants to use it. The Google Speech application programming interface (API) promises to give programmers an easier way to build speech recognition into their own software.
The resulting software could, for example, let users dictate the contents of a document rather than typing them into a computer. Developers could also build customer service applications that do a better job of recognizing the issue a caller is describing.
According to Tuesday’s Google blog post announcing the news, early users found the Google Speech API a useful way to add voice search and voice commands to applications, and to analyze speech within applications running on the broader Google Cloud Platform.
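For a sense of what "adding speech recognition" looks like in practice, here is a minimal sketch of how a developer might assemble a request for the API's public REST endpoint (`speech.googleapis.com/v1/speech:recognize`). The endpoint and field names follow Google's public documentation as we understand it, and the audio bytes are dummy placeholder data, not real speech:

```python
import base64
import json

# Placeholder audio standing in for a real 16 kHz PCM recording.
audio_bytes = b"\x00\x01" * 1600

# The REST API takes a JSON body with the recognition config and the
# audio content encoded as a base64 string.
request_body = {
    "config": {
        "encoding": "LINEAR16",      # raw 16-bit linear PCM
        "sampleRateHertz": 16000,    # must match the recording's sample rate
        "languageCode": "en-US",
    },
    "audio": {
        "content": base64.b64encode(audio_bytes).decode("ascii"),
    },
}

payload = json.dumps(request_body)
```

An authenticated POST of `payload` to the endpoint would return a JSON response containing the transcribed text and a confidence score; the point of the managed API is that the developer never touches the recognition models themselves.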
Google (GOOGL) said the API uses the same technology that powers Google Assistant, a virtual personal assistant, like Apple’s Siri, that runs on many smartphones.
Google cited two primary uses: interactive voice response (IVR) systems, such as the customer service applications mentioned above, and speech-controlled, in-car navigation systems.
David Mytton, chief executive of Server Density, a London-based tech company, said Google is giving developers many options to make their software smarter. “You can build your own speech recognition on top of Cloud ML [Google’s machine learning engine] or use pre-built systems like Cloud Speech API, which Google manages all for you. The former gives you more control and flexibility; the latter is like a managed service but is quite a lot more expensive,” Mytton said.
Making software usable without a keyboard or screen is becoming a hotly contested market.
Amazon Web Services already offers third-party developers tools for building on Amazon Echo’s Alexa voice interface.
Developers can try out the Google Speech API now for free.