Amazon unveiled new artificial intelligence services to help users of its data center services build better, smarter applications. The services include tools that let developers add text-to-speech, image recognition, and the technology behind Amazon’s Alexa personal assistant to their applications.
The news came out of AWS re:Invent, Amazon’s fifth annual conference for users and partners of its cloud technology, held this week in Las Vegas. The announcements confirm some of what Fortune reported exclusively last week.
The AI services, grouped under the umbrella term Amazon AI, show the public cloud giant playing catch-up with Google and Microsoft. Both of those cloud rivals already offer AI capabilities that are seen as more advanced than what AWS has offered to date.
Amazon’s ace in the hole here is Amazon Echo, the home speaker device, and Alexa, the personal assistant AI software that runs it. The goal is to improve the brains behind Alexa and make that technology broadly available to AWS developers so that they can build smart applications using their data.
One new service, Amazon Lex, will let AWS developers “bake intelligence” into the software they create, said Matt Wood, general manager of product strategy for AWS. “This is a form of automatic speech recognition so you can build conversational, intuitive interfaces to your applications and business data,” he said.
That would make it easier to build a travel booking application that you can speak to and that would know, based on past interactions, that you prefer airline window seats and that your favorite hotel in San Francisco is the Four Seasons.
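To make that concrete, here is a minimal sketch of how such a booking app might merge a user’s stored preferences into a conversation’s unanswered slots. The bot name, slot names, and preference profile are illustrative assumptions, not part of any announced API; the commented-out boto3 call shows roughly where a Lex request would fit.

```python
# Hypothetical sketch: a travel bot that fills gaps in a conversation
# using remembered user preferences, as the article describes.
#
# With AWS credentials configured, the Lex side might look like
# (bot name and alias are made up for illustration):
#
#   import boto3
#   lex = boto3.client("lex-runtime")
#   response = lex.post_text(botName="TravelBot", botAlias="prod",
#                            userId="user-42",
#                            inputText="Book me a flight to San Francisco")

def apply_preferences(slots, preferences):
    """Fill any unanswered (None or missing) slots from a stored profile."""
    merged = dict(slots)
    for key, value in preferences.items():
        if merged.get(key) is None:  # missing or explicitly unanswered
            merged[key] = value
    return merged

# A past-interactions profile like the one the article imagines:
prefs = {"SeatType": "window", "Hotel": "Four Seasons"}
slots = {"Destination": "San Francisco", "SeatType": None}
booking = apply_preferences(slots, prefs)
print(booking)
```

The helper is pure Python, so the preference-merging logic can be tested without calling AWS at all.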
Second, there was Amazon Rekognition, an image recognition service that learns what objects are by sifting through huge libraries of digital images to help it recognize people, things, even facial expressions. It was not mentioned on stage, but it looks to be an outgrowth of Amazon’s acquisition last year of Orbeus and its image recognition technology.
Rekognition can detect objects within an image, said AWS chief executive Andy Jassy. “It can pick out an image you ask for, of a woman, a car, a steering wheel and from that can search for images of women driving a car,” he said.
It can also tell you how many people are in a given image and, based on their expressions (frowns vs. smiles), what their emotional state is. It can crunch through millions of images either in batch form (going through huge troves at a specified time) or in real time, Jassy noted. And the more images it goes through, the more accurate the service will get.
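As a sketch of what consuming such a service looks like, the snippet below takes a face-detection response (the shape shown here mirrors the per-face emotion labels with confidence scores that Rekognition-style APIs return, but the sample values are invented) and extracts a head count and each face’s dominant emotion.

```python
# Hypothetical sketch: post-processing an image-analysis response of the
# kind the article describes. The sample response below is made up;
# a real call (with credentials) might resemble:
#
#   import boto3
#   rek = boto3.client("rekognition")
#   response = rek.detect_faces(Image={"Bytes": image_bytes},
#                               Attributes=["ALL"])

def dominant_emotions(response):
    """Return the highest-confidence emotion label for each detected face."""
    labels = []
    for face in response.get("FaceDetails", []):
        emotions = face.get("Emotions", [])
        if emotions:
            top = max(emotions, key=lambda e: e["Confidence"])
            labels.append(top["Type"])
    return labels

# Invented sample: two faces, one smiling and one frowning.
sample = {
    "FaceDetails": [
        {"Emotions": [{"Type": "HAPPY", "Confidence": 96.2},
                      {"Type": "CALM", "Confidence": 3.1}]},
        {"Emotions": [{"Type": "SAD", "Confidence": 71.5},
                      {"Type": "CONFUSED", "Confidence": 20.0}]},
    ]
}
print(len(sample["FaceDetails"]))   # head count: 2
print(dominant_emotions(sample))    # ['HAPPY', 'SAD']
```

Running the same parsing over a stream of frames is how a developer could turn per-image labels into the batch or real-time sentiment counts Jassy described.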
Microsoft has demonstrated this sort of sentiment analysis technology for more than a year as has Google.
The third service, Amazon Polly, is a text-to-speech service that takes words you provide in text form and translates them into spoken audio, with some smart editing.
If you type a question about the temperature in “Wa.” or “Wash.,” it will know that you are referring to the state of Washington and will provide an MP3 audio file of your typed input spoken in one of 47 voices across 24 languages.
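A hedged sketch of what such a text-to-speech request might look like from a developer’s side: the helper below just assembles the request parameters (the voice name “Joanna” and the output format are illustrative choices), and the commented-out lines show roughly how the call and MP3 output would work with the AWS SDK for Python.

```python
# Hypothetical sketch: building a text-to-speech request of the kind
# the article describes. Voice and format choices are assumptions.

def tts_request(text, voice_id="Joanna", output_format="mp3"):
    """Assemble the parameters for a speech-synthesis call."""
    return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

params = tts_request("What is the temperature in Wash.?")
print(params)

# With AWS credentials configured, the actual call might resemble:
#
#   import boto3
#   audio = boto3.client("polly").synthesize_speech(**params)
#   with open("answer.mp3", "wb") as f:
#       f.write(audio["AudioStream"].read())
```

Note that the abbreviation “Wash.” is passed through as-is; expanding it to “Washington” is the service-side smart editing the article describes, not something the client has to do.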
The AI news was preceded by the usual array of new AWS computing options and, as expected, a new managed Aurora version of the popular PostgreSQL database.
Polly and Rekognition are now available and Lex is available in preview mode.
The general gist of what Amazon, Google, Microsoft, IBM
and others are doing here is enabling a new generation of smart software services that know a lot about you personally and about the population generally. That can enable some big productivity gains, but for many people, particularly those of a certain age, it also raises questions of how much data they really want to share with smart software and the giant companies behind it.