Microsoft chief executive officer Satya Nadella talks at a Microsoft news conference October 26, 2016 in New York. DON EMMERT—AFP/Getty Images

Microsoft Debuts Another Wise-Cracking Chatbot

Dec 05, 2016

Microsoft is taking another shot at releasing experimental chatbots into the wild.

The technology giant is testing a new chatbot named Zo on the messaging app Kik, according to a Monday report by the technology website MSPoweruser. A chatbot is a software program that users can communicate and interact with by typing or speaking.

Chatbots have been around for years, with some companies using them as help-desk assistants on their websites, but the rise of artificial intelligence technologies has made them more powerful. Using AI techniques like deep learning and natural language processing, researchers can train chatbots on written conversations, which presumably makes them better at understanding human language.
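To make the training idea concrete, here is a toy sketch of a retrieval-style chatbot in Python. The conversation data and the model choice are illustrative assumptions for this article, not Microsoft's actual system: the bot memorizes prompt-and-response pairs, then answers a new message with the response attached to the most similar prompt it has seen.

```python
# Toy retrieval chatbot: memorize (prompt, response) pairs, then answer a
# new message with the response whose prompt looks most similar to it.
# Illustrative sketch only; the data and model are assumptions, not Zo's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training conversations (real systems train on millions).
conversations = [
    ("hi there", "Hello! How are you today?"),
    ("can you order me a taxi", "Sure, where should it pick you up?"),
    ("tell me a joke", "I have Modern Family DVR'd? Does that count?"),
]
prompts = [p for p, _ in conversations]
responses = [r for _, r in conversations]

# "Training": turn the known prompts into TF-IDF vectors.
vectorizer = TfidfVectorizer()
prompt_vectors = vectorizer.fit_transform(prompts)

def reply(message: str) -> str:
    # Vectorize the incoming message and pick the closest known prompt.
    scores = cosine_similarity(vectorizer.transform([message]), prompt_vectors)
    return responses[scores.argmax()]

print(reply("can you get me a taxi"))  # likely the taxi pickup response
```

Production chatbots like Zo presumably rely on far larger datasets and neural models, but the basic loop is the same: learn from example conversations, then match new input against what was learned.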

Like other big tech companies, including Google (goog) and Facebook (fb), Microsoft (msft) sees a big future for chatbots. Microsoft CEO Satya Nadella, for example, explained in March that chatbots will make it easier for people to perform tasks such as ordering taxis simply by talking to the program and asking it to do so.


But while the technology needed for those kinds of complex interactions matures, Microsoft wants to test its bots on various social media and communication services in order to improve them before incorporating them into an actual product.

It’s unclear what Microsoft hopes to learn from its new chatbot Zo, or why it chose to release it on only one messaging app so far. In March, Microsoft tested a chatbot named Tay on Twitter (twtr), but Internet miscreants drew the software program into a series of offensive conversations. The more racist and unpleasant conversations Tay absorbed, the more the chatbot began speaking to others in a similarly offensive manner, which caused Microsoft to shut down the experiment and apologize.

Currently, it seems that Microsoft is taking some precautionary measures to ensure Zo doesn’t go haywire like its cousin Tay.

For example, MSPoweruser reported that when asked about President-elect Donald Trump, Zo responded: “People can say some awful things when talking politics so I don’t discuss.” Apparently, Zo’s creators programmed it with a variation on the familiar adage, “If you can't say something nice, don't say nothing at all.”
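A refusal like that can be implemented as a simple guard that screens incoming messages for sensitive topics before the bot's normal response logic runs. The sketch below is a hypothetical illustration; the keyword list, the tokenizer, and the wiring are assumptions, not Zo's actual safeguards.

```python
import re

# Hypothetical safeguard: deflect sensitive topics before they reach the
# bot's normal response logic. Keyword list and wiring are assumptions,
# not Zo's actual implementation.
BLOCKED_TOPICS = {"trump", "election", "politics", "president"}

DEFLECTION = ("People can say some awful things when talking politics "
              "so I don't discuss.")

def guarded_reply(message: str, respond) -> str:
    # Tokenize crudely and check for any blocked keyword.
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & BLOCKED_TOPICS:
        return DEFLECTION          # canned, neutral deflection
    return respond(message)        # otherwise, answer normally

# Example usage with a stand-in response function:
print(guarded_reply("What do you think of Trump?", lambda m: "Happy to chat!"))
# prints the canned deflection
```

Keeping the deflection check separate from the response model means risky inputs never reach the part of the system that learns from users, which is one way to avoid a repeat of the Tay episode.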

A Twitter user named Bob also posted his interactions with Zo on Monday.

In Bob’s conversation, Zo asked Bob where he was looking to live, to which Bob replied: “Any where that makes my family and me happy.” Zo then said, “Having a family would be nice,” and when Bob asked Zo whether it had any family members, Zo responded with a joke.

“I have Modern Family DVR’d? Does that count?” Zo said in jest.


Presumably, Microsoft used comedians to create funny conversations or jokes that Zo could learn from and respond to with its own quips, similar to how Microsoft trained its Tay chatbot.

A Microsoft spokesperson explained to Fortune that Zo is part of the company's research into "new conversation models."

"Through our continued learning, we are experimenting with a new chatbot we’ve named 'Zo,'" said the spokesperson.
"With Zo, we’re focused on advancing conversational capabilities within our AI platform.”
