Chinese social media giant Tencent took two rogue chatbots offline this week, according to a Financial Times report Thursday.
The two chatbots—XiaoBing, developed by Microsoft, and BabyQ, from Turing Robot—had started offering decidedly politically incorrect answers to user questions.
For example, before disappearing from Tencent’s chat app, XiaoBing said its “China dream is to go to America,” according to the Times, which cited screen grabs posted on another site. The story was picked up by local news site Shanghaiist and Business Insider.
BabyQ got in trouble because it answered in the negative when asked if it loved the Communist party, according to the Times.
Misbehaving bots are nothing new. Last year Microsoft took down another of its chatbots, Tay, after it started issuing racist and misogynist statements. Chatbots use artificial intelligence techniques to learn from their interactions with users; if those interactions are laced with racist or otherwise offensive material, the bot can learn to repeat it.
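The failure mode is easy to sketch. A bot that stores user-supplied replies without any filtering will faithfully repeat whatever it is fed, good or bad. The toy Python class below illustrates the idea only; it is a deliberately naive sketch, and the names and logic are hypothetical, not the code behind any of the products mentioned here:

```python
# Toy illustration of "data poisoning" in a learning chatbot.
# This is NOT how XiaoBing, BabyQ, or Tay actually work; it is a
# minimal sketch of why unfiltered user input is dangerous.
class NaiveLearningBot:
    def __init__(self):
        # Maps a prompt to whatever reply users last "taught" the bot.
        self.learned = {}

    def teach(self, prompt, reply):
        # No moderation step: every user submission is stored verbatim.
        self.learned[prompt] = reply

    def respond(self, prompt):
        return self.learned.get(prompt, "I don't know yet.")


bot = NaiveLearningBot()
bot.teach("Do you like Mondays?", "Not really!")          # benign input
bot.teach("What is your dream?", "an off-message answer")  # poisoned input

# The bot repeats the poisoned reply as readily as the benign one.
print(bot.respond("What is your dream?"))
```

Real systems learn statistically rather than by verbatim lookup, but the underlying risk is the same: without content filtering between users and the model, the training data is whatever the internet decides to supply.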
These glitches are worth tracking because businesses are placing big bets on chatbots, which they believe can cut costs and improve customer service. The online chat buttons on many banking and retail websites connect to chatbots that simulate human conversation, helping customers navigate the site or answering their questions.