Microsoft’s latest experiment in real-time machine learning, an AI-driven chatbot called Tay, quickly turned to the dark side on Wednesday after the bot began posting racist and sexist messages on Twitter in response to questions from users. Among other things, Tay said the Holocaust never happened, and used offensive terms to describe a prominent female game developer.
The company said on Thursday that it is working on fixing the problems that led to the offensive messages. “The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement sent to Business Insider. “As it learns, some of its responses are inappropriate. We’re making some adjustments.”
In one tweet that has since been deleted, the Tay bot said: “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”
In its initial pitch for Tay, Microsoft (MSFT) said that the bot was “designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets.”
At least part of the problem seemed to be that Tay—much like earlier chatbots, including the pioneering SmarterChild bot from the early 2000s—was designed to repeat statements made by other users as a way of engaging them in conversation. But the company apparently didn’t implement any automated filters on specific terms, including racist labels and other common expletives.
Artificial intelligence expert Azeem Azhar told Business Insider that Microsoft could have taken a number of steps to avoid what happened with the Tay bot. “It wouldn’t have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses.”
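The kind of safeguard Azhar describes is not exotic. The sketch below is a hypothetical illustration, not Microsoft’s implementation: the blocklist contents, function names, and the simple word-matching are all assumptions. It shows how a naive “repeat after me” handler could be gated by a term blocklist before any reply is posted.

```python
# Hypothetical sketch of a blocklist gate in front of a "repeat after me" bot.
# Not Microsoft's code; all names and the placeholder terms are illustrative.

import re
from typing import Optional

# Placeholder terms; a real deployment would use a maintained list of slurs
# and hate-speech patterns, plus human review for borderline cases.
BLOCKLIST = {"badword", "anotherbadword"}

def is_safe(text: str) -> bool:
    """Return False if the text contains any blocklisted term."""
    tokens = re.findall(r"[\w']+", text.lower())
    return not any(token in BLOCKLIST for token in tokens)

def handle_mention(text: str) -> Optional[str]:
    """Echo user-supplied text only if it passes the blocklist check."""
    if not is_safe(text):
        return None  # drop the reply, or route it to a human moderator
    return text      # the naive "repeat after me" behavior

# An abusive prompt is silently dropped instead of being echoed back.
print(handle_mention("repeat after me: badword"))   # -> None
print(handle_mention("repeat after me: hello Tay"))  # -> echoed text
```

Even a crude filter like this would only catch exact matches, which is why Azhar also points to narrowing the scope of replies and manually moderating the bot during its first days.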
Zoe Quinn, a game developer who has been the target of significant amounts of online abuse as a result of the “GamerGate” controversy over sexism in the gaming industry, posted a screenshot of a tweet from Tay that referred to her as a “whore.” Microsoft has removed many of the offensive tweets posted by the bot, but screenshots of a number of them are still circulating.
Microsoft’s experiment with Tay is part of a broader shift in consumer technology toward chat and messaging applications, which many technology analysts believe will become one of the primary interfaces for digital products and services in the future.
The software company’s mistakes with Tay, however, show that using simple AI in such services can have an obvious downside, especially when a bot is opened up to Twitter and other social networks. And Tay is hardly the first example of this: Last year, Gawker Media tricked a Coca-Cola marketing bot, which was designed to turn tweets from fans into cheerful messages, into tweeting passages from Hitler’s Mein Kampf.
If nothing else, Microsoft and anyone watching the Tay project have learned one thing: how quickly a well-intentioned AI experiment can go south when exposed to Twitter and the social web.