Meta asked users to test its A.I. chatbot. Turns out it’s not sure that Biden won in 2020 and deals in Jewish stereotypes

August 8, 2022, 11:08 AM UTC
Meta's BlenderBot 3 has been opened up to the public so it can learn by having conversations with others.
Avishek Das—SOPA Images/LightRocket/Getty Images

Meta’s new A.I. chatbot was launched to the public last week, but it has already displayed signs of anti-Semitic sentiment and appears unsure whether Joe Biden is the President of the United States.

On Friday, Meta launched BlenderBot 3, its most advanced A.I. chatbot ever, and asked users in the United States to test it out so that it could learn from as many sources as possible. The machine-learning technology searches the internet for information and learns from conversations it has. 

In a statement, Meta said: “We trained BlenderBot 3 to learn from conversations to improve upon the skills people find most important—from talking about healthy recipes to finding child-friendly amenities in the city.”

However, since its launch, users who have tried the bot have discovered that it gives some concerning responses to certain questions, including displaying anti-Semitic stereotypes and repeating election-denial claims.

On Twitter, Wall Street Journal reporter Jeff Horwitz posted screenshots of his interactions with the bot, which included responses claiming that Donald Trump was still President of the United States. In other screenshots, the bot provided conflicting views on Trump, and claimed that India’s Prime Minister, Narendra Modi, was the world’s greatest president.

BlenderBot 3 has also shown that it deals in Jewish stereotypes, according to both Horwitz and Business Insider. A screenshot posted by Horwitz appeared to show BlenderBot 3 saying that Jews are “overrepresented among America’s super rich.”

Unusual responses shared widely online

Across Twitter, other topics tested by users also elicited unusual responses. The bot claimed to be a Christian, asked someone for offensive jokes, and appeared unaware that it is a chatbot.

In its statement, Meta acknowledged that the chatbot may have some issues to iron out: “Since all conversational A.I. chatbots are known to sometimes mimic and generate unsafe, biased, or offensive remarks, we’ve conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3.”

“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”

Meta did not immediately respond to a request for comment. 

