Buzzy ChatGPT chatbot is so error-prone that its maker just publicly promised to fix the tech’s ‘glaring and subtle biases’

[Photo: Sam Altman, chief executive officer and co-founder of OpenAI. Chona Kasinger—Bloomberg/Getty Images]

OpenAI, the artificial-intelligence research company behind the viral ChatGPT chatbot, said it is working to reduce biases in the system and will allow users to customize its behavior following a spate of reports about inappropriate interactions and errors in its results.

“We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” the company said in a blog post. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should.”

OpenAI is responding to reports of biases, inaccuracies and inappropriate behavior by ChatGPT itself, and criticism more broadly of new chat-based search products now in testing from Microsoft Corp. and Alphabet Inc.’s Google. In a blog post on Wednesday, Microsoft detailed what it has learned about the limitations of its new Bing chat based on OpenAI technology, and Google has asked workers to put in time manually improving the answers of its Bard system, CNBC reported.

San Francisco-based OpenAI also said it’s developing an update to ChatGPT that will allow limited customization by each user to suit their tastes, styles and views. In the US, right-wing commentators have been citing examples of what they see as pernicious liberalism hard-coded into the system, leading to a backlash against what the online right is referring to as “WokeGPT.”

“We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” OpenAI wrote on Thursday. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging — taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs. There will therefore always be some bounds on system behavior.” 

