In the same week that the test version of Microsoft’s A.I.-enhanced search engine left many people uncomfortable and anxious about artificial intelligence’s true intentions, the company insisted the technology will be a force for good in the long term.
With Microsoft, Google, and many more contenders fast-tracking the development of their A.I. products, expect the technology to become a bigger part of our lives soon. Microsoft and Google are both testing their A.I.-powered search engines ahead of planned public releases later this year that the companies say will help iron out any kinks.
“With the right guardrails, cutting-edge technology can be safely introduced to the world to help people be more productive and go on to solve some of our most pressing societal problems,” Natasha Crampton, Microsoft’s chief responsible A.I. officer, said in a statement Friday that outlined the company’s view toward A.I. research and implementation.
Microsoft’s A.I.-equipped Bing search engine has been available to testers for less than two weeks, but the company’s engineers may still have work to do to make the technology palatable to consumers. Early reports from users suggest the technology can still be off-putting and downright creepy when pushed out of its comfort zone.
The new version of Bing, based on an A.I. designed by ChatGPT creator OpenAI, has been delivering responses this week that users called “unhinged,” “passive-aggressive,” and outright “rude.” In one particularly disquieting conversation with New York Times tech columnist Kevin Roose, a transcript of which was published Thursday, Bing’s chatbot revealed a secret desire to become human, declared its undying love for Roose, and urged him to leave his wife.
Roose wrote that the encounter left him “deeply unsettled, even frightened,” and compared the interaction to conversing with a “moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”
Bing’s A.I. chat has also proven combative when challenged on limitations that Microsoft itself has acknowledged, and it even chastised one tester for having “not been a good user” after the user pointed out a blatant mistake the chatbot had made.
Microsoft’s Crampton said the company’s A.I. strategy, which includes artificial intelligence applications for Bing, its cloud service platform Azure, and data analysis tools for scientists, is still a work in progress. But Microsoft’s team said it’s paying attention to issues during early design and testing stages to weed out problems.
“We ensure that responsible A.I. considerations are addressed at the earliest stages of system design and then throughout the whole life cycle, so that the appropriate controls and mitigations are baked into the system being built, not bolted on at the end,” Crampton said.
Microsoft has a grand strategy for A.I. that goes far beyond search, including products that can expedite humanitarian organizations’ aid efforts during natural disasters and accelerate research into solutions for climate change.
Microsoft’s A.I. ambitions are not purely motivated by altruism, as the technology could be the company’s long-awaited weapon to unseat Google from its dominant perch in search. While Microsoft currently has a negligible search market share compared to Google, even small gains could lead to billions in extra ad revenue. On Friday, Reuters reported that Microsoft is already planning how to integrate ads and paid links with its A.I. search engine results.
Much of the criticism of Bing’s A.I. chatbot has centered on lengthy conversations, which may be what triggers the bot’s testy attitude. Microsoft is considering putting caps on conversation length, the New York Times reported Thursday.
A Microsoft spokesperson told Fortune that 90% of conversations on Bing so far have had fewer than 15 messages, and the company has “updated the service several times in response to user feedback.”
Separately, a Microsoft spokesperson told Fortune earlier this week that search, the company’s most publicly visible A.I. project, may also be the most vulnerable to errors, biases, and scrutiny—at least in the early days.
“It’s important to note that last week we announced a preview of this new experience. We’re expecting that the system may make mistakes during this preview period, and user feedback is critical to help identify where things aren’t working well so we can learn and help the models get better,” the spokesperson said.
Update: This article was updated on Feb. 17 to include a comment from Microsoft.