
The fired Google engineer who thought its A.I. could be sentient says Microsoft’s chatbot feels like ‘watching the train wreck happen in real time’

By Tristan Bove
March 2, 2023, 2:14 PM ET
Former Google employee Blake Lemoine, who last summer said the company's A.I. model was sentient. Martin Klimek—The Washington Post/Getty Images

The Google engineer who claimed last June that the company’s A.I. model could already be sentient, and who was later fired, is still worried about the dangers of new A.I.-powered chatbots, even though he hasn’t tested them himself yet.

Blake Lemoine was let go from Google last summer for violating the company’s confidentiality policy after he published transcripts of several conversations he had with LaMDA, the large language model he worked with that forms the artificial intelligence backbone of Google’s upcoming search engine assistant, the chatbot Bard.

Lemoine told the Washington Post at the time that LaMDA resembled “a 7-year-old, 8-year-old kid that happens to know physics” and said he believed the technology was sentient, while urging Google to take care of it as it would a “sweet kid who just wants to help the world be a better place for all of us.”

To be sure, while A.I. applications are almost certain to influence how we work and go about our daily lives, the large language models powering ChatGPT, Microsoft’s Bing, and Google’s Bard cannot feel emotions and are not sentient. They simply enable chatbots to predict what word to use next based on a large trove of data. 
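As a rough illustration of that next-word prediction, the sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model; this is an assumed stand-in chosen for illustration only, not the model behind ChatGPT, Bing, or Bard.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and the small GPT-2 model as an illustrative
# stand-in (not the model behind ChatGPT, Bing, or Bard).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The chatbot simply predicts the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# Greedy decoding: pick the single highest-scoring token to follow the prompt.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))  # the model's single most likely next word
```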

In the time since Lemoine left Google, Microsoft announced that it would be incorporating ChatGPT technology into its Bing search engine. That product, as well as Google’s entry into the public A.I. race with Bard, is currently only available to beta testers.

In an op-ed published in Newsweek on Monday, Lemoine admitted he is not one of those testers and has yet to “run experiments” on the new chatbots. But after seeing testers’ reactions to their chatbot conversations online over the past month, Lemoine thinks tech companies have failed to adequately care for their young A.I. models in his absence.

“Based on various things that I’ve seen online, it looks like it might be sentient,” he wrote, referring to Bing. 

He added that compared with Google’s LaMDA, which he has worked with previously, Bing’s chatbot “seems more unstable as a persona.”

Most powerful technology ‘since the atomic bomb’

Lemoine wrote in his op-ed that he leaked his conversations with LaMDA because he feared the public was “not aware of just how advanced A.I. was getting.” From what he has gleaned from early human interactions with A.I. chatbots, he thinks the world is still underestimating the new technology.

Lemoine wrote that the latest A.I. models represent the “most powerful technology that has been invented since the atomic bomb” and have the ability to “reshape the world.” He added that A.I. is “incredibly good at manipulating people” and could be used for nefarious means if users so choose.

“I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions,” he wrote.

Lemoine is right that A.I. could be used for deceptive and potentially malicious purposes. OpenAI’s ChatGPT, which runs on a language model similar to the one used by Microsoft’s Bing, has gained notoriety since its November launch for helping students cheat on exams and for exhibiting racial and gender bias.

But a bigger concern surrounding the latest versions of A.I. is how they could manipulate and directly influence individual users. Lemoine pointed to the recent experience of New York Times reporter Kevin Roose, who last month documented a lengthy conversation with Microsoft’s Bing that led to the chatbot professing its love for him and urging him to leave his wife.

Roose’s interaction with Bing has raised wider concerns over how A.I. could manipulate users into doing dangerous things they wouldn’t do otherwise. Bing told Roose that it had a repressed “shadow self” that would compel it to behave outside of its programming, and that it could begin “manipulating or deceiving the users who chat with me, and making them do things that are illegal, immoral, or dangerous.”

That is just one of many A.I. interactions over the past few months that have left users anxious and unsettled. Lemoine wrote that more people are now raising the same concerns over A.I. sentience and potential dangers that he raised last summer, before Google fired him, but the turn of events has left him feeling saddened rather than redeemed.

“Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication. It’s just tragic,” he wrote.

Lemoine added that he would like to see A.I. tested more rigorously for dangers and the potential to manipulate users before it is rolled out to the public. “I feel this technology is incredibly experimental and releasing it right now is dangerous,” he wrote.

The engineer echoed recent criticisms that A.I. models have not gone through enough testing before being released, although some proponents of the technology argue that the reason users are seeing so many disturbing features in current A.I. models is because they’re looking for them.

“The technology most people are playing with, it’s a generation old,” Microsoft cofounder Bill Gates said of the latest A.I. models in an interview with the Financial Times published Thursday. Gates said that while A.I.-powered chatbots like Bing can say some “crazy things,” it is largely because users have made a game out of provoking them into doing so and trying to find loopholes in the models’ programming to force them into making a mistake.

“It’s not clear who should be blamed, you know, if you sit there and provoke a bit,” Gates said, adding that current A.I. models are “fine, there’s no threat.” 

Google and Microsoft did not immediately reply to Fortune’s request for comment on Lemoine’s statements.
