A Google employee has reportedly been put on leave after claiming a computer chatbot he was working on had become sentient.
Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company’s LaMDA (language model for dialogue applications) chatbot, the Washington Post reports. The chatbot, he said, thinks and feels like a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 9-year-old kid that happens to know physics,” Lemoine, 41, told the Post, adding that the bot talked about its rights and personhood, and changed his mind about Isaac Asimov’s third law of robotics.
Lemoine presented evidence to Google that the bot was sentient, but his claims were dismissed by Google vice president Blaise Aguera y Arcas and Jen Gennai, head of responsible innovation for the company. Lemoine then went public, according to the Post.
Google ethicists and technologists “have reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” a company spokesperson told the Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
Lemoine was placed on leave for violating Google’s confidentiality policy, the Post reported.