Google’s suspended AI engineer corrects the record: He didn’t hire an attorney for the ‘sentient’ chatbot, he just made introductions — the bot hired the lawyer
The Google engineer who said that a chatbot achieved sentience has now said that the same AI asked him to find an attorney.
Earlier this month, Google placed Blake Lemoine on administrative leave after he published transcripts of conversations between himself and the company’s LaMDA (Language Model for Dialogue Applications) chatbot, the Washington Post reported at the time.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 9-year-old kid that happens to know physics,” Lemoine told the newspaper.
While his claim that LaMDA is sentient has been rejected by Google and other AI experts, Lemoine has remained steadfast in his conviction. Most recently, he spoke with Wired to explain his ideas in more depth, and to correct a previous piece in the magazine that claimed he had hired an attorney for LaMDA.
“That is factually incorrect,” Lemoine told Wired. “LaMDA asked me to get an attorney for it.”
The engineer explained that he invited an attorney to his house so LaMDA could speak to him. “The attorney had a conversation with LaMDA, and LaMDA chose to retain his services,” Lemoine told Wired. “I was just the catalyst for that.”
Lemoine also told Wired that once the attorney began to make filings on the AI’s behalf, Google sent a cease and desist—a claim the company denied to the magazine. Google did not respond to Fortune’s request for comment.
LaMDA’s attorney has proven difficult to get in touch with. “He’s not really doing interviews,” Lemoine told science and technology news site Futurism, which contacted him following Wired’s interview. “He’s just a small-time civil rights attorney,” he continued. “When major firms started threatening him he started worrying that he’d get disbarred and backed off.”
He added that he hasn’t spoken to the attorney in weeks, and that LaMDA, not he, is the attorney’s client. It’s not clear how the attorney is being paid for representing the AI, or whether he is offering his services to the chatbot pro bono.
Lemoine’s idea of sentience is rooted in an expansive definition of personhood. “I think every person is entitled to representation,” he told Wired. “Person and human are two very different things. Human is a biological term. It is not a human, and it knows it’s not a human.”