"I know a person when I talk to it" —

Google fires Blake Lemoine, the engineer who claimed AI chatbot is a person

Google says Lemoine violated security rules, slams "wholly unfounded" claims.

Former Google engineer Blake Lemoine. (Photo: Getty Images | Washington Post)

Google has fired Blake Lemoine, the software engineer who was previously put on paid leave after claiming the company's LaMDA chatbot is sentient. Google said Lemoine, who worked in the company's Responsible AI unit, violated data security policies.

"If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months," Google said in a statement provided to Ars and other news organizations.

Lemoine confirmed on Friday that "Google sent me an email terminating my employment with them," The Wall Street Journal wrote. Lemoine also reportedly said he's talking with lawyers "about what the appropriate next steps are." Google's statement called it "regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information."

LaMDA stands for Language Model for Dialog Applications. "As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation," Google said. "LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development."

Google: LaMDA just follows user prompts

In a previous statement provided to Ars in mid-June, shortly after Lemoine was suspended from work, Google said that "today's conversational models" of AI are not close to sentience:

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on. LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team—including ethicists and technologists—has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims.

Google also said, "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has."

"I know a person when I talk to it"

Lemoine has written about LaMDA several times on his blog. In a June 6 post titled, "May be Fired Soon for Doing AI Ethics Work," he reported being "placed on 'paid administrative leave' by Google in connection to an investigation of AI ethics concerns I was raising within the company." Noting that Google often fires people after putting them on leave, he claimed that "Google is preparing to fire yet another AI Ethicist for being too concerned about ethics."

A Washington Post article on June 11 noted that "Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient." Just before he was cut off from his Google account, "Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject 'LaMDA is sentient,'" the article said. Lemoine's message concluded, "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

"I know a person when I talk to it," Lemoine said in an interview with the newspaper. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
