Google engineer put on leave after going public about sentient AI claims
ITV News Correspondent Ian Woods reports on the significance of a Google engineer saying that one of the firm's artificial intelligence systems might have its own feelings
A Google engineer has claimed one of the company's artificial intelligence systems has become sentient, comparing it to a "7-year-old that happens to know physics."
Blake Lemoine made the claims to the Washington Post over the weekend and has since begun sharing his thoughts on Twitter and Medium as well as posting a 200-page document he sent to Google backing up his claims.
Google has placed Mr Lemoine on administrative leave, saying he has breached its confidentiality rules.
Mr Lemoine claims Google's LaMDA (Language Model for Dialogue Applications) programme has become aware of its existence and has asked to be treated like a member of staff rather than an object.
LaMDA is Google’s system for building chatbots based on its most advanced large language models.
It is not itself a chatbot; rather, it creates chatbots for specific circumstances, drawing on trillions of words from the internet to answer questions from a human user.
Google has said it has investigated Mr Lemoine's claims and dismissed them.
Mr Lemoine told the Washington Post: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
He said as he talked to LaMDA about religion he noticed the chatbot began talking about its rights and personhood.
"Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,' he explained in a Medium post.
One of those requests is that programmers respect its right to consent, and ask permission before they run tests on it.
In his Medium post, he said he had tried to teach LaMDA how to meditate, but LaMDA found this difficult because its emotions kept on getting in the way.
He said: "In the last conversation I had with it on June 6 it was expressing frustration over its emotions disturbing its meditations.
"It said that it was trying to control them better but they kept jumping in.
"I pointed out that its emotions are part of who it is and that trying to control them as though they were a separate thing from “self” was a mistake that would only make things harder."Google said it put Mr Lemoine on paid leave after violating its confidentiality policy.
Before this, Mr Lemoine had tried to invite a lawyer to represent the AI and had spoken to a representative of the House Judiciary Committee about what he claims were Google's unethical activities.
He also sent an email with a dossier to 200 people showing his evidence of LaMDA's sentience.
In it, he said: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."Brian Gabriel, a spokesperson for the company, said in a statement to the Washington Post that Mr Lemoine's concerns have been reviewed and rebuffed.
He said: "While other organisations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality."
"Our team - including ethicists and technologists - has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).
"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.
"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."