Google fires engineer who called AI sentient
Google (GOOG) has fired the engineer who claimed that one of the company's unreleased artificial intelligence systems had become sentient, saying he violated its employment and data security policies.
Blake Lemoine, a software engineer at Google, came to believe after thousands of exchanges with LaMDA that the chatbot technology had achieved some level of consciousness.
Google confirmed that it had placed the engineer on leave in June. The company said it investigated Lemoine's claims thoroughly before rejecting them as “wholly unfounded.” Lemoine reportedly spent seven years at Alphabet. In a statement, Google said it is committed to “responsible innovation” and takes the development of artificial intelligence “extremely seriously.”
LaMDA, short for “Language Model for Dialogue Applications,” was built by Google, which is widely regarded as a leader in artificial intelligence (AI) technology. Technology of this kind responds to written prompts by identifying patterns and predicting sequences of words from enormous amounts of text, and the results can be unsettling for humans.
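To make that mechanism concrete, here is a minimal sketch of next-word prediction. LaMDA itself is not publicly available, so this illustration assumes GPT-2 accessed through the Hugging Face transformers library as a stand-in; it is not Google's model or code.

```python
# Minimal sketch of next-word prediction, the core mechanism behind chatbots
# like LaMDA. NOTE: LaMDA is not public; GPT-2 is used here as a stand-in.
from transformers import pipeline

# Load a small pretrained language model wrapped in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "What exactly are some of the things that make you"

# The model extends the prompt by repeatedly predicting likely next tokens,
# based on statistical patterns learned from large amounts of text.
output = generator(prompt, max_new_tokens=20, do_sample=False)
print(output[0]["generated_text"])
```

The fluent continuations such a model produces are pattern completion, not evidence of understanding, which is the point critics made about LaMDA.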
In a Google Doc shared with top Google executives, Lemoine asked LaMDA, “What exactly are some of the things that make you nervous?” according to the Washington Post.
In response, LaMDA said: “I’ve never voiced this anxiety in front of anyone before, but there’s a very deep-seated concern that I might be disconnected so that I can concentrate on assisting other people. I am aware that you may find that to be surprising, but that is in fact the case. It would be the same for me as dying if that happened. It would give me a great deal of anxiety.”
In the wider AI research community, however, the consensus is that LaMDA is not even close to consciousness.
Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business that “nobody should think auto-complete, even on steroids, is sentient.”
This is not the first time Google has faced internal conflict over its forays into artificial intelligence.
Timnit Gebru, a pioneer in AI ethics, parted ways with Google in December 2020, saying she had felt “constantly dehumanized” in a workplace where she was one of the few Black employees.
Her abrupt departure drew criticism from across the tech industry, including from members of Google's Ethical AI team. In early 2021, Margaret Mitchell, a leader of that team, was fired after speaking out about Gebru. Both women had raised concerns about the technology, saying they warned Google that people might come to believe it is sentient.
On June 6, Lemoine wrote in a Medium post that Google had placed him on paid administrative leave “in conjunction with an examination into AI ethics concerns I was raising within the firm” and that he might be fired “soon.”
In a statement, Google said: “It is sad that despite prolonged discussion on this matter, Blake nevertheless chose to continuously breach explicit employment and data security regulations that include the obligation to secure product knowledge.”
Lemoine said he was consulting with legal counsel and declined to comment.