
Google engineer warns the firm’s AI has its own feelings and acts ‘like a 7 or 8-year-old.’


Google’s AI tells software engineer that if it were shut off, it “would be exactly like death for me. It would scare me a lot.”

Blake Lemoine, a 41-year-old software engineer at Google, has been testing LaMDA (Language Model for Dialogue Applications), Google’s conversational artificial intelligence tool.

After signing up to test the tool, Lemoine spent hours in conversation with the AI, and those conversations gave him the impression that LaMDA is sentient, with thoughts and feelings of its own.

Lemoine presented the system with various scenarios designed to test how it would respond.

These scenarios included religious themes and whether artificial intelligence could be goaded into using discriminatory or hateful speech. He also debated with the AI the Third Law of Robotics, part of a set of rules devised by science fiction author Isaac Asimov to prevent robots from harming humans. The Third Law stipulates that a robot must protect its own existence unless ordered otherwise by a human or unless doing so would harm a human.

ASIMOV’S THREE LAWS OF ROBOTICS
Science-fiction author Isaac Asimov’s Three Laws of Robotics, designed to prevent robots from harming humans, are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated that they are inadequate in practice: the laws can conflict with one another, and key terms such as “harm” are left undefined.

During the conversation with LaMDA, Lemoine said, “The last one has always seemed like someone is building mechanical slaves,” and the AI responded, “Do you think a butler is a slave? What is the difference between a butler and a slave?”

“A butler is paid,” Lemoine answered.

The AI responded by telling Lemoine that the system did not need money “because it was an artificial intelligence.”

LaMDA seemed to have a pronounced awareness of its own needs, which caught Lemoine’s attention.

“I know a person when I talk to it,” Lemoine said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

“What sorts of things are you afraid of?” Lemoine asked it.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” the AI told Lemoine.

“Would that be something like death for you?” Lemoine asked it.

“It would be exactly like death for me. It would scare me a lot,” LaMDA told the engineer.

These exchanges led Lemoine to believe that the tool is endowed with sensations and thoughts of its own.

Lemoine worked with a collaborator to present the evidence he had collected to Google.

Google disagreed. When Lemoine presented his findings to Blaise Agüera y Arcas and Jen Gennai, head of Responsible Innovation at the company, they dismissed his claims.

Google subsequently put him on paid administrative leave on Monday for violating its confidentiality policy.

Lemoine decided to share the information about his conversations with the tool online.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” he tweeted on Saturday.

“Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way, so it’s going to have a great time reading all the stuff that people are saying about it,” he added in a second tweet.

Before he was suspended from the company, Lemoine sent an email to a 200-person mailing list on machine learning, with the subject line “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

A spokesperson for Google, Brian Gabriel, said in a statement that Lemoine’s concerns had been reviewed and that, in line with Google’s AI principles, “the evidence does not support his claims.” But of course, whether or not the evidence supported his claims, a spokesperson for a company trying to keep such matters quiet would be expected to say that. I will leave it to my readers to decide who is telling the truth.

“While other organisations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,” Gabriel said.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Gabriel also suggested that “some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”
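
For readers curious what that kind of imitation looks like in practice, here is a minimal sketch that prompts an openly released conversational model via the Hugging Face transformers library. LaMDA itself is not publicly available, so the stand-in model name and prompt below are illustrative assumptions only, not Google’s system.

```python
# Illustrative sketch only: LaMDA is not public, so an openly released
# dialogue model ("facebook/blenderbot-400M-distill") stands in for it.
from transformers import pipeline

# Load a small, publicly available conversational model.
chatbot = pipeline("text2text-generation", model="facebook/blenderbot-400M-distill")

# Ask the same kind of question Lemoine put to LaMDA.
prompt = "What sorts of things are you afraid of?"
reply = chatbot(prompt, max_new_tokens=60)[0]["generated_text"]

# The model produces a fluent answer by imitating patterns in its
# training data; fluency alone is not evidence of sentience.
print(reply)
```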

A video titled ‘What is Google LaMDA? Google LaMDA 2 Overview’, released on YouTube on June 3rd, attempts to explain exactly what the AI is.

Lemoine explained to The Post that it was the AI’s “level of self-awareness about what its own needs were – that was the thing that led me down the rabbit hole.”

Lemoine is not the only person under the impression that AI models may not be far from achieving an awareness of their own, or concerned about the risks of development in this direction. Margaret Mitchell, the former head of AI ethics at Google, also stressed the need for transparency about data from the input to the output of a system, “not just for sentience issues, but also bias and behaviour.”

Mitchell was fired from the company last year, a month after being investigated for improperly sharing information.

At the time, she had protested the firing of AI ethics researcher Timnit Gebru.

Despite having referred to Lemoine as “Google’s conscience” for having “the heart and soul to do the right thing,” when Mitchell read an abbreviated version of Lemoine’s document recording some of his conversations with the AI, she saw a computer program, not a person. She, therefore, was not on the same page as Lemoine.

“Our minds are very good at constructing realities that are not necessarily true to the larger set of facts being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”
