It’s a fascinating report and project. AI and machine learning concepts have always intrigued me. And while it’s certainly a very convincing record, whether LaMDA is truly sentient remains questionable. Without peer review, external review, and further testing, it’s Lemoine’s opinion versus the official Google line:
Google says Lemoine’s findings do not prove sentience. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” said Google spokesperson Brian Gabriel. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
After being informed of the outcome of his submission, the Google engineer decided to publish the conversations in a Medium post. He has reportedly since been placed on “administrative leave”.
Google placed Lemoine on “paid administrative leave” on Monday after determining that he had breached the company’s confidentiality policy by publishing his conversations with LaMDA online. The company emphasized in a statement that Lemoine had been employed as a software engineer, not an ethicist.
THANK YOU!! I read that article yesterday and completely agree. Look at any politician: just because they can speak eloquently does not mean they can think eloquently.
So what you’re really saying is that, to determine whether you’re speaking to an AI or an actual human, you should use the peanut butter and pineapple/feather test?
Is this behaviour considered acceptable when talking in person too?