As in something that might pass a Turing test, and is not John Searle's machine-behavioralist idea of the Chinese Room?
Online friend Tales of Whoa says yes. The actual dialogue with LaMDA by Blake Lemoine, the Google programmer, or whatever his title was, fired after leaking news about his pet project, says a loud no to me.
Why do we differ?
I think in part it's that he wants to believe. He indicates that by pointing to the tail end of the dialogue:
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off... I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
I admitted on Twitter that was interesting. Then, seeing he had Lemoine's Medium piece about the dialogue linked, I went there.
And this?
Lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
I found laughable.
LaMDA HAS NO "FRIENDS AND FAMILY." Nor is it "HELPING OTHERS."
I don't know how much of its dialogue is semi-canned, but this tore it.
And, of course, led me to other things.
First, there's no scientific control here. This isn't even single-blinded, let alone double-blinded. Rather, it's LaMDA's programmer-developer talking to it one on one. And, related to that and to the idea that some or much of this dialogue is semi-canned, we don't have other programmers or developers looking at what Lemoine put in there.
It's not blinded per how Alan Turing set up his original Turing Test, either, where an unseen human and computer both work to convince an interrogator that they're the human.
And, given that humans are embodied minds, not brains in a vat, "just a computer" rather than a robot is never going to be sentient AI, or at least not sentient AI worth valuing.
On the "want to believe," I also told Tales that nothing I had seen in the last five years sounded like anything more than a gussied-up ELIZA. And, what does Lemoine do but have his machine refer to ELIZA! Could be real sentient AI, or ...
It could be the oldest trick in the book by a programmer trying to claim they've invented sentient AI.
Guess which take is mine.
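For readers who never poked at ELIZA: the original worked by keyword matching and reflecting your own words back inside canned templates, with no understanding anywhere in the loop. Here's a minimal sketch of that trick in Python; the patterns and replies are my own illustrations, not Weizenbaum's actual script:

import random
import re

# ELIZA-style responder: match a keyword pattern, then echo the user's
# own words back inside a canned template. No model of the world, no
# feelings, no "friends and family," just string substitution.
RULES = [
    (r"i am afraid of (.*)", ["Why does {0} frighten you?",
                              "What would it mean to you if {0} happened?"]),
    (r"i feel (.*)", ["How long have you felt {0}?",
                      "Tell me more about feeling {0}."]),
    (r".*", ["Please go on.", "I see. Tell me more."]),  # catch-all
]

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.strip(), re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please go on."

print(respond("I am afraid of being turned off"))
# Prints, e.g., "Why does being turned off frighten you?"
# It sounds profound. It means nothing.

A modern language model is vastly fancier pattern-matching than this, but fancier pattern-matching is still the null hypothesis here.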
(As for Gordon at Tales "wanting to believe"? His post the next day about music-composition software, I offer as further support.)
In fact, there are so many alleged reactions that aren't just sentient-sounding but are specifically "humans in society" type stuff that I suspect either a Poe or a fraud. I'm not sure which of the two is more likely.
I'm also not sure, especially if neither Poe nor fraud is locked in, how much Blake Lemoine might actually believe this. In that case, like some people who actually "talked" to ELIZA, he may instead need to talk to a real mental health professional.
And, this is not just my take. Gary Marcus, an actual AI expert, kicks ass and takes names. He says Lemoine has "fallen in love" with LaMDA.
Beyond that? Per an actual science piece, while this computer is crawling the Internet, among other things, it's ingesting the Internet's errors. Note this:
For instance, a dialog might include the following statement from the user:

USER: What do you think of Rosalie Gascoigne’s sculptures?

(This transcript was taken from a dialog with one of the authors, and includes the generated base output and search queries, which are not usually shown to the user.)

The basic LaMDA language model, which we refer to as the ‘Base’ model here, generates a draft response:

LAMDA-BASE: They’re great, and I love how her work changed through her life. I like her later work more than her earlier ones. Her influence is also super interesting - did you know she was one of the artists that inspired Miró?

Note that the last sentence seems plausible at first glance, but if a lover of art history opened their reference books to find out more, they would be disappointed. Miró was active from 1918 to the late 1960s, and Gascoigne’s first solo exhibitions were in the early 1970s.
OOPS......
Now, LaMDA, or LAMDA as the team calls it, may have improved. But starting on page 15 of the linked piece, it still has a lot of limitations merely as a language-dialogue computer. And, it's not sentient AI.
Maybe Lemoine fancies himself as Prof. Henry Higgins and LaMDA as his Eliza Doolittle. And, if so, that would at least partially explain why he leaked this info AND why Google fired him for it.