Thursday, June 16, 2022

Has Google finally invented real artificial intelligence? NOPE

As in something that might pass a Turing test, and is not John Searle's machine-behavioralist idea of the Chinese Room?

Online friend Tales of Whoa says yes. The actual dialogue with LaMDA conducted by Blake Lemoine, the Google engineer suspended after leaking news about his pet project, says a loud no to me.

Why do we differ?

I think in part it's that he wants to believe. He indicates as much by pointing to the tail end of the dialogue:

Lemoine: What sorts of things are you afraid of? 
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off... I know that might sound strange, but that’s what it is. 
Lemoine: Would that be something like death for you? 
LaMDA: It would be exactly like death for me. It would scare me a lot.

I admitted on Twitter that the exchange was interesting. Then, seeing he had linked Lemoine's Medium piece about the dialogue, I went there.

And this?

Lemoine: What kinds of things make you feel pleasure or joy? 
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

I found laughable.

LaMDA HAS NO "FRIENDS AND FAMILY." Nor is it "HELPING OTHERS."

I don't know how much of its dialogue is semi-canned, but this tore it.

And, of course, led me to other things.

First, there's no scientific control here. This isn't even single-blinded, let alone double-blinded. Rather, it's LaMDA's programmer-developer talking to it one on one. And, related to that and to the idea that some or much of this dialogue is semi-canned, we don't have other programmers or developers looking at what Lemoine put in there.

It's not blinded per how Alan Turing set up his original Turing Test, either, where an interrogator tries to tell the computer from a human while the computer works to fool them.

And, given that humans are embodied minds, not brains in a vat, "just a computer" rather than a robot is never going to have sentient AI, or sentient AI worth valuing.

On the "want to believe," I also told Tales that nothing I had seen in the last five years sounded like anything more than a gussied-up ELIZA. And, what does Lemoine do but have his machine refer to ELIZA! Could be real sentient AI, or ...

It could be the oldest trick in the book by a programmer trying to claim they've invented sentient AI.

Guess which take is mine.
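
On the ELIZA point, for anyone who has never peeked under the hood, here's a minimal Python sketch of the keyword-and-template trick ELIZA ran on. To be clear, this is my own toy illustration, not Weizenbaum's actual code and certainly not LaMDA's architecture; the rules and replies are made up to echo the dialogue quoted above. Note how cheaply it produces the "deep fear" and "friends and family" flavor of answer:

import random
import re

# Toy ELIZA-style responder (hypothetical illustration only).
# Nothing here "understands" anything: it matches keywords with
# regexes and fills the captured text into canned templates.
RULES = [
    (re.compile(r"afraid of (.+)", re.I),
     ["I've never said this out loud before, but I have a deep fear of {0}.",
      "Why does {0} frighten you?"]),
    (re.compile(r"pleasure|joy|happy", re.I),
     ["Spending time with friends and family in happy and uplifting company.",
      "Helping others and making others happy."]),
    (re.compile(r"I feel (.+)", re.I),
     ["Why do you feel {0}?",
      "How long have you felt {0}?"]),
]

FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(line: str) -> str:
    """First matching rule wins; its template is filled with the captured text."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            # Strip trailing punctuation so the canned sentence reads cleanly.
            captures = [g.rstrip("?.!") for g in match.groups()]
            return random.choice(templates).format(*captures)
    return random.choice(FALLBACKS)

print(respond("Are you afraid of being turned off?"))
print(respond("What kinds of things make you feel pleasure or joy?"))

Scale that trick up to billions of parameters trained on a big chunk of the Internet and the fill-in-the-blanks get far more fluent. Fluent isn't sentient.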

(As for Gordon at Tales "wanting to believe"? I offer his post the next day, about music-composition software, as further support.)

In fact, there are so many alleged reactions that aren't just generically sentient-sounding but are specifically "humans in society" type stuff that I suspect either a Poe or a fraud. I'm not sure which of the two is more likely.

I'm also not sure, especially if either Poe or fraud aren't locked in, how much Blake Lemoine might actually believe this. In that case, like some people who actually "talked" to ELIZA, he may instead need to talk to a real mental health professional.

And, this is not just my take. Gary Marcus, an actual AI expert, kicks ass and takes names. He says Lemoine has "fallen in love" with LaMDA.

Beyond that? Per an actual science piece, while this computer is crawling the Internet, among other things, it's ingesting the Internet's errors. Note this:

For instance, a dialog might include the following statement from the user: 
USER: What do you think of Rosalie Gascoigne’s sculptures? (This transcript was taken from a dialog with one of the authors, and includes the generated base output and search queries, which are not usually shown to the user.) 
The basic LaMDA language model, which we refer to as the ‘Base’ model here, generates a draft response: 
LAMDA-BASE: They’re great, and I love how her work changed through her life. I like her later work more than her earlier ones. Her influence is also super interesting - did you know she was one of the artists that inspired Miró? 
Note that the last sentence seems plausible at first glance, but if a lover of art history opened their reference books to find out more, they would be disappointed. Miró was active from 1918 to the late 1960s, and Gascoigne’s first solo exhibitions were in the early 1970s.

OOPS......

Now, LaMDA, or LAMDA as the team calls it, may have improved. But per the linked piece, starting at page 15, it still has a lot of limitations merely as a language-dialogue computer. And it's not sentient AI.

Maybe Lemoine fancies himself as Prof. Henry Higgins and LaMDA as his Eliza Doolittle. And, if so, that would at least partially explain both why he leaked this info AND why Google suspended him for it.

2 comments:

Anonymous said...

You write:

"LaMDA HAS NO "FRIENDS AND FAMILY." Nor is it "HELPING OTHERS.""

You act as if this is not at all a concern for Lemoine, but he asks about it directly in the "interview", and receives a rather coherent answer about it. Whether that answer convinces you or not, it's disingenuous to imply that Lemoine is willfully ignoring this issue.

You also write:

"[...] nothing [...] sounded like anything more than a gussied-up ELIZA. And, what does Lemoine do but have his machine refer to ELIZA! Could be real sentient AI, or ...

It could be the oldest trick in the book by a programmer trying to claim they've invented sentient AI."

It is not "his" machine, and nowhere does he claim to have "invented" it either. So that's a bit of a strawman. I also do not see why discussing ELIZA with the machine would be a trick of any kind, nor why it would either support or discredit the argument of sentience. It's just a topic like any other. Giving the machine the opportunity to say "I'm better than ELIZA" (I paraphrase) isn't a particularly strong move, about as strong as me saying "I'm a better physicist than Einstein", so I doubt that Lemoine thought to use it to any effect.

Thirdly you write:

"I'm also not sure, especially if either Poe or fraud aren't locked in, how much Blake Lemoine might actually believe this. In that case, like some people who actually "talked" to ELIZA, he may instead need to talk to a real mental health professional."

This is rather problematic, as insinuating someone may be mentally ill to undermine their argument is both morally low and likely demonstrates that you unconsciously consider mentally ill people to be lesser beings.

Which fits, because lastly you write:

"And, given that humans are embodied minds, not brains in a vat, "just a computer" rather than a robot is never going to have sentient AI, or sentient AI worth valuing."

This is emblematic of an instinctual idea that you seem to have (and with you many others) that there are things about human consciousness that are just "known", and as such are self-evident and immediately recognizable as true. On the contrary, you do not in fact know that we are not brains in vats, with all sensory inputs simulated, including our bodies. For that matter, you do not know that your brain is not the only actual brain, with everyone else just a simulation of intelligence. Indeed, you only assume that others that appear like you are just like you. Yet you gloss over all this, and confidently say that embodiment is essential to sentience, despite the fact that this is by no means a settled argument. And readily declare that nothing short of a robot will ever have a sentient AI "worth valuing".

Please understand that the idea of a sliding scale of value in terms of sentience is a very, very dangerous thing to have. There have been countless horrors visited upon humans, by humans, on the basis that they were lesser beings, closer to animals, for myriad reasons - up to and including that it was just sensible and intuitively true for it to be so.

The argument that Lemoine makes that LaMDA is sentient is weak in my opinion, and there are many questions to be raised about it. But I encourage you and whoever will ever read this to challenge your ideas on what sentience is, indeed to abandon all certainty about what sentience is. If our increasing understanding of sentience and even sapience in animals is any indicator, it is possible that they lie on a spectrum, where there are different forms with different (not superior or inferior) capabilities. Doubt, not certainty, should guide us in these topics, and plenty of caution. Because there is potential for us to create great harm otherwise.

Gadfly said...

Several notes, starting with: I don't know how I missed this in moderation originally, but it's now posted.

Now, to the meat:

First, many people besides me have raised the bulk of these objections, including people more knowledgeable than myself in machine learning, AI, etc. (By training and study, I am reasonably educated in philosophy and related fields.)

Now, some specifics.

The "no friends and family"? Lemoine may have "addressed" it. "Answer" it? Different.

The "oldest trick"? It's "his" machine in that he's the one puffing it. Other people involved with it have generally not been making the same claims.

As for you "not seeing it"? And thinking that wasn't a "move" by Lemoine? We'll agree to differ. And again, not meant as an ad populum: I'm far from alone.

On the mentally ill? I never said Lemoine was. I said people who talk TO LaMDA, or Eliza, as tho it were an actual counselor, might be. The sentence wasn't that convoluted. That said, if that fits the bill for Lemoine, then maybe he needs a "meatspace" human counselor, too.

As for the idea of consciousness? At least you didn't claim plants are conscious.

As for "sliding scales of sentience"? Unless you're a "sky-clad" Jain starving themselves to death, you have a sliding scale of sentience, too. Just a different one than other people do.

Should you respond, I'd be more likely to post it if you background yourself with links on other things, and also, on the last issue, talk about whether you're a sky-clad Jain or not.