Thursday, June 03, 2021

What Searle, Turing, AI thought experiments etc get wrong, part two

Two months ago, I talked about John Searle's famous Chinese room thought experiment and said that what Searle's critics, and to a degree Searle himself, got wrong was not thinking of consciousness as embodied cognition.

I added that embodied cognition needed to include embodied affect.

Well, I'm going to look at this more now.

Imagine that Searle's translation book has a glitch of some sort, or better yet, a lack of linguistic nuance. (I'll save some of this for a part three.)

For example, say that the literal translation of a Chinese phrase amounts to an insult in English, or vice versa. (I don't know Mandarin, but I DO know enough Spanish to offer up the word "pendejo" as an example.)

Searle's machine, or the human inside the black box, might produce a perfect literal translation, but by showing no affect at having translated something into an insult, it might give itself away.

(Per a comment at MeWe, I've always understood the Chinese room, especially with a person and book substituted for a computer, to be pretty much straight-up translation. While the book might allow idiom in a narrow sense, Searle's idea, though unexpressed, seems to rule out idiom in a broader sense. To put it another way, it seems to be about purely denotative translation, not connotative translation.)
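
To make the denotative-only reading concrete, here is a minimal sketch, in Python, of a Searle-style lookup that hands back only a literal rendering. The phrasebook, its single entry, and the function name are my own invented stand-ins for illustration, not anything from Searle or Turing; the point is just that nothing in the lookup registers, or reacts to, the insult it has just passed along.

# Hypothetical sketch of a purely denotative "Chinese room" lookup.
# The phrasebook, its entry, and the function name are my own invented
# stand-ins for illustration; nothing here comes from Searle or Turing.

PHRASEBOOK = {
    # Literal sense only; the word's insulting force is simply not recorded.
    "pendejo": "pubic hair",
}

def denotative_translate(phrase: str) -> str:
    """Return the literal rendering, with no connotation or affect attached."""
    return PHRASEBOOK.get(phrase, "<no entry in the book>")

if __name__ == "__main__":
    print(denotative_translate("pendejo"))
    # The room returns a literal answer and shows no reaction at all to
    # having just relayed an insult, which is exactly the kind of missing
    # affect that could give it away.

A connotative translator would need at minimum a second channel, something like an affect or register tag on each entry, and, per the argument here, an embodied reason to care about what that tag says.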

Turing himself hinted at something like this, at least on the matter of affect, when laying out his test:

Interrogator: In the first line of your sonnet which reads ‘Shall I compare thee to a summer’s day’, would not ‘a spring day’ do as well or better?
Witness: It wouldn’t scan.
Interrogator: How about ‘a winter’s day’? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter’s day.

But he doesn't seem to fully follow up. This is just the tip of the affective iceberg.

But, as with cognition, it's embodied.

What will it feel like, moving from Searle's room to Nagel's bat-world, for a robot (since a computer without locomotion is, if animalian rather than plant-like, a sponge at best) to "hug" steel? Wood? Human flesh? And feel not just in the tactile sense, but primarily in the affective sense? What will it feel like for a robot to "hug" another robot? What will it say?

"I love you"?

"God, I thought I was over you"?

Or, to tie linguistic nuance to affect?

Many psychologists have argued that fear, not hate, is the opposite of love. Setting aside the dubiousness of an emotion as complex as love, in both its types and its affect, having any single opposite, I'd argue that the opposite of love is often frustration. Now ... a computer or robot that wanted a shot at passing the Turing test would have to be able to participate cogently in an argument such as this.

