Noam Chomsky, a long and persistent critic of artificial intelligence, says yes, or at least that AI engages in the equivalent thereof, as related in this extended interview with The Atlantic.
If he is right, and I think he's at least in the right ballpark, this arguably explains why AI, for all its self-touting, is the biggest science and technology research failure this side of peaceful fusion power. Indeed, progress on the two shares a remarkably similar arc.
Noam Chomsky / From The Atlantic
The interview is indeed worth a read. It's in-depth, and as The Atlantic's editor-reporter notes, such an interview is rare these days, because everybody wants to interview Chomsky on political topics, not scientific ones.
Beyond his “behaviorist” comments, he suggests AI researchers, and at least some people in fields such as his own cognitive science, are still doing research on mind and intelligence at what might be called the wrong level of abstraction. It brings to mind Dan Dennett’s phrase (ironic at times, given Dennett) “greedy reductionism.”
It also brings
to mind Paul Davies’ book “The Eerie Silence,” which criticizes SETI, the
Search for Extra-Terrestrial Intelligence, for various blinders it may be
wearing in its search.
Chomsky, in the interview, also veers at least a bit into his home turf of linguistics. As part of that, he doesn’t have much good to say about Bayesian statistics.
He says there
are better ways for us to try to understand the “noise” with which we are
bombarded on a daily basis.
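For readers who want the one-line version of what’s being critiqued (my gloss, not Chomsky’s), the Bayesian approach amounts to updating the probability of a hypothesis H in light of data D via Bayes’ rule:

\[
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
\]

His point, as I read it, is that fitting the statistics of the “noise” is not the same as understanding what generates it.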
I have to agree
from a different, folk-level point of view.
To me, Bayesian
statistics seems like “the hip thing” for pop and semi-pop observers of human
cultural sociology. All it needs is a new book by Malcolm Gladwell.
From there, he ties linguistics back to cognitive science. And he hits the nail on the head, in my opinion:
It's worth remembering that with regard to cognitive science, we're kind of pre-Galilean, just beginning to open up the subject.
He notes that natural philosophy before Galileo, in the hands of the likes of Paracelsus, was quasi-experimental. It had certainly become more empirical than the philosophical speculation of the Greeks. But things like the null hypothesis, or the idea of using any particular hypothesis to direct experimentation, weren’t fully there.
Chomsky goes on to question the role of algorithms. Again, I broadly agree with him. In a far more technical way than I could, he questions the claim that mental processes in general, and especially those that would give us what we would call an intelligent consciousness, can be reduced to, or framed in terms of, algorithms. Indeed, when asked about it in relation to some particular research and mental modeling, he specifically rejects the need for algorithms.
It sounds like, in both cognitive science and artificial intelligence, Chomsky could soon become, if he isn’t already, about as controversial as he is on U.S. foreign policy.
But don’t stop there. Chomsky, getting into his theories of language, and with a nod to Wittgenstein, argues that something analogous to language could be used as part of new attempts to understand bodily systems, such as, say, how the immune system works. It’s true that biology already talks about things such as “signaling,” but in the past it has seemed to use these words and phrases anthropomorphically, and Chomsky is saying, “take the next step.”
Anyway, I’m just scratching the surface here, both in terms of how much of the interview I’m covering and how much analysis I’m putting forth. Go read the full thing yourself.
Let
me just add that he closes by saying something else I agree with, and that I
read Steve Toulmin saying 15 years or so ago: Scientists still need
philosophers of science challenging them, now perhaps more than ever.