Monday, October 30, 2006

Searle vs The Mind's I

The Myth of The Computer

The Myth of the Computer: An Exchange

The exchange between Searle and Dennett and Hofstadter in the New York Review of Books gets personal! I believe I found that quote about Searle Professor Chopra was searching for, though it was Dennett, not Hofstadter, who wrote it.

The quote is a response to Searle saying, in regards to the Chinese Room argument:
The mental gymnastics that partisans of strong AI have performed in their attempts to refute this rather simple argument are truly extraordinary.

Dennett responds:
Here we have the spectacle of an eminent philosopher going around the country trotting out a "rather simple argument" and then marveling at the obtuseness of his audiences, who keep trying to show him what's wrong with it. He apparently cannot bring himself to contemplate the possibility that he might be missing a point or two, or underestimating the opposition. As he notes in his review, no less than twenty-seven rather eminent people responded to his article when it first appeared in Behavioral and Brain Sciences, but since he repeats its claims almost verbatim in the review, it seems that the only lesson he has learned from the response was that there are several dozen fools in the world.

In Searle's review of The Mind's I, he claims that Hofstadter and Dennett fabricate a quote which "runs dead opposite" to what he was trying to convey in his paper on the Chinese Room argument.

The quote? Instead of quoting Searle as having said "bits of paper," they misquote him as saying "a few slips of paper." I have to agree with Dennett that the misquote hardly constitutes a fabrication conveying the opposite of Searle's view. In his response, Dennett apologizes for the misquote and promises that the mistake will be corrected in future editions of the book. I can verify this: my copy, which dates from November 1982, quotes Searle correctly.


Blogger Jonathan said...

I think that, although his initial argument poses quite a dilemma for cognitive scientists, Searle really goes off the deep end in this article. Consider this quote:

Let's program our favorite PDP-10 computer with the formal program that simulates thirst. We can even program it to print out at the end "Boy, am I thirsty!" or "Won't someone please give me a drink?" etc. Now would anyone suppose that we thereby have even the slightest reason to suppose that the computer is literally thirsty?

I find it shocking that he views the entire field of AI as having such poor methodological sophistication. Does he really believe that anyone would claim a machine is thirsty simply because it was programmed to output "I am thirsty"? The point is that we try to make a program that makes such a statement of its own accord. The real problems are (a) actually programming such complex behavior and (b) determining when it is really operating "of its own accord."

Another quote:

So let us imagine our thirst-simulating program running on a computer made entirely of old beer cans, millions (or billions) of old beer cans that are rigged up to levers and powered by windmills. We can imagine that the program simulates the neuron firings at the synapses by having beer cans bang into each other, thus achieving a strict correspondence between neuron firings and beer-can bangings.

Searle is really playing on the limits of the human imagination at this point. The problem I find with this statement is that while we know, technically speaking, that we can implement a Turing machine using beer cans, string, and levers, the construction is not actually so easily imagined. He glosses over the fine details of how it would be accomplished, letting our minds form a poor conception of what is actually going on. Then, when he asks--how can we say this is thirsty?--we are left with little choice but to admit that it can't be.
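The technical claim here is worth making concrete: a Turing machine is defined entirely by its transition table, so any substrate that realizes those state transitions (silicon, neurons, or beer cans banging into each other) implements the same computation. Here is a minimal sketch of a Turing-machine simulator in Python (the machine names and the unary-increment example are my own, purely illustrative):

```python
# Minimal Turing machine simulator. The "machine" is nothing but a
# transition table; the physical substrate that realizes the table is
# irrelevant to the computation it performs.

def run_tm(table, tape, state="start", halt="halt", blank="_"):
    """table maps (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example: append a '1' to a unary string (i.e., increment a number).
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write a 1 at the first blank
}

print(run_tm(increment, "111"))  # -> 1111
```

The point of the sketch is that nothing in `run_tm` cares how a state transition is physically realized; replacing each table lookup with a beer-can collision would, in principle, compute the same function.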

Furthermore, I doubt that any entity composed of beer cans and strings, while we may be willing to attribute certain desires to it, could be thirsty in the same way we become thirsty. It's as if an alien being the size of a pinhead who consumed quarks for breakfast claimed that humans don't have any understanding or semantic content because they couldn't possibly be hungry for quarks.

At any rate, I'm really shocked at how disingenuous some of Searle's arguments become.

10:23 AM  
