Saturday, September 16, 2006

Turing's proposal in "Computing Machinery and Intelligence" is often framed as: to determine whether a particular machine thinks, have it play the Imitation Game, and if it passes you have sufficient evidence to believe that it does. But Turing is actually proposing that we substitute the question "Can machines think?" (Q1) with the question "Can digital computers pass the imitation game?" (Q2). Turing didn't say that Q2 is a means to answering Q1, but that Q2 is the question we ought to be asking. He even goes on to dismiss Q1 as meaningless.

What I find confusing is that the remainder of the article is devoted to refuting objections to Q1 or Q2, but never to refuting an objection to his proposal that we substitute Q2 for Q1. Turing also makes no attempt to justify his claim that Q1 is meaningless. And when I read the article, it always seems to me that Turing is taking the position of someone who feels Q2 is a sufficient means to answering Q1, not that Q1 ought to be dismissed...I'll proceed as if this might be true.

The "Argument from Consciousness" and some of the "Argument from Disabilities" are attacks on the Imitation Game as a means to determining whether a machine could think. The idea is that even if a machine could pass the test, we wouldn't know if it actually felt anything, or understood what it was saying, or that it even knew that it was participating in a game.

Turing responds that if we take this line of argument we'll be forced into a solipsist position. It's true that we wouldn't know if the machine "feels" or "understands," but we can't be sure of this about any human other than ourselves either. Observed behavior is the best we can do, and is almost always taken as sufficient (except by solipsists of course), which is why the Imitation Game ought to be considered sufficient as well.

Turing's response works well for phenomenological consciousness - we can't know if the machine can "feel" anything any better than we can know that another person can feel anything.

But when it comes to knowing whether a machine is capable of attributing meaning to the symbols it manipulates (a type of "understanding") we can do a lot better, since we have direct knowledge of the methods the machine uses.

Let's take an algorithm for addition that could be implemented by any discrete state machine. It works as follows.

Given the following:

111+11

The number of '1's we see will represent a number, so that the three ones on the left of the '+' represent the number 3, and the two ones on the right represent the number 2.

Replace the '+' with a '1':

111111

and then remove the last '1':

11111

Which we can take to mean the number 5.

Try it again for another sequence ( 11+11 ---> 1111 )...

This will work for all numbers represented this way, and can be programmed into any discrete state machine.
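The algorithm above can be sketched in a few lines of Python (my choice of language for illustration; any discrete state machine could implement the same steps). Note that nothing in the code interprets the symbols as numbers: the program just replaces one character and drops another.

```python
def unary_add(tape: str) -> str:
    """Add two unary numbers, e.g. '111+11' -> '11111'.

    The machine manipulates '1' and '+' as uninterpreted symbols.
    """
    tape = tape.replace('+', '1')  # step 1: replace the '+' with a '1'
    return tape[:-1]               # step 2: remove the last '1'

print(unary_add('111+11'))  # 3 + 2 -> '11111' (five 1s)
print(unary_add('11+11'))   # 2 + 2 -> '1111'
```

The function name and the Python framing are mine, not Turing's; the point is only that the procedure is purely syntactic, which is what the next paragraph turns on.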

The noteworthy thing about the algorithm is that the machine implementing it doesn't need to know what the characters represent. We don't have sufficient reason for believing that a machine that implements this algorithm understands what addition is, what a number is, or what it is doing.

If we programmed the machine playing the Imitation Game in a manner similar to the algorithm above, except that the goal is not to add two numbers but to pass the test, we wouldn't have sufficient reason to believe it was capable of understanding what it was saying.

Turing does address this. He says, in response to a general objection that a machine is incapable of doing certain things, that, "...if one maintains that a machine can do these things, and describes the method that the machine could use, one will not make much of an impression. It is thought that the method (whatever it may be, for it must just be mechanical) is really rather base."

But I don't think it is an adequate response. My example doesn't demonstrate that the machine doing addition is using a base method - just that there aren't sufficient grounds to believe that it understands what it is doing.

((I need to give this more thought, since the technique for addition humans are often taught to use in elementary school doesn't require an understanding of the numerals being manipulated. There are times we don't use any technique - like when asked to answer "what is two plus two?" But perhaps what we're using is a simple mapping between "two plus two" and "four", which also does nothing to demonstrate our understanding. So maybe we can't do any better than the machine following the algorithm above)).

I'll continue with Turing's paper in another post...

4 Comments:

Blogger Jonathan said...

I think the biggest problem is that whenever anyone asks the question, "Can machines/computers/robots be built to think/understand/have consciousness like humans?" the answer always depends on how you define think/understand/have consciousness. Turing seems to be avoiding the question for that reason, insinuating that no precise definition is possible or even necessary; instead of chasing an imaginary unicorn, we should set a realistic goal and try to achieve that first.

If Turing faltered in his reasoning, it was only because he was a man of his time--behaviorism was still the driving force in psychology, and (as I found incredibly shocking) he considered ESP a serious objection. The computer scientists who followed Turing in investigating artificial intelligence by setting certain goals (i.e., making a computer play chess or checkers, or building expert systems) seem to be part of the movement that Haugeland identified as GOFAI.

Perhaps I'm oversimplifying, but it seems to me that the turning point occurred when cognitive psychology took its foothold and philosophers were able to apply it to wrestling with the questions that Turing intentionally avoided.

3:04 PM  
Blogger Phillip Dreizen said...

"Turing seems to be avoiding the question...insinuating that no precise definition is possible or even necessary"

Turing isn't avoiding the question; he insists that the question being asked is meaningless.

His response to the "Argument from Consciousness" demonstrates that he was well aware of issues in defining "consciousness," "understanding," and "meaning." His response to the argument doesn't require precise definitions of those terms, and even rests on the fact that they aren't well defined. Definitions of these terms often require references to personal experience that we can't confirm anyone has but ourselves, but we go on assuming that everyone shares in these experiences anyway. If we demand that a machine prove it shares in these experiences, we'd have to make the same demand of other humans - and no human can successfully prove it. So to make the demand forces us into solipsism.

"If Turing faltered in his reasoning, it was only because he was a man of his time--behaviorism was still the driving force in psychology"

Turing's views should not be dismissed because they might have been influenced by behaviorism. His response to the argument from consciousness is a strong one and needs to be addressed, not dismissed. Besides which, Turing was probably not a behaviorist at all, but a functionalist.

His views on ESP do nothing to invalidate his other arguments.

And I'd like to know why you think "Turing faltered in his reasoning".

1:12 PM  
Blogger Jonathan said...

Turing's views should not be dismissed because they might have been influenced by behaviorism. His response to the argument from consciousness is a strong one and needs to be addressed, not dismissed. Besides which, Turing was probably not a behaviorist at all, but a functionalist.

Finding fault in a person's argument, I do not believe, is the same thing as dismissing their argument. Perhaps one might be inclined to characterize my response as "dismissive" since I didn't directly address any of his arguments, which could foreseeably lead one to believe that I had possibly dismissed him entirely.

Furthermore, I think that to dismiss the question "Can machines think?" (as well as "Can other humans think?") is a very behaviorist stance to take.

His views on ESP do nothing to invalidate his other arguments.

I never made any such claim. I expressed shock that he would cite ESP at all, especially considering his brilliance in all other areas of the paper. But then those were the times...

And I'd like to know why you think "Turing faltered in his reasoning".

You're right in questioning me on this point, as I should have been more explicit in making it.

As I said, the definition of thinking is problematic to specify, and Turing is in agreement with me on this (in fact I’m sure it’s that I’m in agreement with him, rather than vice versa).

I PROPOSE to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the terms 'machine' and 'think'. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words 'machine' and 'think' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, 'Can machines think?' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.



The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

I find fault in his logic on two fronts. First, I really do believe, as difficult as it is to accomplish, that there is a specific definition of thinking that we must and will produce. I obviously can’t offer perfect evidence to this, for I think if I could, I would have in my hands exactly what I say is difficult to find. But it is a belief I maintain, because in order to construct a building, you must first have blueprints. (At least, that is, if you want your building to stand upright and thus be worthy of consideration as a building.)

Second, he believed that by engaging in the production of a machine that could achieve a certain objective, we would learn during that process what it means for a machine to think. Perhaps there was no way for him to have known that that wasn’t true, as he wasn’t alive to see AI flounder around for so many years while computer scientists and engineers took stabs in the dark towards the bar he set. The lesson they learned along the way was that you need to pay attention to what thinking truly is and how it works before you can go off and start constructing a machine that does something that a thinking human can do with ease.

So: (1) finding a formal definition of thinking is not impossible; and (2) having a formal definition is necessary to answering not only "Can machines think?" but also "Can a machine pass the imitation game?"

3:33 AM  
Blogger Phillip Dreizen said...

Furthermore, I think that to dismiss the question "Can machines think?" (as well as "Can other humans think?") is a very behaviorist stance to take.

Turing never rejects the idea of internal mental states in humans. Furthermore, his use of the term "human computer" suggests that he believed humans have internal states - since, as we know, every other sort of computer has internal states.


Finding fault in a person's argument, I do not believe, is the same thing as dismissing their argument. Perhaps one might be inclined to characterize my response as "dismissive" since I didn't directly address any of his arguments, which could forseeably lead one to believe that I had possibly dismissed him entirely.


I characterized your response as dismissive because the reason you gave for Turing's faulty reasoning is that "he was a man of his time" - a time of behaviorists. In the same argument you add that he "considered ESP a serious objection," as a reason for considering Turing's reasoning faulty.

1:55 AM  