Sunday, September 24, 2006

Daniel Dennett’s paper, “True Believers: The Intentional Strategy and Why it Works,” is an inquiry into how humans are able to predict the behavior of objects and other beings.

The physical stance is the most rigorous and robust strategy; it is more or less deduction from the physical properties of an object. This strategy works well for simple objects in our world, such as bowling balls, falling pianos, and springboards, but it does not work well for predicting the behavior of more complex objects such as frogs or butterflies, much less human beings.

The design stance is a strategy taken by one who supposes that an object is designed to behave in a certain way. This works well for all sorts of equipment and electronic devices, since we can infer their behavior from the designer’s intent.

The intentional stance, which is the focus of Dennett’s paper, is taken by one who assumes that the object, acting in its own self-interest, has beliefs and desires that it will follow through on. This works so well because, being rational agents ourselves, once we attribute those beliefs and desires to something, we are somehow able to simulate its behavior based on our own.

I am convinced that the intentional stance is exactly what happens, as we often, especially during childhood, attribute beliefs and desires to anything and everything we see in the world. I might, if I so chose, attribute beliefs and desires to my faucet; it leaks when it’s sad and gets too hot when it’s angry. Of course that’s ridiculous, but I can do it all the same, and I can certainly imagine it turning its spout up at me and saying, “Why not? I have feelings, too.” Perhaps in some alternate, Roger Rabbit universe, it might spit in my eye if I think anything less of it. This certainly gives me “reason” to treat it with respect. This kind of reasoning, as I remember it, is very much alive in childhood, when we do not yet have the capability to use the physical stance.

I think that this sort of reasoning might even be the cause of many, if not most, religious beliefs; people attribute beliefs and desires to nature, and when they predict its behavior they take those into account. Why upset nature, when she might lash back at me? I should follow God’s wishes in order not to anger him. One might be inclined to turn to the Gaia “hypothesis” because it offers an easy way of predicting the Earth’s behavior.

I disagree with Dennett on only two points, and they are perhaps small. The first is that I would not consider a thermostat to be an intentional system. I’m not sure what he means when he says that thermostats have representations of their environments; I cannot imagine a thermostat, as it is, having any sort of representation. It doesn’t even hold information about its environment in the first place. There might be some rudimentary programming that can change its behavior (such as staying a little warmer in the morning), but there is no representation at all. It doesn’t “measure” anything about its environment, and thus there is nothing to represent. It reacts to a change in the atmosphere mechanically, without storing any information about that change. A vial of mercury inside it reacts, directly causing a switch to be flipped, which sends a signal to the boiler. What representation could possibly be going on there? But that’s a trivial point, anyway.
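To make the picture I have in mind concrete, here is a toy model in code (a hypothetical sketch of the behavior I described, not a claim about any real thermostat’s workings): the mercury switch closes or opens as a direct mechanical effect of the temperature, and the rest of the device simply relays that state, storing nothing.

    # A toy model of a purely reactive thermostat: no variable ever stores
    # a temperature reading or any record of past changes.
    def thermostat_step(mercury_switch_closed):
        # The switch's position is a direct mechanical effect of the room's temperature.
        return "signal boiler on" if mercury_switch_closed else "signal boiler off"

Nothing here measures or records the temperature; the device only passes along the switch’s current position.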

Second, I see the “language of thought” as a strong possibility, but I would add a clause. Dennett argues for its existence because he sees no alternative form of representation in our minds: we have mentalese to represent reality symbolically, and so we use symbols in some sort of logic natural to our minds.

I find no fault with the idea of mentalese and admit that it could be true. But I do disagree that it is the only possible representation of the world that we carry; it seems fairly self-evident to me that we also have a robust graphical representation of the world. We are able to rotate three-dimensional objects in our minds, which involves no symbolic manipulation. I think it is from this graphical representation that we extract beliefs and desires, which may themselves be represented symbolically and manipulated according to our innate rules of logic.

Saturday, September 16, 2006

Turing's proposal in "Computing Machinery and Intelligence" is often framed as: to determine whether a particular machine thinks, have it play the Imitation Game, and if it passes you have sufficient evidence to believe that it does. But Turing is actually proposing that we replace the question "Can machines think?" (Q1) with the question "Can digital computers pass the imitation game?" (Q2). Turing didn't say that Q2 is a means to answering Q1, but that Q2 is the question we ought to be asking. He even goes on to dismiss Q1 as meaningless.

What I find confusing is that the remainder of the article is devoted to refuting objections to Q1 or Q2, but never to refuting an objection to his proposal that we substitute Q2 for Q1. Turing also makes no attempt to justify his claim that Q1 is meaningless. And when I read the article, it always seems to me that Turing is taking the position of someone who feels Q2 is a sufficient means to answering Q1, not that Q1 ought to be dismissed... I'll proceed as if this reading were correct.

The "Argument from Conciousness" and some of "Argument from Disabilities" are attacks on the Imitation Game as a means to determining if a machine could think. The idea is that even if a machine could pass the test, we wouldn't know if it actually felt anything, or understood what it was saying, or that it even knew that it was participating in a game.

Turing responds that if we take this line of argument we'll be forced into a solipsist position. It's true that we wouldn't know if the machine "feels" or "understands," but we can't be sure of this about any human other than ourselves either. Observed behavior is the best we can do, and is almost always taken as sufficient (except by solipsists of course), which is why the Imitation Game ought to be considered sufficient as well.

Turing's response works well for phenomenological consciousness - we can't know if the machine can "feel" anything any better than we can know that another person can feel anything.

But when it comes to knowing whether a machine is capable of attributing meaning to the symbols it manipulates (a type of "understanding") we can do a lot better, since we have direct knowledge of the methods the machine uses.

Let's take an algorithm for addition that could be implemented by any discrete state machine. It works as follows.

Given the following:

111+11

The number of '1's we see represents a number, so the three ones to the left of the '+' represent the number 3, and the two ones to the right represent the number 2.

Replace the '+' with a '1':

111111

and then remove the last '1':

11111

Which we can take to mean the number 5.

Try it again for another sequence ( 11+11 ---> 1111 )...

This will work for all numbers represented this way, and can be programmed into any discrete state machine.
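Just to make this concrete, here is a rough sketch of the procedure in Python (the function name and test values are mine, purely for illustration):

    def unary_add(expression):
        # e.g. "111+11": the '1's on either side of the '+' stand for 3 and 2.
        # Step 1: replace the '+' with a '1'  -> "111111"
        # Step 2: drop the last '1'           -> "11111", i.e. 5
        return expression.replace('+', '1')[:-1]

    print(unary_add('111+11'))  # prints 11111
    print(unary_add('11+11'))   # prints 1111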

The noteworthy thing about the algorithm is that the machine implementing it doesn't need to know what the characters represent. We don't have sufficient reason for believing that a machine that implements this algorithm understands what addition is, what a number is, or what it is doing.

If we programmed the machine playing the Imitation Game in a manner similar to the algorithm above, except that the goal is not to add two numbers but to pass the test, we wouldn't have sufficient reason to believe it was capable of understanding what it was saying.

Turing does address this. He says, in response to a general objection that a machine is incapable of doing certain things, that, "...if one maintains that a machine can do these things, and describes the method that the machine could use, one will not make much of an impression. It is thought that the method (whatever it may be, for it must just be mechanical) is really rather base."

But I don't think it is an adequate response. My example doesn't demonstrate that the machine doing addition is using a base method - just that there aren't sufficient grounds to believe that it understands what it is doing.

((I need to give this more thought, since the technique for addition humans are often taught in elementary school doesn't require an understanding of the numerals being manipulated. There are times we don't use any technique at all - like when asked "what is two plus two?" But perhaps what we're using then is a simple mapping between "two plus two" and "four", which also does nothing to demonstrate our understanding. So maybe we can't do any better than the machine following the algorithm above.))
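(The "simple mapping" I have in mind would amount to no more than a lookup table, something like this hypothetical snippet:

    # A memorized mapping from questions to answers; retrieving "four"
    # involves no arithmetic at all.
    answers = {"two plus two": "four", "three plus three": "six"}
    print(answers["two plus two"])  # prints: four

which produces the right answer without anything that looks like understanding.)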

I'll continue with Turing's paper in another post...

Wednesday, September 06, 2006

Haugeland’s Essay and Understanding Animals

I don’t think that there’s much to say about most of Haugeland’s essay, as it is merely an introduction and overview of the rest of the book’s articles. As for the nature and meaning of “mind design,” I’ve so far found it to be a study that uses philosophy to mediate between psychology and computer science. Computer science was born of philosophy and math, still tethered by the umbilical cord of logic; and psychology sprang forth from the same social and ethical philosophies that produced sociology and anthropology. It seems to me that one needs to be a philosopher in order to assimilate abstract concepts from psychology and mold them into concrete forms for computer science.

There is one section that’s worth mentioning, from the end of Haugeland’s piece. He writes,

“It seems to me that…only people ever understand anything—no animals and no artifacts (yet). It follows that…no animal or machine genuinely believes or desires anything either—How could it believe something it doesn’t understand?—though, obviously, in some other, weaker sense, animals (at least) have plenty of beliefs and desires.” (p. 27)

First of all, this seems like a completely disingenuous argument: he first makes a very strong claim about the intelligence of animals, and then backs off by supplying a “maybe it is so, but in a weaker sense.” How can we understand what he means when he gives such vague and slippery arguments? Furthermore, the bar he sets for understanding seems to be met by plenty of animals. In training chimpanzees to use sign language, the chimps must have those proto-concepts, and they do indeed apply them correctly, as evinced by the fact that Washoe not only used signs correctly and in appropriate contexts, but also taught her own children to sign. That “Not all psychologists agree that Washoe did acquire language” doesn’t seem detrimental to the claim that she had understanding, as “she had semanticity (understanding)” (see AS Psychology). If she was able to learn only 800 words, or to achieve only the grammar of a third-grader, that does not mean she had no understanding or intelligence, only that she had less of it than humans do.

Furthermore, primates are not the only animals that can learn language. Parrots are able to learn words, and will do so even without active teaching by an owner or trainer. One parrot, trained to know the words “apple” and “banana,” became familiar with pears as well, but was not taught any word for them. On one occasion, the parrot pointed a talon at a pear sitting next to its trainer and cawed “banapple!” The trainer looked at the pear, knowing what the parrot meant, and proceeded to try to teach it the appropriate word. Yet the parrot insisted on using its own word, having apparently coined it based on the pear’s qualities, which are intermediate between those of a banana and an apple. (In fact, the word seems appropriate, considering a pear’s softer texture and less tart flavor compared to an apple.) If that is not understanding, then what is?

[PS: The parrot example comes from Scientific American Mind, which is not freely available, and I was unable to find the sources that the article uses.]

Tuesday, September 05, 2006

"Mind design is the endeavor to understand mind (thinking, intellect) in terms of its design (how it is built, how it works). Unlike traditional empirical psychology, it is more oriented toward the 'how' than the 'what.' An experiment in mind design is more likely to be an attempt to build something and make it work--as in artificial intelligence--than to observe or analyze what already exists. Mind design is psychology by reverse engineering." --John Haugeland

This blog will be the posting site of responses to a collection of journal articles on the topic of mind design. The journal articles all come from a book entitled "Mind Design II," edited by John Haugeland, and with various contributors from the fields of philosophy, computer science, and psychology.

Based upon bi-weekly meetings with Professor Chopra, Jonathan and Phillip will supply one response to each of the two articles read for every meeting. This means that the two of us will write these two responses over the course of each two-week interval, plus additional comments on each other's posts; Prof. Chopra will comment as well. The responses themselves will also include references to outside material that critiques the articles.

Additionally, the readings and postings will culminate in a 20-page essay on an appropriate topic.

Mind Design II Index:
  1. "What is Mind Design?" John Haugeland.
  2. "Computing Machinery and Intelligence," A.M. Turing.
  3. "True Believers: The Intentional Strategy and Why it Works," Daniel Dennett.
  4. "Computer Science as Emprical Inquiry: Symbols and Search," Allen Newell and Herbert A. Simon.
  5. "A Framework for Representing Knowledge," Marvin Minsky.
  6. "From Micro-Worlds to Knowledge Representation: AI at an Impasse," Hubert L. Dreyfus.
  7. "Minds, Brains, and Programs," John R. Searle.
  8. "Architecture of Mind: A Connectionist Approach," David E. Rumelhart.
  9. "Connectionist Modeling: Neural Computation/Mental Connections," Paul Smolensky.
  10. "On the Nature of Theories: A Neurocomputational Persepctive," Paul M. Churchland.
  11. "Connectionism and Cognition," Jay F. Rosenberg.
  12. "Connectionism and Cognitive Architecture: A Critical Analysis," Jerry A. Fodor and Zenon W. Pylyshyn.
  13. "Connectionism, Eliminativism, and the Future of Folk Psychology," William Ramsey, Steven Stitch, and Joseph Garon.
  14. "The Presence of a Symbol," Andy Clark.
  15. "Intelligence without Representation," Rodney A. Brooks.
  16. "Dynamics and Cognition," Timothy van Gelder.
Schedule of Meetings:
  • September 18: Chapters 1 & 2
  • September 25: Chapters 3 & 4
  • October 16: Chapters 5 & 6
  • October 30: Chapters ...
  • November 13: Chapters ...
  • November 27: Chapters ...
  • December 11: Chapters ...
Two of the sixteen readings will be dropped (making a total of fourteen). This post will be updated when a decision is made on what chapters to drop.