Wednesday, October 25, 2006

Micro-Intelligence

I think that a popular bit of self-delusion that infects those who are interested in building a robot with a capacity for human intelligence lies in how they conceive of what the robot will do. Perhaps this is mostly a complaint against what Haugeland termed GOFAI, but it seems to me that, following Turing’s example of giving the machine a specific goal to accomplish, many experimenters ask themselves, “what sort of thing do humans do that we characterize as an expression of intelligence?” and then set out to build a machine that does just that. For example: humans play chess and checkers, and that is a sign of intelligence; humans recognize objects and can form opinions on the properties of those objects, so that is a sign of intelligence; and even mundane tasks, such as navigating the hallways of a building to deliver a piping hot cup of coffee, are deemed activities for the intelligent, so a machine that can do them must in some way be intelligent.

The problem with building a machine that can “recognize” objects, hold “conversations”, or play chess or checkers is that that is the totality of its behavioral range; its whole environment, meaning the objects it can “sense” and the dimensions across which it can exert any influence, is contained wholly within a small scene of blocks, a small vocabulary (with little to no true understanding, to boot), or an eight-by-eight board with at most six different types of pieces. There is some sense that once one builds enough micro-worlds in which machines are able to operate—namely, one micro-world for manipulating a scene of blocks (SHRDLU), one for carrying on a conversation (ELIZA), and another for playing with pieces on a board (Samuel’s checkers program)—one can plug them all into each other and have a machine that displays a wider range of behaviors. If the complaint is that its set of primitives is too small, making it unable to assimilate information about complex shapes, complex sentences, or playing pieces never before encountered, then the solution is simply to program in more primitives—curved lines, templates for complex sentence structures, or a battery of more pieces.

This is the point on which I agree with Hubert Dreyfus most heartily, but also the point after which we depart and go our separate ways. With regard to those types of projects, he argues that a machine built to carry out one set task is not remotely close to attaining any sort of human intelligence, for it is restricted to that one task alone and no other. Furthermore, the micro-worlds are simplified versions of environments and activities that we humans encounter and execute, but they are so stripped down as to be meaningless in the context of the real world. Dreyfus notes, “The nongeneralizable character of the programs so far discussed makes them engineering feats, not steps toward generally intelligent systems, and they are, therefore, not at all promising as contributions to psychology.”

Furthermore, plugging micro-worlds into each other, and then manually adding in more primitives, would be equally pointless. The machine in question would still have capabilities bounded strictly by its programming, meaning it does not grow or learn in the way a human does. In order to have the type of intelligence that we do, it must be able to form new primitives for itself, with which to compose a world growing ever more complex.
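As a toy illustration of this bounded-by-programming point (the shape table and recognizer below are my own invention, not any historical system), consider a recognizer whose entire perceptual repertoire is a hand-coded table of primitives: anything outside the table simply does not exist for it until a programmer adds it.

```python
# Toy sketch: a recognizer whose whole "world" is a hand-coded primitive table.
# Hypothetical names; not modeled on any actual AI program.

PRIMITIVES = {
    "triangle": 3,
    "square": 4,
    "pentagon": 5,
}

def recognize(edge_count):
    """Classify a shape by edge count, but only against the built-in table."""
    for name, edges in PRIMITIVES.items():
        if edges == edge_count:
            return name
    return "unknown"  # everything outside the table is invisible to the system

print(recognize(4))  # "square"
print(recognize(6))  # "unknown" -- until a programmer hand-adds "hexagon": 6
```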

At this point, Dreyfus begins his critique of Minsky’s frames. Although I agreed with him regarding the micro-worlds, I strongly disagree with his general assessment of Minsky’s theory. It seems quite apparent to me how frames can lead to the formation of new primitives: our varying types of sensory inputs are stored in collections of nodes, which over time come to correlate with each other, whereupon they form frames consisting of bundles of different types of information. For example, a baby encountering an apple many times over will come to correlate the redness, the smoothness, the hardness, and all the other properties that can be sensed. With that bundle of information, the baby will then come to recognize other apples by virtue of the similarities of their properties. Further properties of apples—such as behavior when falling, or rolling, or colliding with other objects—are assimilated with experience.
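To make that concrete, here is a minimal sketch in the same spirit (my own illustration, not Minsky’s actual formalism; the `Frame` class, the frequency threshold, and the apple encounters are all assumptions): repeated encounters deposit properties into a bundle, the properties that keep co-occurring become the frame’s typical content, and new objects are recognized by their overlap with that bundle.

```python
# A toy "frame": a bundle of correlated properties built up from repeated
# observations and then used to recognize new instances by similarity.

class Frame:
    def __init__(self, name):
        self.name = name
        self.counts = {}       # property -> number of encounters featuring it
        self.observations = 0  # total encounters assimilated so far

    def assimilate(self, properties):
        """Fold one encounter (a set of sensed properties) into the bundle."""
        self.observations += 1
        for p in properties:
            self.counts[p] = self.counts.get(p, 0) + 1

    def typical(self, min_freq=0.5):
        """Properties that have come to correlate with this frame."""
        return {p for p, c in self.counts.items()
                if c / self.observations >= min_freq}

    def matches(self, properties, threshold=0.6):
        """Recognize a new object by its overlap with the typical bundle."""
        typ = self.typical()
        if not typ:
            return False
        return len(typ & properties) / len(typ) >= threshold


apple = Frame("apple")
for encounter in [{"red", "smooth", "hard", "sweet"},
                  {"red", "smooth", "hard"},
                  {"green", "smooth", "hard", "sweet"}]:
    apple.assimilate(encounter)

print(apple.matches({"red", "smooth", "hard"}))  # True: enough overlap
print(apple.matches({"furry", "loud"}))          # False: nothing in common
```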

In his concluding main thesis, Dreyfus claims that the idea of knowledge representation is not only unnecessary but also impossible to realize in any artificial system. Because an explanation of how we do something always traces back to what we are—which Dreyfus believes is something we can never know—we will never be fully equipped with the conditional rules for founding an intelligent system. “In explaining our actions we must always sooner or later fall back on our everyday practices and simply say ‘this is what we do’ or ‘that’s what it’s like to be a human being.’”

Furthermore, rather than representations, Dreyfus believes that intelligent behavior can be explained by alternative accounts: one, “developing patterns of responses,” with recognition being gradually acquired through training; and two, allowing for “nonformal (concrete) representations” (e.g., images) that are used in the exploration of “what I am, not what I know.”

Frankly, I couldn’t disagree with Dreyfus more. Given that we collect information with our sensory apparatus and then store it for later use, it must necessarily be represented in some manner at all times. How else could we manipulate it? Dreyfus might say that we should appeal to those concrete representations that don’t require any explanation of the rules for symbol manipulation, since there are no symbols in concrete representations. But how does this explain anything? What have we learned about ourselves from this? How does it allow us to explain our behavior and help predict future behavior?

Dreyfus’ “patterns of responses” seems to me to be a behaviorist denial of any internal life. For him, there seems to be no concept of swimming that we hold, only our acquired responses to being in the water. All we do in life is respond to stimuli.

Dreyfus seems wrapped up in the idea that to have intelligence is to have human intelligence, and that, since no computer can be human, no computer can be intelligent. I would argue that there must be an entire range of ways to be intelligent, perhaps even some that don't use representation, since our intelligence as a species did not arise in a day or in ten thousand years, but evolved over millions of years (and how many hundreds of millions of years ago did the most primitive nervous systems emerge?). Clearly, then, some structures evolved first and others rely on those original formations; I take this as evidence that (a) cognitive science should concern itself more with the evolution and development of intelligence and that (b) more experimental research should be done that does not try to go gung-ho and recreate a feature of human intelligence, but rather attempts to recreate the intelligence of lesser species.
