Sunday, November 26, 2006

Intelligence without representation I

Brooks seems to think that behavior-based AI can produce human-level intelligence without ever resorting to internal representations at all.

His approach is inspired by evolution, which he says spent most of its time perfecting perception, "acting and reacting." The things AI has been primarily concerned with - reasoning, problem solving, language - are very recent developments. So, he says, we should focus on acting and reacting before we focus on problem solving.

I think he's right. He's convinced me that we ought to be focusing our attention on acting and reacting without resorting to internal representations of the world. Much of what we consider intelligent behavior can be produced this way.

But I'm not convinced that language, problem solving, and reasoning can be produced by Brooks's approach. When we communicate with one another or reflect on our own internal states, we need symbolic representations. Brooks says that the fact that we use representations when communicating or introspecting is not sufficient grounds to conclude that we rely on any internal representations to produce behavior.

He might be right. But it's worth pointing out that Brooks's epiphany about abandoning representation came when he realized that the calculations needed just to move a robotic arm were too complicated. A reactive system with a lot of sensory inputs was able to solve the problem - without the complicated calculations.[*] But what about language? Language is, by its nature, representational. I suspect that getting a system to use language would be simpler if the system had internal representations than if it didn't.
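
To make the contrast concrete, here is a minimal sketch of the kind of reactive control Brooks has in mind - a toy, subsumption-style loop in Python, with entirely made-up sensor readings and motor commands. Each behavior maps the current sensor input directly to an action; higher-priority layers simply override lower ones, and no model of the world is stored anywhere.

# A toy, subsumption-style reactive controller. The sensor readings and
# motor commands are hypothetical; the point is that each behavior maps
# the current sensor input directly to an action, with no stored world model.

def avoid(sensors):
    """Higher-priority layer: back away from anything too close."""
    if sensors["front_distance"] < 0.2:
        return {"left_wheel": -1.0, "right_wheel": -0.5}  # reverse and turn
    return None  # no opinion; let a lower layer act

def wander(sensors):
    """Lower-priority layer: just keep moving forward."""
    return {"left_wheel": 1.0, "right_wheel": 1.0}

LAYERS = [avoid, wander]  # earlier layers subsume later ones

def control_step(sensors):
    """Pick the first layer that produces a command for this instant's input."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

# One pass of the loop per call; nothing is remembered between calls.
print(control_step({"front_distance": 0.1}))  # obstacle close: reverse and turn
print(control_step({"front_distance": 2.0}))  # clear path: drive forward

Nothing in the sketch remembers or models the arm, the obstacle, or the room; whatever competence it shows falls out of the coupling between sensing and acting.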
