Sunday, November 26, 2006

Intelligence without representation I

Brooks seems to think that behavior-based AI will be able to produce intelligence on the level of human intelligence without ever resorting to internal representations at all.

His approach is inspired by evolution, which he says spent most of its time perfecting perception, "acting and reacting." The things that AI had been primarily concerned with - reasoning, problem solving, language - are very recent developments. So, he says, we should be focusing on acting and reacting before focusing on problem solving.

I think he's right. He's convinced me that we ought to be focusing our attention on acting and reacting without resorting to internal representations of the world. Many of the things we consider intelligent behavior can be produced this way.

But I'm not convinced that language, problem solving, and reasoning can be produced by Brooks's approach. When we communicate with one another or reflect upon our own internal states, we need symbolic representations. Brooks says that the fact that we use representations when communicating with one another or for introspection is not sufficient grounds to conclude that we use any internal representations to produce behavior.

He might be right. But it's worth pointing out that Brooks's epiphany about abandoning representation came when he realized that the calculations needed just to move a robotic arm were too complicated. A reactive system with a lot of sensory inputs was able to solve the problem - without the complicated calculations.[*] But what about language? Language is, by its nature, representational. I suspect that getting a system to use language would be simpler if the system had internal representations than if it didn't.
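To make the contrast concrete, here is a minimal reactive controller sketch of my own - not Brooks's actual subsumption architecture, just a toy in its spirit - where behavior is a direct mapping from current sensor readings to motor commands, with no stored model of the world anywhere in the system:

    # A toy reactive controller in the spirit of behavior-based AI.
    # There is no internal map or model of the world: each rule maps the
    # current sensor readings directly to a motor command, and
    # higher-priority rules (like obstacle avoidance) override lower ones.

    def reactive_step(sensors):
        """sensors: dict with 'left_dist', 'right_dist', and 'light' readings."""
        # Highest priority: avoid obstacles.
        if sensors['left_dist'] < 0.2:
            return 'turn_right'
        if sensors['right_dist'] < 0.2:
            return 'turn_left'
        # Next: move toward light.
        if sensors['light'] > 0.5:
            return 'forward'
        # Default: wander slowly.
        return 'forward_slow'

    # The robot's "behavior" over time is just this mapping applied over
    # and over to whatever the sensors currently report.
    print(reactive_step({'left_dist': 0.1, 'right_dist': 0.8, 'light': 0.9}))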

Sunday, November 19, 2006

Consilience

"Philosophers are beginning to like it—it's something for them to do. They've been sort of flopping around since the failure of positivism."
Edward O. Wilson, in an interview with Seed Magazine (November 2006).

The article from which this quote was grabbed is about Wilson's latest book, The Creation, and the rift between science and religion. The book itself is Wilson's attempt to draw the two worlds together in a truce for the sake of humanity's progress, and more specifically to direct attention toward saving the Earth rather than wasting breath on an argument that goes nowhere. This article is one of several spanning newspapers and magazines—including Wired's "The Church of the Non-Believers"—covering a wave of books published this fall by top scientists and philosophers on the subject of fundamentalist values overtaking scientific reason. The authors of these books include Richard Dawkins, Sam Harris, Daniel Dennett, and of course Edward O. Wilson.

The quote above is a reference to the central idea of his 1998 book, Consilience: The Unity of Knowledge, that all fields of knowledge and scientific inquiry are intrinsically linked by a set of fundamental rules implicit in each of them. His remark refers to the fact that, after decades of these fields dividing and splintering, there is a recent synthesis and convergence where certain fields jut against one another. As members of one field identify similar ideas found in others, they initiate an exchange with their newfound colleagues, which includes bridging gaps in each other's knowledge and linguistic blending.

But doing so is a tedious process. Although the semantic content of related fields often develops along parallel routes, occasionally crossing paths, their syntactical elements diverge from the outset, and researchers find themselves unable to communicate effectively with cousins hailing from neighboring fields. Wilson's comment hits squarely upon the emerging utility of philosophy in the modern world.

Philosophers have a new task: as catalysts of fusion, to combine the reactant disciplines of science and merge them together into novel, nascent branches of inquiry. Science becomes fragmented as scientists delve deeper into the disparate wells of knowledge, but the philosopher may quickly traverse the many beaten paths, picking up the shards along the way. To fit them together is to fill in the semantic gaps of each domain—and in doing so there occurs a reformulation of content in a new syntax adaptable to the various extant fields, thereby joining them in linguistic solidarity.

PS: For the text of a discussion between E.O. Wilson and Daniel Dennett, click here.

Wednesday, November 15, 2006

The Life and Times of William Sidis

Perhaps there is such a thing as being too intelligent.
"At age nine William attempted to enroll at Harvard, and though the entrance exams were not a challenge for the young intellect, he was turned down on the basis that he was too 'emotionally immature' for college life. As William waited for the Harvard admissions board to capitulate, he spent the intervening time at Tufts College correcting mistakes in mathematicians' books, perusing Einstein's theories for possible errors, mastering foreign languages, and diligently collecting streetcar transfer slips."

"...he died a reclusive, penniless office clerk."
The moral I draw from this story is that if Laplacean Martians exist, they wouldn't want to spend more than a few minutes with beings such as ourselves, who lack any perceptible intellect. And if one of them ever ended up stranded on Earth with us, it would probably die from the feeling of being crowded by stupidity.

Sunday, November 12, 2006

Giving Rise to Semantics

I want to start out by explaining what my idea is and is not. First of all, this is not a complete theory of mind; I think it lends itself to a possible theory, but by its lonesome self it is not something that should spur anyone to run out to a computer lab and start programming. As a matter of fact, as I’ll note in places, there are points where I want to gather empirical evidence.

Second, this is not meant to refute or bolster Searle's argument. What I am doing here is considering the conclusion of the Chinese Room Argument to be, It is not clear how syntax can give rise to semantics. Searle, of course, claims it is impossible, but I think it is more prudent to stop at saying that we simply don't know whether it is possible, and if it is, we don't know how.

Third, what I am addressing is the question, What else could it be? I am trying to offer a possible alternative to a Turing Machine. Now, two questions seem to arise immediately from my saying this: (a) Am I actually saying that the syntax of a Turing Machine can't give rise to semantics; and (b) Am I saying that my alternative is actually what gives rise to semantics? My answers: No, and no. I do not know whether TMs can or cannot give rise to semantics, and in fact I think it is difficult, if not impossible, to prove one way or the other with the knowledge we currently have. In other words, I claim temporary agnosticism. Also, the alternative I am proposing is by no means a definite solution, only what I believe to be a possible solution. I think it would be as difficult to prove the feasibility of my solution one way or the other as it would be for a TM.

If the thought crosses your mind that my argument will be for naught if it doesn’t offer a conclusive insight, then rest assured, as I believe there is a specific, non-arbitrary set of empirical evidence that is required to make my theory complete, some of which has already been gathered. To that effect, part of the conclusion of my idea here will be a plan for future study of various subject matters.

***

With that out of the way, here is my idea:

  1. As described by Minsky, a Society of Mind is necessary and sufficient for having/making a mind.
  2. The massively parallelized multi-agent system that comprises the mind cannot be emulated by a single TM.
  3. Following 2, the reply by Searle (Mooney’s Reply H) that Minsky’s Society (being a “connectionist architecture”) can be reduced to a single TM does not apply.

Now, clearly, I must justify 2. Obviously, a single TM can emulate multiple TMs in parallel; I previously brought up my example of an eight-core processor being emulated by a single-core processor to demonstrate my belief in this.

The reason, then, that a single TM cannot emulate a society of Minsky's agents is that these agents bring something with them that can at best be only simulated, not emulated. This is an important distinction: by emulation I mean just what I meant before—that implementing multiple TMs, or an architecturally different TM, on a single TM is wholly, utterly, and completely equivalent (albeit computationally slower); by simulation, however, I mean an approximate functional equivalence.

Notice that I did not apply "functional" to the definition of emulation; emulation is not merely functionally equivalent, but rather a one-for-one equivalence at the lowest level of operation (namely the manipulation of symbols). Simulation, then, is hindered by the fact that (1) it is not operationally equivalent (which by itself would not matter) and (2) it can only approximate the behaviors and solutions of that which is being simulated.

So what is it about Minsky's society that cannot be emulated by a TM? It is the environment in which the agents operate. As I envision it, the system comprising these agents is not one single unified structure, but numerous disparate structures that connect to and disconnect from each other in a particular medium of interaction. This medium, I maintain, should not be overlooked, and in fact cannot be, since the medium in which the agents persist plays an equally important role in determining the total state of the mind system.

There is a recursive dynamic generative relation* between an agent and its environment. In any interaction between an agent and its environment, the state of the environment will determine the state change made by the agent, and the state of the agent will likewise determine the state change of the environment. Thus, the agent's state at any time t, S_a(t), is determined by two factors: its previous state, S_a(t-1), and the previous state of the environment, S_e(t-1). And the same is true of the environment: its change in state is determined by its own previous state and the previous state of the agent.
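A minimal sketch of that mutual determination, written as a discrete-time update loop (the particular update rules below are arbitrary placeholders of my own, and, as the next paragraph argues, a discrete loop like this can only simulate the relation, not emulate it):

    # Toy illustration of the agent/environment recursion: at each step the
    # agent's next state depends on its own current state and the
    # environment's, and the environment's next state depends on its own
    # current state and the agent's. The update rules are placeholders
    # chosen only to make the mutual dependence visible.

    def agent_update(s_agent, s_env):
        return 0.9 * s_agent + 0.1 * s_env        # agent drifts toward the environment

    def env_update(s_env, s_agent):
        return s_env + 0.05 * (s_agent - s_env)   # environment is nudged by the agent

    s_a, s_e = 1.0, 0.0
    for t in range(5):
        # Both updates read the previous states, then both change at once.
        s_a, s_e = agent_update(s_a, s_e), env_update(s_e, s_a)
        print(t, round(s_a, 3), round(s_e, 3))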

Now, the important point, when I make the claim that this cannot be fully implemented on a finite state machine, is that the environment is continuous. The agents themselves may be discrete—meaning they have a finite number of states—but the environment in which they persist and operate is continuous—meaning its possible states form a continuum, not a finite set. The best that a discrete state machine can do is to simulate the environment.

This makes sense, conceptually. The agents are not in constant communication with one another; connections may be broken, lost, and then found again. Or signals may still be sent and received, but only through this dynamic and continuous medium of exchange.

Searle complains that it is not possible for syntax to give rise to semantics, as there is no clear moment in the process of manipulating symbols that could be identified as crossing a threshold into semanticity. Because any sort of discrete state machine can be implemented on another, even the most complex of machines can be reduced to a simple structure of the most outlandish composition, be it beer cans and string or Chinese symbols and a naïve English-speaking man.

So, since it is not clear how any system with a finite number of configurations of symbols can have semanticity, perhaps it is necessary for it to have an infinite number of configurations—but not of symbols. The environment in which these agents operate would preserve and contain non-symbolized information, emitted from the agents themselves and from the external world.

This brings us to an important point: a TM loses information when it assigns symbols. Such is the nature of a discrete machine operating in a continuous world.
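A toy illustration of that loss (this is just discretization in general, not a claim about any particular machine): once a continuous quantity has been mapped onto one of finitely many symbols, values that were distinct become indistinguishable, and the original cannot be recovered from the symbol alone.

    # Discretizing a continuous quantity into a finite alphabet of symbols
    # collapses distinct values onto the same symbol, so the original value
    # cannot be recovered afterwards.

    def to_symbol(x, levels=4):
        """Map a value in [0, 1) onto one of `levels` discrete symbols."""
        return min(int(x * levels), levels - 1)

    readings = [0.10, 0.24, 0.26, 0.74999, 0.75001]
    symbols = [to_symbol(r) for r in readings]
    print(symbols)  # [0, 0, 1, 2, 3] -- 0.10 and 0.24 are now identical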

Perhaps, then (and this is my argument in all its glory), in order for a system to have understanding it must preserve information that would otherwise be lost in a purely symbol manipulating machine. The environment in which Minsky’s agents operate would have the capacity to do so, as it is continuous and may be in an infinite number of states.

Now, clearly, I am lacking knowledge in the subject of neurophysiology. Searle holds that only brains cause minds, but this does not give us any clear way to abstract away the specific biochemical activities exhibited by the brain. In citing the environment as a structure in which total information may be preserved, I believe that a clear road to embark upon research can be shown.

Of course, as I wrote before, this is not even a partially complete theory, but merely an idea. I might be entirely wrong on this, as I might discover upon reading material on neurophysiology that the brain is a wholly unified structure with no "gaps" or "spaces" essential to its operation. However, from the little I do know, I am led to believe that the idea I have laid out is relevant, and that such an environment can contribute to the dynamics of the whole system in a way not otherwise possible.

*I owe this phrase to Mpodozis, Letelier, and Maturana

***

I just want to tag this on, here at the end, for purposes of cognitive mastication:

"The behavior of an agency is therefore determined both by its internal connection pattern and state, and by its external environment, as reflected by the input signals."

Link

Saturday, November 11, 2006

First Impressions of Connectionism (Rumelhart)

With this article we move away from asking what is theoretically possible (can machines think?). Instead we presume that we can design thinking machines, but wonder how to make the problem tractable.

Connectionism offers a methodology that seems to correspond well with the way human minds work. It moves away from the von Neumann model of computation to a model that Rumelhart calls "neurally inspired." Proponents of connectionism claim this model is conducive to the sorts of algorithms that will be needed to design intelligent machines.

The shift in architecture doesn't change what is theoretically possible. A von Neumann computer can emulate other architectures; any algorithm can be carried out by a universal Turing Machine. But the change in architecture does lead to a change in how we model cognition. We know that changing paradigms can have potent effects. When programmers shifted to the structured programming paradigm, they became capable of meeting the demands that sophisticated software systems made of them. Without the paradigm shift, it's probable that they would have been unable to program software at the level of complexity that is currently produced. Designing the complicated systems they do now would have been theoretically possible before the structured programming paradigm was adopted, but it wouldn't have been practically possible.

It's worth noting that AI had for a long time been dominated by the LISP programming language. Like connectionism, LISP does not model the von Neumann architecture, so I don't think AI was relying exclusively on the von Neumann architecture before the arrival of the connectionist model. What distinguishes connectionism most from GOFAI is how it models knowledge representation: in connectionism, knowledge is represented implicitly in the system, as opposed to the explicit representation of GOFAI.

The implicit representation is a "pattern of connectivity" among the smallest processing units in a connectionist model (these small processing units are analogous to neurons). The units are connected to each other by connections of various weights. This pattern of connectivity can be represented as a matrix that Rumelhart calls a "connectivity matrix."

And now some ramblings:

The matrix sounds exactly like an adjacency matrix used to represent a graph, with the smallest processing units as the nodes. I don't know if it's helpful, but changes from one pattern of connectivity to another can be formalized, mathematically, as a multiplication of the connectivity matrix C at time t by some transformation matrix T, so that C(t+1) = C(t) * T (Rumelhart showed a similar formula for when individual nodes are activated). Playing around with knowledge representation in a connectionist model is "just" playing around with an adjacency matrix.
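To make the analogy concrete, here is a small numpy sketch of my own (a toy example, not Rumelhart's notation): the connectivity matrix is structurally just the weighted adjacency matrix of a directed graph, propagating activation is a matrix-vector product, and changing the pattern of connectivity is an update to the matrix itself.

    import numpy as np

    # Weighted "connectivity matrix" for three units: entry W[i, j] is the
    # weight of the connection from unit j to unit i. Structurally it is
    # just the weighted adjacency matrix of a directed graph.
    W = np.array([[0.0, 0.5, -0.3],
                  [0.8, 0.0,  0.2],
                  [0.0, 0.4,  0.0]])

    a = np.array([1.0, 0.0, 0.5])   # current activations of the three units

    # One update step: net input through the connections, then a squashing
    # function applied to each unit.
    a_next = np.tanh(W @ a)
    print(a_next)

    # Changing the pattern of connectivity (i.e. "learning") is an update of
    # the matrix itself; here, a simple Hebbian-style adjustment.
    W_next = W + 0.1 * np.outer(a_next, a)
    print(W_next)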

Tuesday, November 07, 2006

Researchers Create Artificial Retina From Silicon

I came across an interesting article today about artificial retinas.

From the article:

Researchers from the University of Pennsylvania and Stanford University have made a breakthrough in the field of vision. Kareem Amir Zaghloul and Kwabena Boahen have proposed a silicon retina that reproduces signals in the optic nerve, a technology which could be used to provide vision to those who suffer from blindness-related diseases, such as retinitis pigmentosa.

. . .

“We morphed our retinal model into a silicon chip by replacing each synapse or gap junction in our model with a transistor,” Zaghloul and Boahen revealed. “One of its terminals is connected to the pre-synaptic node, another to the post-synaptic node and a third to the modulatory node. By permuting these assignments, we realize excitation, inhibition and conduction, all of which are under modulatory control.”

It's too bad the article doesn't explain how everything works, but I suppose that's a good reason to read the journal article and take a neurophysiology course and an eye anatomy course.