Sunday, November 12, 2006

Giving Rise to Semantics

I want to start out by explaining what my idea is and is not. First of all, this is not a complete theory of mind; I think it lends itself to a possible theory, but by its lonesome self it is not something that should spur anyone to run out to a computer lab and start programming. As a matter of fact, as I’ll note in places, there are points where I want to gather empirical evidence.

Second, this is not meant to refute or bolster Searle’s argument. What I am doing here is taking the conclusion of the Chinese Room Argument to be: it is not clear how syntax can give rise to semantics. Searle, of course, claims it is impossible, but I think it is more prudent to stop at saying that we simply don’t know whether it is possible, and if it is, we don’t know how.

Third, what I am addressing is the question, What else could it be? I am trying to offer here a possible alternative to a Turing Machine. Now, two questions seem to arise immediately from my saying this: (a) Am I actually saying that the syntax of a Turing Machine can’t give rise to semantics; and (b) Am I saying that my alternative is actually what gives rise to semantics? My answers: no, and no. I do not know whether TMs can or cannot give rise to semantics, and in fact I think that with the knowledge we currently have it is difficult, if not impossible, to prove it one way or the other. In other words, I claim temporary agnosticism. Also, the alternative I am proposing is by no means a definite solution, only what I believe to be a possible solution. I think it would be as difficult to prove the feasibility of my solution one way or the other as that of a TM.

If the thought crosses your mind that my argument will be for naught if it doesn’t offer a conclusive insight, then rest assured: I believe there is a specific, non-arbitrary body of empirical evidence required to make my theory complete, some of which has already been gathered. To that effect, part of the conclusion of my idea here will be a plan for future study of various subject matters.

***

With that out of the way, here is my idea:

  1. As described by Minsky, a Society of Mind is necessary and sufficient for having/making a mind.
  2. The massively parallelized multi-agent system that comprises the mind cannot be emulated by a single TM.
  3. Following 2, the reply by Searle (Mooney’s Reply H) that Minsky’s Society (being a “connectionist architecture”) can be reduced to a single TM does not apply.

Now, clearly, I must justify 2. Obviously, a single TM can emulate multiple TMs in parallel; I previously brought up the example of an eight-core processor being emulated by a single-core one to illustrate this.
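(To make concrete what I mean by that kind of emulation, here is a rough Python sketch. The “machines” below are invented toys, not real Turing Machines, but the point carries: a single sequential loop gives each of them exactly the transitions it would have made running in parallel, only more slowly.)

    # Toy sketch: one sequential loop stepping several independent
    # discrete machines in round-robin fashion. The "machines" are
    # invented placeholders, not real Turing Machines.
    class ToyMachine:
        def __init__(self, name):
            self.name = name
            self.state = 0
        def step(self, symbol):
            # Arbitrary transition rule: fold the input symbol into the state.
            self.state = (self.state + symbol) % 8

    machines = [ToyMachine("m%d" % i) for i in range(8)]
    tape = [1, 2, 3, 4, 5]

    # Interleaving: each machine receives the same sequence of transitions
    # it would receive running "in parallel", just spread out in time.
    for symbol in tape:
        for m in machines:
            m.step(symbol)

    print({m.name: m.state for m in machines})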

The reason, then, that a single TM cannot emulate a society of Minsky’s agents is that these agents bring something with them that can at best be simulated, not emulated. This is an important distinction: by emulation I mean just what I meant before, that implementing multiple TMs, or an architecturally different TM, on a single TM is wholly and completely equivalent (albeit computationally slower); by simulation, however, I mean an approximate functional equivalence.

Notice that I did not apply “functional” to the definition of emulation; it is not merely functionally equivalent, but rather a one-for-one equivalence at the lowest level of operation (namely the manipulation of symbols). Simulation, then, is hindered by the fact that (1) it is not operationally equivalent (which by itself would not matter) and (2) it can only approximate the behaviors and solutions of that which is being simulated.

So what is it about Minsky’s society that cannot be emulated by a TM? It is the environment in which the agents operate. As I envision it, the system that comprises these agents is not one single unified structure, but numerous disparate structures that connect to and disconnect from each other in a particular medium of interaction. This medium, I maintain, should not be overlooked, and in fact cannot be, as the medium in which the agents persist plays an equally important role in determining the total state of the mind system.

There is a recursive dynamic generative relation* between an agent and its environment. In any interaction between an agent and its environment, the state of the environment determines the state change made by the agent, and the state of the agent determines the state change of the environment. Thus, any state of an agent, S(a), is determined by two factors: the agent’s previous state, S(a-1), and the state of the environment, S(e). And the same is true of the environment, whose change in state is determined by its own previous state and the state of the agent.
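(A minimal sketch of this recurrence, with update rules invented purely for illustration: each new agent state is computed from the previous agent state and the previous environment state, and the new environment state is computed the other way around.)

    # Sketch of the recursive generative relation: the agent's next state
    # depends on its previous state and the environment's previous state,
    # and vice versa. Both update rules below are arbitrary placeholders.
    def agent_update(agent_state, env_state):
        return (agent_state + round(env_state)) % 5   # discrete agent

    def env_update(env_state, agent_state):
        return 0.9 * env_state + 0.1 * agent_state    # continuous environment

    a, e = 0, 1.0
    for t in range(10):
        # Both right-hand sides are evaluated before assignment,
        # so each update sees the other's previous state.
        a, e = agent_update(a, e), env_update(e, a)
        print(t, a, round(e, 3))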

Now, the important point, when I make the claim that this cannot be fully implemented on a finite state machine, is that the environment is continuous. The agents themselves may be discrete—meaning they have a finite number of states—but the environment in which they persist and operate is continuous—meaning it has an infinite number of states. The best that a discrete state machine can do is to simulate the environment.

This makes sense, conceptually. The agents are not in constant communication with one another; connections may be broken, lost, and then found again. Or signals may still be sent and received, but only through this dynamic and continuous medium of exchange.

Searle complains that it is not possible for syntax to give rise to semantics, as there is no clear moment in the process of manipulating symbols that could be identified as crossing a threshold into semanticity. Because any sort of discrete state machine can be implemented on another, even the most complex of machines can be reduced to a simple structure of the most outlandish composition, be it beer cans and string or Chinese symbols and a naïve English-speaking man.

So, since it is not clear how any system with a finite number of configurations of symbols can have semanticity, then perhaps it is necessary for it to have an infinite number of configurations—but not of symbols. The environment in which these agents operate would preserve and contain non-symbolized information, emitted from the agents themselves and from the external world.

This brings us to an important point: a TM loses information when it assigns symbols. Such is the nature of a discrete machine operating in a continuous world.
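(A trivial sketch of the kind of loss I mean, with numbers made up for the purpose: once continuous readings are binned into a finite alphabet of symbols, distinct states of the world collapse into the same symbol and can no longer be told apart.)

    # Assigning symbols to a continuous quantity discards information:
    # many distinct values map to the same symbol, irreversibly.
    def symbolize(x, levels=4):
        # Bin a reading in [0, 1) into one of `levels` discrete symbols.
        return min(int(x * levels), levels - 1)

    readings = [0.12, 0.24, 0.26, 0.74, 0.76]
    print([symbolize(r) for r in readings])
    # -> [0, 0, 1, 2, 3]: 0.12 and 0.24 are no longer distinguishable.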

Perhaps, then (and this is my argument in all its glory), in order for a system to have understanding it must preserve information that would otherwise be lost in a purely symbol manipulating machine. The environment in which Minsky’s agents operate would have the capacity to do so, as it is continuous and may be in an infinite number of states.

Now, clearly, I am lacking knowledge in the subject of neurophysiology. Searle holds that only brains cause minds, but this does not give us any clear way to abstract away the specific biochemical activities exhibited by the brain. In citing the environment as a structure in which total information may be preserved, I believe a clear road for research can be shown.

Of course, as I wrote before, this is not even a partially complete theory, but merely an idea. I might be entirely wrong on this, as I might discover upon reading material on neurophysiology that the brain is a wholly unified structure with no “gaps” or “spaces” essential to its operation. However, from the little I do know, I am led to believe that the idea I have laid out is relevant, and that such an environment can contribute to the dynamics of the whole system in a way not otherwise possible.

*I owe this phrase to Mpodozis, Letelier, and Maturana

***

I just want to tag this on, here at the end, for purposes of cognitive mastication:

"The behavior of an agency is therefore determined both by its internal connection pattern and state, and by its external environment, as reflected by the input signals."

Link

3 Comments:

Blogger Phillip Dreizen said...

"So, since it is not clear how any system with a finite number of configurations of symbols can have semanticity, then perhaps it is necessary for it to have an infinite number of configurations—but not of symbols"

Isn't this exactly what a TM can do? The set of symbols in the input alphabet and tape alphabet are finite, but since the "tape" is of unlimited size, the number of configurations is infinite?

What your theory is stating, implicitly, is that the human mind is not a computer, because the environment the human mind exists in is incomputable.

11:47 PM  
Blogger Phillip Dreizen said...

What about the massive parallelism in a connectionist model solves your problem of uncomputability?

11:51 PM  
Blogger Jonathan said...

"Isn't this exactly wha a TM can do? The set of symbols in the input alphabet and tape alphabet are finite, but since the "tape" is of unlimited size, the number of configurations is infinite?"

I knew you would say that, and I had meant to replace all of the "Turing Machines" with "discrete state machine" or "finite state machine." I suppose I should just let it stand now though, and respond to it here.

I think that continuously throwing more and more memory at the problem would be an effort in futility. Technically, yes, a TM could, with its infinite memory, implement a system with an infinite number of states, but are we really going to build a machine to do just that? Don't you think that doing so would be so computationally expensive as to be infeasible? I think it would be more fruitful to actually provide a physical environment in which these agents could operate.

However, I'm willing to be proven wrong, if I find an example of said environment simulated to a sufficient degree of precision while remaining time efficient.

"What your theory is stating, implicitly, is that the human mind is not a computer, because the environment the human mind exists in is incomputable."

The human mind is the composite of finite state agents operating in an environment that has an infinite number of states, the latter of which is incomputable.

"What about the massive parallelism in a connectionist model solves your problem of uncomputability?"

As I envision it, the neural networks are the basic units with which to compose a society of mind. This is obviously not a formal account of how to build a mind, but I think it lends itself towards a goal that once attained can be used to build an argument for how the mind works.

Furthermore, allow me to quote Rumelhart for a response:

"It is crucial in the development of any model to have a clear representation of the environment in which this model is to exist. For connectionist models, we represent the environment as a time-varying stochastic function over a space of possible input patterns"
Rumelhart, in Mind Design II, p. 215, 1997.

This is the environment that I am talking about. As Rumelhart describes it, the environment is represented by probabilistic functions, but as above, I question whether this is adequate. I have to go into more articles describing this environment for the connectionist architecture, and then also go into neurophysiology and see how it compares to what is actually going on in the brain.
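(For what it's worth, here is roughly how I picture that kind of representation, as a sketch with an invented pattern space and arbitrary probabilities: the "environment" is reduced to a distribution over input patterns that drifts over time.)

    import random

    # Sketch of an environment represented as a time-varying stochastic
    # function over a (tiny, invented) space of input patterns.
    patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def environment(t):
        # The probabilities drift with time; the rule here is arbitrary.
        weights = [1 + (t % 4 == i) for i in range(len(patterns))]
        return random.choices(patterns, weights=weights)[0]

    for t in range(8):
        print(t, environment(t))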

And clearly, that's a lot to get into...

10:30 AM  
