I want to start out by explaining what my idea is and is not. First of all, this is not a complete theory of mind; I think it lends itself to a possible theory, but by its lonesome self it is not something that should spur anyone to run out to a computer lab and start programming. As a matter of fact, as I’ll note in places, there are points where I want to gather empirical evidence.
Second, this is not meant to refute or bolster Searle’s argument. What I am doing here is taking the conclusion of the Chinese Room Argument to be: it is not clear how syntax can give rise to semantics. Searle, of course, claims it is impossible, but I think it is more prudent to stop at saying that we simply don’t know whether it is possible, and if it is, we don’t know how.
Third, what I am addressing is the question: what else could it be? I am trying to offer here a possible alternative to a Turing Machine. Now, two questions seem to arise immediately from my saying this: (a) am I actually saying that the syntax of a Turing Machine can’t give rise to semantics; and (b) am I saying that my alternative is actually what gives rise to semantics? My answers: no, and no. I do not know whether TMs can or cannot give rise to semantics, and in fact I think it is difficult if not impossible to prove one way or the other with the knowledge we currently have. In other words, I claim temporary agnosticism. Also, the alternative I am proposing is by no means a definite solution, only what I believe to be a possible solution. I think it would be as difficult to prove or disprove the feasibility of my solution as that of a TM.
If the thought crosses your mind that my argument will be for naught if it doesn’t offer a conclusive insight, then rest assured, as I believe there is a specific, non-arbitrary set of empirical evidence that is required to make my theory complete, some of which has already been gathered. To that effect, part of the conclusion of my idea here will be a plan for future study of various subject matters.
With that out of the way, here is my idea:
1. As described by Minsky, a Society of Mind is necessary and sufficient for having/making a mind.
2. The massively parallel multi-agent system that comprises the mind cannot be emulated by a single TM.
3. Following 2, Searle’s reply (Mooney’s Reply H) that Minsky’s Society (being a “connectionist architecture”) can be reduced to a single TM does not apply.
Now clearly, I must justify premise 2. Obviously, a single TM can emulate multiple TMs running in parallel; I previously offered the example of an eight-core processor being emulated by a single-core one to illustrate this.
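To make this sense of emulation concrete, here is a minimal Python sketch (the two step functions are arbitrary stand-ins of my own, not anything from Minsky or Searle): a single sequential loop that interleaves two independent machines reproduces exactly the joint state sequence that true parallel execution would produce, only slower.

```python
# Sketch: one sequential process emulating two "machines" in parallel.
# Each machine is just a step function over its own state; interleaving
# their steps on a single processor yields the very same sequence of
# joint states that two independent processors would compute.

def step_a(state):
    return state + 1          # machine A: increment its counter

def step_b(state):
    return state * 2          # machine B: double its value

def run_parallel_emulated(ticks):
    a, b = 0, 1
    history = []
    for _ in range(ticks):
        a = step_a(a)         # one round-robin pass = one "parallel" tick
        b = step_b(b)
        history.append((a, b))
    return history

# After 3 ticks the joint state history is identical to what two
# independent processors would have produced: [(1, 2), (2, 4), (3, 8)]
```

The equivalence here is step-for-step, not merely input-output, which is the point of the emulation/simulation distinction drawn below.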
The reason, then, that a single TM cannot emulate a society of Minsky’s agents is that these agents bring with them something that can at best be simulated, not emulated. This is an important distinction: by emulation I mean just what I meant before, that implementing multiple TMs, or an architecturally different TM, on a single TM is wholly and completely equivalent (albeit computationally slower); by simulation, however, I mean an approximate functional equivalence.
Notice that I did not apply “functional” to the definition of emulation; an emulation is not merely functionally equivalent, but a one-for-one equivalence at the lowest level of operation (namely, the manipulation of symbols). Simulation, then, is limited by the facts that (1) it is not operationally equivalent (which by itself would not matter) and (2) it can only approximate the behaviors and solutions of that which is being simulated.
So what is it about Minsky’s society that cannot be emulated by a TM? It is the environment in which the agents operate. As I envision it, the system comprising these agents is not one unified structure, but numerous disparate structures that connect to and disconnect from each other in a particular medium of interaction. This medium, I maintain, should not be overlooked, and in fact cannot be, as the medium in which the agents persist plays an equally important role in determining the total state of the mind system.
There is a recursive dynamic generative relation* between an agent and its environment. In any interaction between an agent and its environment, the state of the environment will determine the state change made by the agent, and likewise the state of the agent will determine the state change of the environment. Thus, any state of an agent, S(a), will be determined by two factors: the previous state of the agent and the state of the environment, S(e). And the same is true of the state of the environment, as its change in state is determined by its own previous state and the state of the agent.
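This coupled update can be sketched in a few lines of Python. The particular update rules below are hypothetical placeholders of my own invention; the point is only the structure: each new state is computed from the previous states of both the agent and the environment, simultaneously.

```python
# Sketch of the recursive generative relation. The coefficients are
# arbitrary; what matters is that each update reads the OTHER party's
# previous state as well as its own.

def next_agent(agent, env):
    # S(a) is determined by the agent's previous state and S(e)
    return 0.5 * agent + 0.5 * env

def next_env(env, agent):
    # S(e) is determined by the environment's previous state and S(a)
    return 0.9 * env + 0.1 * agent

agent, env = 0.0, 1.0
for _ in range(5):
    # Tuple assignment evaluates the whole right-hand side first,
    # so both updates see the same previous tick (a simultaneous update).
    agent, env = next_agent(agent, env), next_env(env, agent)
```

With these placeholder rules the two states gradually draw toward each other, which is one simple way a mutual determination can play out; other rules would produce other dynamics.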
Now, the important point in my claim that this cannot be fully implemented on a finite state machine is that the environment is continuous. The agents themselves may be discrete, meaning they have a finite number of states, but the environment in which they persist and operate is continuous, meaning it has an infinite number of states. The best a discrete state machine can do is simulate the environment.
This makes sense, conceptually. The agents are not in constant communication with one another; connections may be broken, lost, and then found again. Or signals may still be sent and received, but only through this dynamic and continuous medium of exchange.
Searle complains that it is not possible for syntax to give rise to semantics, as there is no clear moment in the process of manipulating symbols that could be identified as crossing a threshold into semanticity. Because any discrete state machine can be implemented on another, even the most complex of machines can be reduced to a simple structure of the most outlandish composition, be it beer cans and string, or Chinese symbols and a naïve English-speaking man.
So, since it is not clear how any system with a finite number of configurations of symbols can have semanticity, then perhaps it is necessary for it to have an infinite number of configurations—but not of symbols. The environment in which these agents operate would preserve and contain non-symbolized information, emitted from the agents themselves and from the external world.
This brings us to an important point: a TM loses information when it assigns symbols. Such is the nature of a discrete machine operating in a continuous world.
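As a toy illustration of this loss (the `symbolize` function and its ten levels are my own invented example, not anyone’s model of the brain): once a continuous value has been mapped to a symbol, distinct states become indistinguishable, and the original value cannot be recovered from the symbol alone.

```python
# Sketch: discretizing a continuous quantity collapses distinct values
# onto the same symbol, irreversibly discarding the difference.

def symbolize(x, levels=10):
    """Map a continuous value in [0, 1) to one of `levels` symbols."""
    return int(x * levels)

a, b = 0.42, 0.48                 # two distinct continuous states...
assert symbolize(a) == symbolize(b) == 4   # ...one and the same symbol
```

However fine the grain of the symbol set, some pair of continuous states will always share a symbol; a continuous medium, by contrast, would preserve the difference.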
Perhaps, then (and this is my argument in all its glory), in order for a system to have understanding it must preserve information that would otherwise be lost in a purely symbol manipulating machine. The environment in which Minsky’s agents operate would have the capacity to do so, as it is continuous and may be in an infinite number of states.
Now, clearly, I am lacking knowledge in the subject of neurophysiology. Searle holds that only brains cause minds, but this does not give us any clear way to abstract away from the specific biochemical activities exhibited by the brain. In citing the environment as a structure in which total information may be preserved, I believe a clear road for research can be shown.
Of course, as I wrote before, this is not even a partially complete theory, but merely an idea. I might be entirely wrong on this, as I might discover upon reading material on neurophysiology that the brain is a wholly unified structure with no “gaps” or “spaces” essential to its operation. However, from the little I do know, I am led to believe that the idea I have laid out is relevant, and that such an environment can contribute to the dynamics of the whole system in a way not otherwise possible.
*I owe this phrase to Mpodozis, Letelier, and Maturana
I just want to tag this on, here at the end, for purposes of cognitive mastication:
"The behavior of an agency is therefore determined both by its internal connection pattern and state, and by its external environment, as reflected by the input signals."