Saturday, November 11, 2006

First Impressions of Connectionism (Rumelhart)

With this article we move away from asking what is theoretically possible (can machines think?). Instead we presume that we can design thinking machines, but wonder how to make the problem tractable.

Connectionism offers a methodology that seems to correspond well with the way human minds work. It moves away from the von Neumann model of computation to a model that Rumelhart calls "neurally inspired." Proponents of connectionism claim this model is conducive to the sorts of algorithms that will be needed to design intelligent machines.

The shift in architecture doesn't change what is theoretically possible. A von Neumann computer can emulate other architectures; any algorithm can be computed by any Turing Machine. But the change in architecture does lead to a change in how we model cognition. We know that changing paradigms can have potent effects. When programmers shifted to the structured programming paradigm, they became capable of meeting the demands that sophisticated software systems made of them. Without the paradigm shift, it's probable that they would have been unable to program software at the level of complexity that is currently produced. Designing the complicated systems they do now would have been theoretically possible prior to adopting the structured programming paradigm, but it wouldn't have been practically possible.

It's worth noting that AI had for a long time been dominated by the LISP programming language. Like connectionism, LISP does not model the von Neumann architecture; so I don't think AI had been using the von Neumann model exclusively until the arrival of connectionism. What most distinguishes connectionism from GOFAI is how it models knowledge representation. In connectionism, knowledge is represented implicitly in the system, as opposed to the explicit representation of GOFAI.

The implicit representation is a "pattern of connectivity" among the smallest processing units in a connectionist model (these small processing units are analogous to neurons). These units are connected to each other by various weights. This pattern of connectivity can be represented as a matrix that Rumelhart calls a "connectivity matrix."
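To make "implicit representation" concrete, here's a toy sketch of my own (not from Rumelhart's article): a linear associator that stores an input-output association entirely in its weight matrix. Nothing in the system explicitly encodes the association; it exists only as the pattern of connection weights.

```python
import numpy as np

# Toy example (my own, not from the article): a linear associator.
# The association x -> y is stored implicitly in the weights W;
# no symbol or rule anywhere says "x maps to y".

x = np.array([1.0, -1.0, 1.0])   # input pattern
y = np.array([1.0, 1.0])         # output pattern to associate with x

# Hebbian-style outer-product learning: each weight W[i, j] connects
# input unit j to output unit i. Normalize by |x|^2 for exact recall.
W = np.outer(y, x) / np.dot(x, x)

# Recall: presenting x reproduces y purely through the connectivity.
recalled = W @ x
print(recalled)  # [1. 1.]
```

The point is that inspecting any single weight tells you nothing; the "knowledge" is the whole pattern of connectivity, which is exactly the contrast with GOFAI's explicit symbols.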

And now some ramblings:

The matrix sounds exactly like an adjacency matrix used to represent a graph, with the smallest processing units as the nodes. I don't know if it's helpful, but changes from one pattern of connectivity to another can be formalized, mathematically, as a multiplication of the connectivity matrix C at time t by some transformation matrix T, so that C(t+1) = C(t) * T (Rumelhart showed a similar formula for when individual nodes are activated). Playing around with knowledge representation in a connectionist model is "just" playing around with an adjacency matrix.
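The rambling above can be sketched in a few lines of NumPy. This is my own illustration, and the particular matrices (and the tanh squashing function for node activation) are assumptions, not anything from the article:

```python
import numpy as np

# The pattern of connectivity as a weighted adjacency matrix:
# C[i, j] is the weight of the connection from unit i to unit j.
C = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [0.2, 0.0, 0.0]])

# A change of connectivity as C(t+1) = C(t) * T. Here T is a simple
# transformation (uniform decay of every weight by 10%); any linear
# rewiring of the graph could be expressed the same way.
T = 0.9 * np.eye(3)
C_next = C @ T          # every weight scaled by 0.9

# Node activation, in the same matrix style: a unit's new activation
# comes from its weighted inputs, passed through a squashing function
# (tanh here is my choice, just to have a concrete f).
a = np.array([1.0, 0.0, 0.0])   # only unit 0 is active
a_next = np.tanh(C.T @ a)       # activation flows along the edges of C
```

Seen this way, both learning (changing C) and processing (propagating activations through C) are ordinary operations on an adjacency matrix.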
