Friday, November 4, 2011

Connectionism, part I


Thinking in terms of this connectionism thing has helped me make a bunch of connections I hadn't made before (hah... umm... seriously), and it's mostly not something I can put down on paper. It's a basic framework to see things in, so it needs to be pondered for a long while. It's a big fucking deal.

It's worth noting that most of my thought on this has been my own - trying to actually apply it to things I see, rather than keeping close to the theoretical work. I might be describing things differently and even making some mistakes, and I'm certainly being hand-wavy. However, I don't think I'm going out too far on a limb here.

Okay, so we're looking at our mind in some abstract n-dimensional space; now what? How many dimensions are there? What are the rules for getting from one state to another?

Our brains work by neurons forming a neural net and doing their thing. If you have N nodes, then the activation state can be described by an N-dimensional vector (which lives in your N-dimensional state space), and the N^2 connections and their associated strengths by an N×N matrix.

Connection strengths change (i.e., learning), so the evolution of mental states must take the connection strengths into account. In total, there are N + N^2 values to store: N activation levels and N^2 connection strengths. I'll distinguish between the activation state (the N activations) and the total state (all N + N^2 values).
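
To pin down the bookkeeping, here's a minimal sketch (the node count and zero initialization are arbitrary placeholders):

```python
import numpy as np

N = 1000  # number of nodes (arbitrary, just for illustration)

activations = np.zeros(N)   # activation state: N values
weights = np.zeros((N, N))  # connection strengths: N^2 values

# Total state = activations plus connection strengths = N + N^2 values.
total_values = activations.size + weights.size
assert total_values == N + N**2
```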

Basically, each node has some level of activation, which spreads to all the other nodes after being multiplied by the connection strengths; the activation of those nodes then spreads further based on their connection strengths, ad infinitum...

In math, the activation state is multiplied by the connection matrix to get the next activation state, which you multiply by the matrix again... (I say "multiplied", but it's nonlinear: the next state is a2_i = f(Σ_j W_ij · a1_j) for some nonlinear f applied after the matrix multiply, not the plain linear product Σ_j W_ij · a1_j.)
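
A minimal sketch of that update step, using tanh as a stand-in for whatever nonlinearity real neurons implement:

```python
import numpy as np

def step(W, a):
    """One spreading step: linear mix through the connections, then a nonlinearity."""
    return np.tanh(W @ a)

rng = np.random.default_rng(0)
N = 5
W = rng.normal(scale=0.5, size=(N, N))  # random connection strengths

a = np.zeros(N)
a[0] = 1.0          # light up one node
for _ in range(3):  # activation spreads outward with each step
    a = step(W, a)
```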

This leads to spreading activation: activation spreads from one node to its neighbors, from those to their neighbors, diminishing as it goes. Priming is a perfect demonstration of this. If you light up the "bird" node, then you're more likely to think of "dove" as a bird than as the past tense of "to dive", due to the activation leaking over.
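
Here's a toy version of that priming story; the nodes and weights are entirely made up to illustrate the mechanism, not a real lexicon:

```python
import numpy as np

nodes = ["bird", "dove (bird)", "dove (past tense)", "dive"]

# W[i, j] = connection strength from node j to node i (hand-picked values).
W = np.array([
    # bird  dove-b dove-p dive
    [0.0,   0.0,   0.0,   0.0],  # into "bird"
    [0.6,   0.0,   0.0,   0.0],  # "bird" leaks into the bird sense of "dove"
    [0.0,   0.0,   0.0,   0.6],  # "dive" leaks into the past-tense sense
    [0.0,   0.0,   0.0,   0.0],  # into "dive"
])

a = np.zeros(len(nodes))
a[0] = 1.0          # prime "bird"
a = np.tanh(W @ a)  # one spreading step

# The bird sense of "dove" now has a head start over the past-tense sense.
for name, level in zip(nodes, a):
    print(f"{name}: {level:.2f}")
```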

A neat observation (whose significance I have yet to see) is that a change of coordinate system can make each 'node' non-local, which we know to be true of how the brain works. Nothing in the math jumps out at me as an obvious choice for the new coordinate system, or even how you'd notice which one is being used (that seems like a valid question, given that from the inside it seems like I have distinct concepts...), but perhaps it's just that different parts of the brain do different types of things: the emotional component of a concept coming from one place, the color from another, etc.
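
To make the non-locality concrete: for the linear part of the dynamics, an orthogonal change of basis smears each original node across all the new coordinates without changing the evolution at all (the elementwise nonlinearity would single out the original basis, so this is only exact for the linear piece):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
W = rng.normal(size=(N, N))  # connection matrix
a = rng.normal(size=N)       # activation state

# Random orthogonal change of basis: each new coordinate mixes all old nodes,
# so a single "concept" becomes non-local in the new coordinates.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))

a_new = Q @ a        # same state, new coordinates
W_new = Q @ W @ Q.T  # same connections, new coordinates

# The linear update is identical in either basis:
assert np.allclose(Q @ (W @ a), W_new @ a_new)
```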

Anyway, this connectionism thing seems to predict that "Name someone who isn't Genghis Khan" returns "Genghis Khan!" as an answer. And indeed it does - at first. We then do conflict-monitoring stuff in our anterior cingulate cortex (ACC) and reject it.

I'm not really sure how that part fits in, so I treat it as a black box sitting on top of the neural-net mess. The whole state-space picture still works, of course, but the state transitions have to be modeled purely as higher-up reasoning.


http://en.wikipedia.org/wiki/Connectionism
http://en.wikipedia.org/wiki/Spreading_activation
http://lesswrong.com/lw/6q5/connectionism_modeling_the_mind_with_neural/
http://lesswrong.com/lw/7mx/your_inner_google/


