Thursday, June 02, 2011
It is some scholars’ nightmare to find out that a model they have been using for decades turns out not to be exactly correct. You have to rethink all your conclusions and see which ones are still true and which ones aren’t. If many of them aren’t, it can jeopardize all the teaching, research, and publishing you have done over the years. Plus it hits your self-image a bit.
Something like this happened to me today. I always knew that I was using a simplified version of how the brain is wired to explain human cognition, emotion, and information processing. The subject is too complex for any one person to be an expert in neuropsychology, human factors/usability, productivity management, and innovation all at once. I have a wider range than most, but I still need to set a reasonable scope somewhere.
I won’t burden you with the details, but think of it this way. My model starts with the fact that each of the roughly 100 billion neurons in the human brain is connected to tens of thousands of others. What makes our brains so complex is that while many of the connections are local (memory to memory, emotion to emotion), we also have a rich set of connections that go between areas and allow us to make complicated associations.
I was reading an article by a PhD candidate in neuroscience at Columbia who was citing recent research by Tom Bartol of the Salk Institute. What I learned is that there is a much bigger gap than previously thought between the end of an axon and the dendrite it connects to. As a result, a lot of the neurotransmitters that drive brain activity get lost in between and can activate the wrong neuron.
The reason this is important to me is that it changes the way information spreads through the brain. It adds a much bigger random component than we thought. It makes focused attention harder and increases the chances of things like daydreaming and ADD. It increases the frequency and types of errors that we should expect people to make. But it is good for creativity and innovation.
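To make the idea concrete, here is a toy Monte Carlo sketch of that random component. This is not a real neural model, and the probabilities are made up for illustration: each released signal reaches its intended neuron with some probability, and a wider synaptic gap lowers that probability, so more signals go astray or get lost.

```python
import random

def propagate(n_signals, p_intended, seed=0):
    """Toy model of noisy synaptic transmission.

    Each of n_signals reaches its intended neuron with probability
    p_intended; of the signals that miss, we assume (arbitrarily)
    that half activate a stray neighbor and half are simply lost.
    Returns counts of (intended, stray, lost) activations.
    """
    rng = random.Random(seed)
    intended = stray = lost = 0
    for _ in range(n_signals):
        r = rng.random()
        if r < p_intended:
            intended += 1
        elif r < p_intended + (1 - p_intended) * 0.5:
            stray += 1
        else:
            lost += 1
    return intended, stray, lost

# Narrow gap: most transmitter reaches its target.
print(propagate(10_000, 0.9))
# Wider gap, as the research described above suggests:
# fewer intended activations, more stray ones.
print(propagate(10_000, 0.6))
```

Even in this crude sketch, lowering the delivery probability shifts activity from intended targets to stray neighbors, which is the flavor of the argument: more noise means more errors and wandering attention, but also more unexpected associations.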
This doesn’t change any of my conclusions (thankfully!). It just increases the probability of some things and decreases the probability of others. While most of you probably couldn’t care less, this really blows my mind. And as an open-minded scholar, I love it when this happens. Not only did I learn something today, but I learned something fundamental. I feel like contacting all of the students I have ever had in a theoretical human factors course and sending them an update.