Thoughts

Replacing Neurons

Imagine: a mad scientist with a ray gun shoots at a neuron somewhere in cortical layer IV of your visual area MT, burning it up in a matter of microseconds (just for fun, imagine also that the ray gun leaves everything else intact).

With one neuron missing, you probably won't notice any perceptual change. But what if, one by one, all neurons in area MT went AWOL? You'd be stuck with an annoying inability to visually detect motion.

Now imagine that our fancy ray gun replaces every cell it hits with a magical transistor equivalent. These magical transistors have wires in place of each and every dendrite, a processing core, and wires in place of the axon. Naturally, the computational core analyzes the sum of all inputs and instructs the axon to "fire" accordingly. Given any set of inputs to the dendrite wires, the output of the axon wires is indistinguishable from that of the deceased neuron.
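To make the "processing core" concrete, here is a minimal sketch in Python, assuming the simplest possible firing rule (a weighted sum compared against a threshold, in the spirit of a McCulloch-Pitts unit). The weights and threshold are made-up placeholders; real neurons integrate inputs over time, so treat this as an illustration, not a model of MT.

```python
# A toy sketch of the "magical transistor": a weighted-sum-and-threshold
# unit. The weights and threshold below are hypothetical placeholders; a
# true replacement would have to match the dead neuron's output for
# *any* pattern of dendritic input.

def transistor_neuron(dendrite_inputs, weights, threshold):
    """Sum the weighted dendritic inputs and 'fire' (return 1) if the
    total reaches the threshold; otherwise stay silent (return 0)."""
    total = sum(w * x for w, x in zip(weights, dendrite_inputs))
    return 1 if total >= threshold else 0

# Three dendrite wires feeding one processing core:
print(transistor_neuron([1, 0, 1], weights=[0.6, 0.2, 0.5], threshold=1.0))  # -> 1
```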

We can still imagine that with one neuron replaced by one magical transistor, there wouldn't be any perceptual change. But what happens when more and more cells are replaced with transistors? Does perception change? Will our subject become blind to motion, as if area MT weren't there? Or will motion detection be just as good as with the real neurons? I am tempted to vote in favor of "No change [we can believe in]," but I have to remain skeptical: there is simply no direct evidence for either stance.

Ray guns aside, it is not hard to see that a computational model of a brain circuit is a candidate replacement for real brain parts (especially considering the computational success of the Blue Brain Project's cortical column, which comprises 10,000 model neurons and many more connections among them). For example, we can imagine thousands of electrodes in place of inputs to area MT that connect to a computer model (instead of to MT neurons); the model's outputs are then connected, via other electrodes, to the areas that the real MT projects to, and ta-da! Not so fast. This version of the upgrade doesn't shed any more light on the problem than the first, but it does raise some questions: do the neurons in a circuit have to be connected in one specific way in order for the circuit to support perception? Or is it sufficient simply for the outputs of the substitute to match those of the real circuit, given any set of inputs? And what if the whole brain were replaced with something that produced the same outputs (i.e., behavior) given a set of sensory inputs - would that "brain" still produce perception?
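Software gives a familiar analogue of this input-output question. The deliberately trivial, hypothetical sketch below shows two "circuits" with completely different internal wiring that are nonetheless indistinguishable from the outside, which is exactly the property the magical transistors are stipulated to have; whether that property suffices for perception is the open question.

```python
# Toy illustration of functional equivalence: two implementations that
# differ internally but agree on every input. (Hypothetical example;
# no claim about real neural circuits.)

def circuit_a(spike_count):
    # One wiring: doubles the count by repeated addition.
    return spike_count + spike_count

def circuit_b(spike_count):
    # A different wiring: doubles the count with a bit shift.
    return spike_count << 1

# From the outside, no test input can tell the two apart:
assert all(circuit_a(n) == circuit_b(n) for n in range(10_000))
```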

Extra extra!!! Storm brewing in espresso shot!


The media is always hungry for juicy stories about anything. Topics of interest range from Lindsay Lohan's latest adventures to the implications of another political ethics violation. The science writers at the New York Times are no exception. Dennis Overbye confessed in an essay yesterday that some writers are so eager to report on sensational findings that they sometimes hype up their stories.

Shocking! Overbye gives an example of one such NYT article, which reported the amazing story that scientists had found hints of the elusive and mysterious dark matter in a Minnesota mine. The article stirred up a hysteria, he says, but it eventually left people disappointed when someone cared to report that the detected signal was barely above what chance alone would produce. Overbye goes on to condemn the internet for spreading rumors, but he fails to note that the original hyped report on dark matter was written by him!

Perhaps our trusted science writers should do a bit more research before they publish their articles. But wait! They need the stories, and they need those stories to be catchy, dammit! Their job isn't to educate readers on the current state of whatever scientific field; their job is to report the latest findings. The more controversial the findings, the better. There's a new article every day about how exercise is good for you (or is it bad? I can't remember anymore) or how prostate and breast cancer exams have been wrong all these years (don't worry - they'll turn out to be right again next week). No wonder Americans are confused about their health.

Individual studies are great, but they have to be taken in context and have to stand the test of time. Most findings in basic science research are incremental; it's the knowledge accumulated over many experiments and many years that gives us the big picture of any one field. So the next time you read about "a new study," take it with a critical grain of salt.

Toasters With Feelings

(Image: The Brave Little Toaster)

Anthropomorphism is the attribution of human characteristics to inanimate objects, animals, or God. It has been a hallmark of faiths and religions worldwide. Humans have a natural tendency to assign intentions and desires to inanimate objects ("my computer isn't feeling well today - he's so slow!"), yet they strip "lower" beings (animals) of those same human characteristics.

We have a history of treating animals with unnecessary cruelty. I don't mean killing for food - that's necessary for our survival; I'm referring to dogfighting, hunting, and other violence. We didn't even think that animals could feel pain until quite recently!

Why do we think of lifeless forms as agents with intentions but of actual living creatures as emotionally inferior clumps of cells?

Could it be that the need to rationalize phenomena is simply stronger when the phenomena have absolutely no visible explanation?

And do toasters really have feelings??