Descriptive vs. Predictive Models

When we look back at the important advances in neuroscience in the 20th and 21st centuries, what will we remember? What will we still find useful and worth pursuing further? The field is still in its nascent stages, even a century after Ramon y Cajal showed evidence for the neuron doctrine, establishing the neuron as the fundamental unit of the nervous system, and Brodmann published the cytoarchitecture studies that convinced the world that the brain is divided into distinct areas and likely uses them to divvy up processing. Yet we still have virtually no idea how the brain works: there is no central theory and no cure for any brain disease; only a whole lot of curious, enthusiastic and optimistic minds, and some funding to help them get stuff done. It is rightly so that some neuroscientists have serious physics envy, which pushes them to develop predictive models that (sometimes) give important insights into what mother nature did to make the brain work. A great example is the Hodgkin-Huxley model of the action potential. When Hodgkin and Huxley created the model in the early 1950s, biologists had little clue as to how cells generated such complex waveforms. Having observed conductance changes across the cell membrane during the action potential, Hodgkin and Huxley went on to show that the conductances were ion-selective and varied as functions of time and membrane potential. They then predicted that whatever mediated the ion conductances had to be voltage-sensitive and capable of fast molecular changes. This work launched a wide search for the ion conductors, which turned out to be voltage-gated sodium and potassium channels. The key point is that the model predicted something. Fast forward to 2011, and we still don't have a greater success story for predictive models than the Hodgkin-Huxley model.
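To get a flavor of what the model actually says - conductances that depend on voltage and time, driving the membrane potential - here is a minimal numerical sketch of the Hodgkin-Huxley equations. It uses the standard textbook parameters for the squid giant axon and a simple forward-Euler integration; it is an illustration, not Hodgkin and Huxley's original fitting procedure.

```python
import math

# Standard textbook HH parameters for the squid giant axon:
# capacitance (uF/cm^2), peak conductances (mS/cm^2), reversal potentials (mV).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening/closing rates for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + math.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)

def simulate(I_inj=10.0, dt=0.01, t_max=50.0):
    """Integrate the HH equations under a constant injected current (uA/cm^2)."""
    V = -65.0  # resting potential (mV)
    # start the gating variables at their steady-state values
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    trace = []
    for _ in range(int(round(t_max / dt))):
        # the conductances are functions of V and time, via m, h and n
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_inj - I_Na - I_K - I_L) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace))  # the membrane potential overshoots 0 mV: an action potential
```

Run with enough injected current, this toy version fires the familiar spike waveform out of nothing but those three gating variables - which is exactly why the model's prediction of fast, voltage-sensitive conductors was so compelling.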

Neuroscientists today are gathering data by the terabyte, describing amazing properties of neurons and networks, and moving on to the next experiments. A typical electrophysiological experiment, for example, involves electrode recordings from populations of cells, and describes the cells' firing properties while the brain is engaged in some behavior or other. What we need is the ability to make predictions about major principles based on the information we've gathered over the last century. Given that we observe gamma rhythms during object recognition, for example, can we predict not only whether gamma is required for that task, but how it helps the brain achieve it? Can we predict what features the brain must have to accomplish the task? If we predict that cortical connections constitute "small world" networks, can we better understand the rules for wiring? Better yet, can we infer what the wiring rules must be? As we develop ever more sophisticated tools to study the brain, we should have an easier time making predictions about how it works. We have to step up to the plate in the 21st century and produce theories that do more than describe what we see. These theories have to capture not only the complexity of the system, but also the relative simplicity by which the system is created.

Whale Song Analysis Crowd-sourcing

Scientific American is collaborating with marine scientists on a project to crowd-source the analysis of whale songs and calls. Having gathered thousands of sound files from many species of whales, scientists now need to classify each call and song to get an understanding of each species' repertoire. Once the calls and songs are sorted and classified, scientists can pursue interesting questions like: is a whale's song repertoire related to its intelligence? To classify the vocalizations, scientists are asking the public for help. On the project's website, anyone (no expertise required) can sift through spectrograms and embedded sound files and match them to a template. It's easy, fun and cool. Something that would take one person months or years can now be accomplished much faster by the public, in a fun format.

This follows previous efforts in scientific crowd-sourcing like FoldIt, a game in which people fold proteins based on simple rules (computers can't do this well), and the search for new galaxies by amateur astronomers in images taken by the Hubble telescope. Perhaps this type of effort could help the Connectome projects to map out the brain down to each synapse using electron microscopy, where every neurite in a cross-sectional image must be linked to itself in adjacent images. Tracing axons across thousands of EM images could actually make a fun and productive game.

What are experts for?

It is no mistake that philosophers from Plato to Hume to Adam Smith have advocated the division of labor as a driving force of society and its economy. The more specialized one's labor, the more advanced the resulting product. The more expertise people gain in their distinct fields, the more they are able to advance those fields. This is as true for labor as it is for science (in general; there is a lot to be said for multi- and cross-disciplinary approaches and the out-of-the-box thinking that specialization usually dampens, but on the whole it is undoubtedly more advantageous to specialize in a field than not). Scientific experts are the people society relies on to advance knowledge and establish facts; they are the people we go to when we need answers. Here is why we need them: Bret Stephens is a journalist and Wall Street Journal columnist whose training is, supposedly, in journalism and maybe economics (Wikipedia, which never lies, says he attended the London School of Economics). In his column today, Stephens writes about global warming. Entitled "The Great Global Warming Fizzle," the article compares climate change science to a religion - and a dying one too - whose adherents are "spectacularly unattractive people" and whose "claims are often non-falsifiable, hence the convenience of the term 'climate change' when thermometers don't oblige the expected trend lines."

Now if Bret Stephens were an environmental scientist with proper training, his criticism of climate science would be worth hearing. But just as we don't take physics advice from members of the Taliban (Sam Harris's favorite example), we shouldn't take climate advice from Bret Stephens. Has he seen the data in question? Would he know how to interpret it? Would he draw the same conclusion if money and government intervention were not factors? The worry is that he is not fulfilling his role as a journalist, in which he is expected to provide fair interpretations of the science and policy to a public of non-experts. Instead of facts, we get an opinion piece on something Stephens has no expertise in.

Here's the dilemma for those who care: is it better to ignore the vocal people who don't know what they're talking about, or to correct them and spread the right message? The latter is the more active and constructive choice, but the message should be proactive rather than reactive, as it is in this post.

Not Another Rodent

Slate magazine had a slideshow by Daniel Engber a little more than a week ago on unusual laboratory animals and why they're important. The slideshow was prompted by Engber's observation that mice and rats make up an enormous proportion of all lab animals, perhaps limiting what we can conclude from experimental results and narrowing our perspective on what questions to ask. In short, scientists need to start thinking outside the box when it comes to model organisms. Engber lists fourteen animals, some of which have already given important clues to specific questions. I will mention some of those here. 1. The squid: the squid peaked in importance in the 1950s, when Hodgkin and Huxley got the idea to use its giant axon (up to 1 mm in diameter) to study the properties of the action potential. The large diameter made it possible for them to insert microelectrodes directly into the intracellular space of the axon and measure the flow of ions across the membrane during various stages of the action potential, or under different extracellular ionic concentrations. This work resulted in Hodgkin and Huxley's mathematical model of AP generation, which earned them the 1963 Nobel Prize.

The squid giant axon (not "giant squid axon") was used primarily because its large diameter made it possible for researchers to stick electrodes into its lumen. These days, better equipment and techniques allow scientists to record the electrical activity of neurons with intracellular electrodes placed into cell bodies only tens of microns wide, in slice preparations or even in awake, behaving animals!

2. Xenopus laevis frogs are used for the eggs females produce - oocytes - which can be around 1 mm in diameter. As with the squid giant axon, these cells were first adopted because their large size makes them easy to handle.

Xenopus oocytes contain machinery for protein production, which scientists can hijack by injecting DNA or RNA encoding a desired protein. Once injected, the oocyte starts making that protein. Oocytes are typically used in electrophysiological experiments on the function of particular neurotransmitter- or voltage-gated ion channels, like the GABA-A receptor.

3. The zebra finch and other songbirds are great models for motor learning/planning and language acquisition. Adult male zebra finches sing the same song - a highly structured, complex sequence of sounds that requires equally sophisticated motor control - throughout their lives (hundreds of times per day). How the brain codes for such learned sequences and maintains them for years is a question of great interest, and arguably one more amenable to scientific study than motor control in primates (the typical model for such questions), whose movement repertoire is far more diverse and variable.

Other birds don't sing the same tune every time, but instead combine syllables or motifs into songs that differ in composition with each rendition. Canaries have a rich repertoire that may provide clues into sequence and even syntax learning: their songs contain non-random sequences of syllables, in which each syllable constrains what comes next, so any given transition point in a canary song depends on the syllable that came before. Even cooler, Bengalese finches were recently shown to spontaneously discriminate among songs of variable syntactic structure, indicating that birds can not only produce songs with hierarchical syllable structure, but perceive such structure too.
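The syllable-to-syllable dependency described above is, at its simplest, a first-order Markov process: the current syllable sets the probabilities for the next one. A minimal sketch, with made-up syllables and transition probabilities (not real canary data):

```python
import random

# Hypothetical transition table: each syllable determines what can
# follow it, and with what probability (first-order Markov dependency).
transitions = {
    "A": {"B": 0.7, "C": 0.3},
    "B": {"C": 0.6, "A": 0.4},
    "C": {"A": 0.5, "END": 0.5},
}

def sing(start="A", seed=0):
    """Generate one song by walking the transition graph until END."""
    rng = random.Random(seed)
    song = [start]
    while song[-1] != "END":
        nxt, probs = zip(*transitions[song[-1]].items())
        song.append(rng.choices(nxt, weights=probs)[0])
    return song[:-1]  # drop the END marker

print(sing())  # a syllable sequence obeying the transition rules, e.g. starting 'A', 'B', ...
```

A real analysis would estimate such a table from recorded songs and ask whether first-order transitions suffice, or whether longer histories (the hierarchical structure Bengalese finches apparently perceive) are needed.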

Other model organisms Engber mentions include the zebrafish, traditionally used in genetics experiments and recently added to the list of species with an optogenetics toolkit (others include flies, mice and primates); the sea slug Aplysia, used famously by Eric Kandel to work out the molecular principles of long-term synaptic plasticity and memory; the prairie vole, which gave us the first demonstration of the molecular principles behind "monogamous" relationships; and the fruit fly which, despite being used for everything genetic because its generations are so short, actually accounts for a small fraction of all papers published since the 1950s, according to Engber's analysis.

We do need out-of-the-box thinking in asking the right questions in science, and choosing appropriate (or simply different) model organisms could be one way to start. The animals listed above are good examples of the results that can be achieved by doing so. Archaea also come to mind, since they were among the first organisms to contribute to the booming field of optogenetics. But using the same organism to study a given scientific problem is still a necessary way to standardize all the research coming out every day; tons of data would be uninterpretable if every team used a different organism to perform different experiments. What will the next animal of choice be for neuroscience? Something with decent intelligence and complex neurophysiology, yet not so complex that it becomes criminal to keep it in a cage. Perhaps corvids, with their amazing cognition despite the lack of a six-layered neocortex?


Who's In Charge?

The celebrated cognitive neuroscientist Michael Gazzaniga has a new book coming out soon, Who's in Charge?, about the implications of neuroscientific findings for the law. To promote it, Slate printed an excerpt that asks what it would mean for responsibility and culpability if free will doesn't exist.

The idea that "if determinism is true, then no one is responsible for anything" doesn't have to be true: a person acting criminally is still the most proximal cause of the bad behavior and should be held accountable; the Big Bang isn't to blame for his criminality. Moreover, by this reasoning no one who commits a crime is responsible, because their brain made them do it: if determinism is true, they had no choice but to 'sin'. So why should a seemingly healthy offender go to jail (where he doesn't get rehabilitated and doesn't learn not to repeat his offenses), while one with a brain tumor or schizophrenia is treated medically and perhaps even reinstated into society?

These questions, I think, lead us to conclude that determinism and free will are inappropriate categories for the legal system. If no one has a choice in their behavior, then clinically sick people shouldn't be treated any differently from anyone else; if no one is responsible for their actions, then sick people aren't somehow "less responsible" than others.
What emerges is that those who "can't help" but act criminally (e.g. people with schizophrenia) are treated medically and released back into society when they're healthy again (psychiatry has a whole lot of catching up to do if that is to actually happen; perhaps neuroscience can help?). So why don't we also treat the criminals who appear healthy? If their brain made them kill, then there must be something wrong with it. What I'm driving at is that a judicial system based on retribution doesn't make much sense. Wouldn't we be far better off if we actually fixed criminals? Perhaps that's wishful. Or worse, dystopian.

Automated in vivo patch clamp

Patch clamping is an electrophysiological method for measuring the electrical activity of cells. The setup involves sealing a pulled-glass pipette filled with conductive solution onto a cell membrane and recording the currents that pass through that patch of membrane. The technique is notoriously difficult in cultured neurons or brain slices, and even more so in live animals (ultimately, the most appropriate system to study). Several scientists have recently developed an algorithm to automate the process, thereby reducing the skill level and time commitment required to perform patch clamp experiments in vivo.



Opto-fMRI

fMRI has traditionally been used for mapping the brain and correlating brain function with specific structures. The method has become something of a laughingstock within the electrophysiological community because of the countless studies that proclaim region A to be responsible for function B. A typical blunder goes like this: "Increased activation of the amygdala during a fear conditioning task suggests that the amygdala is the brain's fear center." To be fair, the method is still very useful, and serious scientists don't fall into this fallacy as often as the popular media does. Some outstanding questions are what the measured signal (blood-oxygen-level-dependent, or BOLD) actually means for neural activity; whether it's possible to disambiguate excitation from inhibition; how activation in one region affects connected regions; and what the causal relationships among activated regions are. To address the last two, Ed Boyden and colleagues at MIT used a combination of optogenetics and fMRI (opto-fMRI) in awake mice. The idea is that if you can change the dynamics of a defined population of cells in a localized and fast way (they infected pyramidal cells in mouse somatosensory cortex with ChR2), the network effects of that activation should be revealed by fMRI; the fMRI readout is thereby validated against a known, controlled stimulus. One limitation still inherent to fMRI is its slow temporal resolution - while optogenetic stimulation changes membrane potential with millisecond resolution, fMRI's hemodynamic response unfolds over seconds. Perhaps other imaging methods, like multiphoton imaging, may be used in the future to dissect large-scale circuits in awake animals.

Here is a nice interview with Jeff Lichtman of Harvard, who is working on a cellular-level map of synaptic connections in the brain (a connectome). The interview raises several questions, like: how can we collect thousands of petabytes (millions of gigabytes) of data on the structure of the brain at the level of individual cells? Do we even need so much data? Even though connectomics won't reveal much about neural dynamics (i.e. how neurons actually transmit or integrate information), it should be a useful tool for further work in theoretical neuroscience. Someone has to do it. One caller in the interview asks a great question about the hard problem of consciousness: when scientists look at neuronal activity while someone is thinking of a childhood pet, where in the universe is that image of the dog? All the scientists see, after all, is electrical activity...

Optogenetics to Cure Alzheimer's?

Preparing for SfN 2011, I have to give a shout-out to one of the coolest emerging technologies in neuroscience: optogenetics. Optogenetics, as everyone no doubt knows by now, is a method that allows researchers to control the electrical activity of neurons using light. Scientists infect certain types of neurons with a transmembrane channel protein from algae that allows ions to flow into the cell when light of the preferred wavelength shines upon it. The method has been described well elsewhere (Steve Ramirez waxes poetic about it on the Mind the Gap Junction blog). Optogenetics is an amazing method for many reasons, but mainly because, by allowing us to directly activate or silence neurons, it makes it possible to establish causal relationships in neural circuits: if neuron A is made hyperactive, the mouse runs around in circles; if A is silenced, perhaps the mouse is unable to run in circles; therefore, activity in neuron A causes the mouse to run in circles. This is important because traditional electrophysiological methods only let us record activity without manipulating it directly, and the methods that did allow manipulation (e.g. pharmacology or stimulating electrodes, which are spatially rather crude) have myriad side effects that leave the precise causes of behavior unclear (does TTX act only on sodium channels? Which types? And so on).

As optogenetics becomes more refined and widespread, I can't help but wonder what it will do for the most prevalent neurological diseases. Will this method cure Alzheimer's? How about Parkinson's? Optogenetics promises to show us circuit-level interactions among neurons and perhaps even to nail down the network effects of particular diseases. But if we're looking for cures rather than just fixes, we ought not to forget our molecular biologists and maybe even geneticists. That's not to say that treatments short of cures are worthless - there are, after all, no cures for any brain diseases so far, so anything will be useful. With all this enthusiasm over optogenetics, we have to be honest about its capabilities and limitations.

Fertilized Eggs Are Now People

The people of Mississippi are set to vote on an initiative to amend the state constitution to define personhood as starting at the moment of fertilization. If fertilized eggs are people, then abortion is murder. The ballot measure, which has received widespread bipartisan support and is likely to pass, is rather short and rigid:

Section 33. Person defined. As used in this Article III of the state constitution, “The term ‘person’ or ‘persons’ shall include every human being from the moment of fertilization, cloning or the functional equivalent thereof.”

It's no surprise Mississippians are against abortion; what's interesting is that people still turn to religion to answer seemingly scientific questions like when a human should be considered a person. NPR's Michael Martin covers the topic by interviewing religious leaders:

Now, when voters are asked to consider a weighty moral issue such as this, some turn to their faith for answers, so in a minute we've decided to turn to two religious leaders who are on opposite sides of this question.

Why turn to religious leaders on this issue? What do religious leaders know about life? What evidence do they have (regardless of what their religion is) for when life begins or ends? Or what happens after life ends? None. Zero. In fact, there is as much reason to think that Santa Claus is real as there is for thinking that personhood begins at conception.

Anyway, NPR's idea of fair coverage of the issue is to talk to two pro-life pastors, one for the ballot measure and the other against. The former, pastor Dillon, supports the measure because he was brought up to think that life is precious and sacred, and that god wouldn't approve of murder. Well, yes - life is precious, and murder is in most cases wrong. But why is it that the states that oppose abortion are also the ones with the highest rates of capital punishment? And violent crime (see Richard Dawkins's The God Delusion for evidence for these claims)? And if life is so precious, shouldn't we all be vegetarians? Or not eat at all, considering that plants are living organisms too?

On the other hand, pastor McDonald opposes the ballot because its vague wording wouldn't allow exceptions to the rule in cases of rape or incest, and because he is opposed to big government:

And so this far-reaching arm of the government - I mean we're fighting to get government out of our lives, why would we vote to have government to come more into our personal lives and into our families and into our faith even? These matters should be left to people of faith, to be left to parents or the women and men who are specifically, directly involved in it, and not up to government.

This is actually quite an honest admission that most conservatives won't make - it's very hard to reconcile small government with rigid anti-abortion laws. Why, if some people don't want government telling them what to do, do they want it to tell others what to do? In this sense, I commend pastor McDonald's consistent perspective. The right to choose is both a freedom consistent with conservative beliefs and a governmental protection consistent with liberalism. So why can't people agree?

But the real question is when life begins. Is a clump of cells to be considered a human being? How about an adult in a vegetative state? Neuroscience has the capability to answer these questions definitively, and the answer that's creeping up is that personhood, or mental capacity (the capacity for joy, suffering, pain, etc.), is tightly correlated with the functional complexity of the nervous system. That's why some scientists propose that dolphins and other cetaceans be considered "nonhuman persons." The case of fertilized eggs is unambiguous: they are not people, period. Things get more complicated at later stages of development, and at some point abortion may in fact be murder. The difficulty is that unlike pornography, which we can't define but know when we see it, personhood can be defined (by nervous system function); we're just not sure we know it when we see it.

Practically, the problem is when people choose to believe something in spite of evidence to the contrary (this is called 'faith'). The pastors in the NPR interview think of embryos as people, and think it's wrong to "kill" these "people" because presumably they feel emotional and physical pain as adult humans do. This is simply delusion, to borrow Richard Dawkins' nomenclature. The sad part is that no amount of evidence will persuade these people of the truth (unless of course archeologists dig up long-lost "Neuroscientific Gospels," in which god commands his followers to demand evidence for everything).

This would be funny if it weren't so sad and dangerous: the people of Mississippi are almost certainly going to pass the Personhood Amendment, and they are not alone among the states. Perhaps the best we as scientists can do is to instill in children a desire to think critically and to accept nothing without evidence.

Trendy Lines for Coffee and Depression

"Can drinking coffee help stave off depression?", asks Anahad O'Connor on Tara Parker-Pope's Well Blog at the New York Times. A new paper in The Archives of Internal Medicine from Alberto Ascherio's group at the Harvard School of Public Health shows a dose-dependent trend between daily caffeine consumption and the relative risk of depression in women: women who drink 4-6 cups of coffee daily have a reduced relative risk of becoming depressed compared to women who don't drink coffee. As every psychology student will tell you, "correlation is not causation." And while O'Connor's article is teeming with suggestions that coffee protects against depression (proper language is difficult when it comes to analyzing data), the real offense goes deeper, into Dr. Ascherio's report. The only figure in the paper boasts a P value for a linear trend of 0.02 and some impressively overlapping 95% confidence intervals (C.I.s; roughly, the vertical bars mark the range that would contain the true relative risk in 95% of repeated studies).

There is no doubt that there is a trend in this data set. But with such a wide - and overlapping - spread of the relative risk of depression in each caffeine-intake category, there is almost no chance of a statistically significant difference between any two groups. (It is possible to get P < 0.05 with equal, overlapping CIs if the difference in means exceeds about 3x the standard error, SE; the lower and upper bounds of a 95% CI together span about 4x SE. In this figure, the differences between any two means are clearly smaller than 2x SE, let alone 3x.) The women in this study essentially all have the same risk of depression whether they drink one cup per day or six. Important too is that the relative risk of 1 assigned, by definition, to the no-coffee group (rather, less than 100 mg of caffeine per day) comes without error bars; would the trend still hold if we had a 95% C.I. for the first point? Will we ever see a public health recommendation to drink six cups of coffee daily to reduce the risk of depression by 20%? Correlation-causation notwithstanding, it's hard to imagine how one could be depressed while consuming alarming doses of caffeine (that remark is anecdotal, of course).
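To make the arithmetic in that parenthetical concrete, here is a quick sketch with made-up numbers (not the paper's data). Two equal 95% CIs can overlap and still hide a significant difference when the means are about 3x SE apart, while means under 2x SE apart, as in the paper's figure, are nowhere near significance:

```python
import math

def ci95(mean, se):
    """95% confidence interval: mean +/- 1.96 SE (total width ~4x SE)."""
    return (mean - 1.96 * se, mean + 1.96 * se)

def p_two_sided(m1, m2, se):
    """Two-sided p-value for the difference of two means with equal SEs (z-test)."""
    z = abs(m1 - m2) / math.sqrt(2 * se**2)
    return math.erfc(z / math.sqrt(2))

se = 0.1
# Means 3x SE apart: the CIs overlap, yet the difference is significant.
lo1, hi1 = ci95(1.0, se)
lo2, hi2 = ci95(1.3, se)
print(hi1 > lo2)                  # True: the intervals overlap
print(p_two_sided(1.0, 1.3, se))  # ~0.034, below 0.05

# Means only 1.5x SE apart: not even close to significant.
print(p_two_sided(1.0, 1.15, se))  # ~0.29
```

The moral is the one in the text: "the CIs overlap" and "the difference is not significant" are not the same statement, but when the means sit well under 2x SE apart, both hold.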

Perhaps these issues are simply impossible to avoid in clinical research. After all, correlating something as complex as depression with something as unreliable as self-reported caffeine consumption (over a period of ten years, no less) is quite a challenge. But it is the responsibility of the science journalist to parse the tons of new research that come out daily and to give the public an honest, unhyped account. Will we now have to grapple with caffeine-crazed, depression-wary Times readers?

Wonder and Magic

Pretty snowflakes are nothing but water molecules arranged in special hydrogen-bond patterns. Lady Gaga's love songs squeeze and twist your heart only because your brain may be wired to perceive such chord progressions as sad. And your significant other means the world to you simply because oxytocin and vasopressin molecules interact with the dopaminergic reward system during your intimacies. The list goes on, but this is enough to illustrate the pessimism with which some frame scientific knowledge. The explanation of a feature in terms of its underlying mechanism does not diminish its value. That love is created not by magic but by an awesomely complex machine (the brain) doesn't make it any less wonderful, in the same way that knowing the ingredients and recipe of New England clam chowder doesn't make it less delicious (I'm afraid the same can't be said for foie gras or animelles, for different reasons of course). The danger in thinking that a mechanistic explanation of something seemingly magical is bad is that it may impede scientific progress. If our friends on Capitol Hill decide next week that it's a waste of time to search for the neural basis or evolutionary advantage of music, we may be further deprived of knowledge about the mechanisms of language, emotion and social cohesion. Ignorance may be bliss, but it's not what started the industrial revolution, the space race or the information age; nor will ignorance cure cancer (fruit-fly research in Paris, France might). Aside from the magical parts of human nature, science promises to demystify more sinister ones like violence and racism. What happens if we discover that men have a natural rather than "merely social" tendency to beat their wives? Does that mean science justifies wife-beating? Not a chance. But we do have to be careful with our facts, since some confuse what is with what ought to be, or worse still, what is natural with what ought to be.

With scientific explanations of our nature, we will still have magic in our lives. But we can't go on pretending something is true when it is not. The mystery of music may disappear while you are reading about the brain areas involved in music perception, but it won't fail to creep up on you when you're listening to your favorite Beethoven. The question is how to inform the public about the mechanistic nature of everything without turning them into emotionless robots.



The Hard Problem of Consciousness

You’re lying on a sandy beach on a hot sunny afternoon, enjoying a few hours of much-needed laziness. As you open your eyes and confront the vastness of the ocean in front of you, light of around 470nm wavelength hits your retina, kindling an impossibly long cascade of events in your brain: a molecule called retinal changes shape, neurons fire action potentials down the optic nerve to the lateral geniculate nucleus deep in the brain, which in turn drives action potentials in the primary visual cortex at the back of your head, and so on. At some point, the mechanical wonder of 100 billion neurons working together produces something special: your experience of the color blue. What’s special is not that you can discriminate that color from others, nor that you are aware of it and paying attention to it. It is not notable that you can tell us about it, or assign a name to it. It’s that you have a subjective, qualitative experience of the color; there is something it is like to experience the color blue. Some philosophers call these experiences qualia – meaning “what kind” – but it is not important what kind of experience you are having, just that you are having one at all.

Modern science hypothesizes that subjective experience is a product of the brain, but has no explanation for it. The brain’s building blocks are neurons; their language is the action potential, an electrical impulse that relays information. Sensory molecules pick up information about the outside world and translate it into action potentials. The information is processed among many networks of neurons, and returns to the outside world via signals to muscles, which effect behavior. Somewhere between sensory molecules and muscles, the neurons organize to create systems for memory, attention, global access to information, self-awareness and language. How the brain achieves this feat is largely unknown, but neuroscientists are hard at work trying to elucidate the mechanisms responsible. The philosopher David Chalmers calls these the “easy” problems of consciousness because science has the tools to ask questions about them and, eventually, to solve them.

The easy problems have in common that explaining them requires only specifying a mechanism for the relevant function; once we explain a mechanism by which neurons integrate information, for example, the problem of integration is solved. In contrast, experience, or the existence of qualia, is the “hard” problem of consciousness because it has no obvious function and is completely unmeasurable; science has no way of even proposing hypotheses about it.

Philosophical Zombies

Do you know that feeling you have when you fall in love? Most people describe it as something special, unexplainable, mysterious and wholly wonderful. Scientists describe it in terms of molecules like oxytocin and vasopressin binding receptors on neurons in the midbrain. Surely love is not just a bunch of molecules running wild in your head? Yes and no. The molecules cause one to exhibit seriously strange behavior, like not eating or sleeping, but out of their interactions emerges something more. That something is the feeling itself.

Physical rules and current neuroscientific evidence suggest that the brain should function as it does, but without producing feelings, sensations, or subjective experience; we should be philosophical zombies. Philosophical zombies are hypothetical beings that look and act exactly as humans do, but never actually have first-person qualitative experience of anything.

If a philosophical zombie met a nice girl, he would act as if he were in love. He would talk about his longing and joy, but he would not actually have that qualitative feeling of being in love. Even though they have brains just like ours, philosophical zombies are in essence robots: processing information, reporting mental states, registering pains and emotions, having functional memory, but never actually experiencing anything. There is nothing it is like to be a philosophical zombie; all processing goes on in the “subconscious.” This is exactly what science, in its current state, would predict: all cognitive processing should go on “in the dark,” without a conscious element.

Yet we obviously are not philosophical zombies. The processing that goes on in our brains is accompanied by a subjective experience. This experience is the most intimate thing you know; it’s almost impossible to imagine life without it, and for that reason it is also the hardest thing to question or pinpoint in your own mind. Neuroscience hypothesizes that everything there is to your mind, including this subjective experience, is a product of physical events. But your experience itself is seemingly not physical; there is no known object, energy field, radiation or force that is your subjective experience. All we can measure are molecular events and electrical interactions among neurons. So where does experience come from, and how can we study it?


The answer may be found in the concept of emergence. From the interactions of many simple parts sometimes emerges a behavior or property that cannot be predicted from, or reduced to, the properties of the constituents. One such unexpected property arises from the simple behavior of individual ants, which produces a complex “society” whose properties cannot be predicted from the behavior of any single ant. In fact, adding up the contributions of all individual ants does not produce an effect equal to that of the ant colony as a whole. Other examples of emergence include snowflakes, which assemble out of interactions among water molecules at low temperatures; temperature itself, which emerges from molecular kinetics; the stock market, which has no central planning or regulation; human society; and subjective experience.
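Emergence from simple local rules is easy to demonstrate in miniature. The sketch below is my illustration, not something from the original post: it implements Conway’s Game of Life, where every cell obeys two trivial neighbor-counting rules, yet a five-cell “glider” emerges that walks diagonally across the grid, a behavior stated nowhere in the rules themselves.

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.

    `live` is a set of (x, y) cells. A live cell survives with 2 or 3
    live neighbors; a dead cell with exactly 3 live neighbors is born.
    """
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# The classic glider. Nothing in the two rules above mentions "movement,"
# yet after every 4 generations this pattern has shifted one step diagonally.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)
# after4 == {(x + 1, y + 1) for (x, y) in glider}
```

The point of the toy is the same as the ant colony’s: inspecting one cell’s rule tells you nothing about gliders, just as inspecting one neuron may tell you nothing about experience.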

Subjective experience is an emergent property of the brain. As such, it cannot be predicted from our current knowledge of the brain, or reduced to the brain’s constituent parts. Individual neurons are not aware of anything at all, but 100 billion of them working together are.

Modern neuroscientists aim to peek into the brain at ever higher spatial and temporal resolutions, with the goal of recording the electrical activities of vast numbers of neurons. Once they have recorded the activity, the thinking goes, the only remaining task will be to find out what the activity does. This logic is enticing, but falls short of explaining the entirety of the brain’s features. One problem is that the entity that emerges, subjective experience, is qualitatively different from neurons and their activities, just as society emerges from interactions among individuals but is qualitatively different from any individual. Moreover, if we were to describe the activities of all the individuals that comprise society, we would get no information about society; we would get noise from all the opposing actions. Likewise, if we describe the activities of all the neurons in the brain, all we get is the activities of all the neurons in the brain.

An additional barrier is that subjective experience is closed off from outside observation. The contents of your experience are available only to you, and scientists have no way of collecting the data of experience directly. While some neuroscientists are satisfied with collecting first-person data via verbal (human subjects) or behavioral (animal subjects) reports, the fact is that as soon as the subject translates first-person experience into a report, the data become third-person.

If aliens discovered earth, they would have no way of knowing that humans had anything going on between their ears beyond electricity and chemistry. This is why neuroscience is so exciting: the most magical machine in the universe is in your head, and we have the opportunity to find out what makes it so special. As neuroscience attracts increasing amounts of talent and funding, we must not forget the most mysterious, least tangible question about the brain.

Is is not Ought

The killing of Osama bin Laden and the ensuing controversy over the widespread jubilation in the U.S. have prompted some scientists to explain the psychological and evolutionary basis for those celebrations. Unfortunately, some of them used science to argue that since joy in this situation is natural, it is also morally good. Regardless of one’s view on the appropriateness of celebrating a killing, the fact that it is natural to do so has no bearing on whether it is moral or right. Science explains human behavior in terms of neural networks and evolutionary adaptations, but it does not excuse us from taking responsibility for those behaviors. Just as promiscuity may be a natural temptation for males in a monogamous society without thereby being morally acceptable, natural joy over the killing of an evil man is not necessarily good either. Our values come from philosophy, not empirical evidence. Or, as Hume wrote, what is is not necessarily what ought to be.

That does not mean that science and morality have nothing to do with each other. While it is illogical to justify a value using scientific facts, as some did with the celebrations of Osama bin Laden’s killing, it is perfectly reasonable to use science to optimize the practice of an established value. If we deem it unacceptable to celebrate killing, we may use neuroscience to adjust educational techniques that instill that value in our children.

Most importantly, science writers have a responsibility to separate facts from values: to distinguish what behavior is natural from what behavior is acceptable. They must be careful to note that a materialistic basis for mental events does not relieve us of responsibility for our actions. At the same time, readers must be wary of those who try to use scientific evidence to justify a moral agenda. Science alone will never be a basis for our values, but if used properly, it can help us realize the values we choose.


Jonah Lehrer on revenge

Benedict Carey on Celebrating Death

"Why we celebrate a killing," by Jonathan Haidt and my response.


New Class of Cognitive Enhancers to Transform Mankind

Scientists at Bewundgen University in Germany discovered that a diet rich in petrolatum, a mixture of hydrocarbons, can greatly improve performance on a wide variety of cognitive tasks. The research, led by neuroscientist Dr. Hans Schweinstucken, followed three groups of human subjects for over a year. The first group was instructed to eat regularly, but also to consume 500 grams of petrolatum per day, in the morning after breakfast. The second group was given an energy-deficient supplement of sugar substitutes, and the third was given nothing at all. All groups were tested periodically on tasks of memory, abstract thinking, cognitive speed, and general agility. To their surprise, the researchers found that regular consumption of petrolatum improved subjects' recall, memory retrieval and abstract thinking while reducing overall agility, motivation and ability to make decisions. In contrast, the group eating sugar substitutes performed significantly worse over time on tests of memory and abstract thinking, with 50% of the subjects hitting an all-time low of 25% correct responses on recall (versus their performance prior to the experiment).

Dr. Schweinstucken speculates that the first group's reduced motivation and agility may have something to do with their major weight gain, which by itself remains a mysterious side effect. As for the mechanism of action, Dr. Schweinstucken proposes that petrolatum acts on inhibitory GABAergic interneurons in the neocortex, the brain region thought to be important in higher cognition, antagonizing GABA action and thereby reducing overall levels of inhibition in the brain. However, he warns that at doses higher than 500 grams per day, petrolatum may actually have a detrimental effect on cognition, because it may saturate GABA receptors and the corresponding neurons, causing massive seizures; he is currently conducting experiments to test this hypothesis.

Meanwhile, for all you folks who have exams to study for, I recommend a trip to your local CVS, where petrolatum is sold over-the-counter as "Vaseline," or petroleum jelly.

Further reading: Schweinstucken et al. Petrolatum improves cognitive performance in humans. J Psycho Chemo Physio Med. 2011, April 1.


They can’t stop talking about her. “Look at how popular and successful she is!” “Look at how stupid and ditsy she is!” “What has she done to be so famous?” … Well, I don’t care if she’s smart or stupid, rich or poor. The only things I see when she’s on the screen are those voluptuous curves. Regardless of what you think of her, Kim Kardashian has what most men dream of. Since this is a nerds’ blog, we’re going to take a moment to examine why we men like those curves so much.

Men like women with large curves because these provide an adaptive advantage, increasing the likelihood of the propagation of genes. Wide hips are adaptive because they make childbirth easier (more successful); large breasts may provide more nutrition during nursing. The men who go for the curves are more likely to make successful offspring; those offspring incidentally share the same instinct for curves and eventually make more progeny; and the cycle continues. Now, Kim Kardashian is what you call a supernormal stimulus: she has everything that normally elicits a positive response, but exaggerated. The term “supernormal stimulus,” by the way, is attributed to the famous ethologist Niko Tinbergen, who found that substituting a stick with three red spots for a mamma-seagull’s white beak with its one red spot made the chicks way more excited for food. Many more such examples have been described in a variety of animals.

But anyway, I am a male and my primitive brain can’t help but love Kim Kardashian. One could say the male brain is predisposed, or hard-wired, to love curves like Kim’s. Actually, some folks are still amazed to hear that there are neural correlates of this or that; you see it in the news all the time (“scientists have now found the brain mechanisms behind gambling,” or social anxiety, or enhanced hearing in the blind; the list goes on). There won’t be any behavior, feeling or thought without neural correlates. I dare you to show otherwise.

In an article on love and the brain, Psychology Today columnist Marnia Robinson describes the neural mechanisms that make prairie voles (rodents similar to mice) pair bond, or stay as a couple for at least one round of mating. It has to do with the distribution of oxytocin receptors, which makes the vole associate its mate with the dopamine reward pathways, meaning that a couple stays together (“in love”) long enough to raise some pups. Marnia notes that we, like the voles, are “programmed to pair bond—just as we're programmed to add notches to our belts.” In another post in her column, she drives the point home:

“Pair bonding is not simply a learned behavior. If there weren't neural correlates behind this behavior, there would not be so much falling in love and pairing up across so many cultures. The pair-bonding urge is built-in and waiting to be activated… The vital point is that our pair bonding penchant arises from physiological events, not mere social conditioning… So, even though many Westerners appear to be caught up in a chaotic hook-up culture for the moment, it doesn't mean that we humans are, by nature, as promiscuous as bonobo chimps or that pair-bonding inclinations are superficial cultural constructs.”

What Marnia means is that committed relationships (perhaps marriage, too) are natural, and therefore you don’t have to worry that everyone you know is only interested in hooking up; since people should prefer committed relationships, eventually they’ll all settle down and all will be right in the world. I hope you will forgive me for interpreting Marnia’s writing as a promotion of marriage and an attack on hook-up culture (after all, the title of her post is “Committed Relationship: Like It Or Not, You’re Wired For It”). Humans have a genetically based neural system that enables them to fall in love and pair bond (again, it shouldn’t be surprising that we have a neural system for this; the only question is what roles genes and environment play in it). But just because it is there doesn’t mean it is 100% deterministic.

It’s true that in some species the best strategy for gene propagation is for the couple to share the responsibility of child rearing. Evolution favors individuals with the monogamy instinct, and it just so happens that monogamous relationships feel good to them. What Marnia is driving at is that you have no choice but to end up in a committed relationship, because your brain is “wired for it.”

Is that really true? Decision-making can be described as synaptic integration of relevant inputs according to their weights, or importance. Unless you are a cocaine addict running on empty, the factors going into most decisions have fairly weighted synaptic representation (i.e., a crack-head’s brain won’t allow factors other than crack to have a big vote in the decision-making congress). Just because a brain is predisposed toward some trait or behavior doesn’t mean that that trait is 100% deterministic. This idea of relative cognitive liberty doesn’t even invoke free will; the decisions you make are based on the brain’s wiring, your previous experiences, probability, etc., not some soul that does what it wants.
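The “decision-making congress” metaphor reduces to simple arithmetic. The toy below is my own sketch, not a model from the neuroscience literature, and every factor name and weight in it is invented for illustration: each input casts a vote proportional to its synaptic weight, the largest total wins, and an addict’s brain is just one where a single input’s weight swamps every other vote.

```python
def decide(options):
    """Pick the option with the largest weighted sum of its inputs.

    `options` maps each choice to a list of (factor_strength, synaptic_weight)
    pairs; the weighted sum is that choice's total "vote" in the congress.
    """
    totals = {
        choice: sum(strength * weight for strength, weight in factors)
        for choice, factors in options.items()
    }
    return max(totals, key=totals.get)

# A fairly weighted brain: several factors, comparable weights.
# (All strengths/weights below are made-up numbers for illustration.)
balanced = {
    "stay_home": [(0.6, 1.0), (0.4, 0.8)],   # comfort, saving money
    "go_out":    [(0.7, 0.9), (0.5, 0.7)],   # fun, seeing friends
}

# An addicted brain: one input carries an overwhelming weight, so the
# other factors barely register in the vote.
addicted = {
    "seek_drug": [(0.9, 10.0)],
    "eat_food":  [(0.8, 1.0), (0.7, 0.9)],
}
```

Here `decide(balanced)` is a close call decided by small differences in the totals, while `decide(addicted)` is a foregone conclusion: predisposition shows up as skewed weights, not as the removal of the other votes.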

And why does it matter that monogamy is the “natural” thing to do? Who cares what we are by nature? Last I checked, by nature dudes could be expected to throw themselves at every cake, cookie, jar of peanut butter and sexy lady they see. Haven’t witnessed that recently at the local Shaw’s… And it wouldn’t matter if “society” were “making” us do that; we control society! We choose what’s acceptable. If I want to sleep around instead of getting married, that’s my choice! (Isn’t it ironic how the conservative right, which always worries about threats to personal freedom, tries to deny personal freedoms in the name of traditional values?)

That doesn’t mean we can ignore our nature; we do have innate mechanisms that pull or push us in different directions. I don’t love Kim Kardashian because I chose to, but because as a man I have certain preferences built in. But here’s the catch: just because I think Kim is attractive doesn’t mean I’m going to ditch my girlfriend and hop on the next plane to Hollywood. I can control myself and stay in a meaningful relationship; I can inhibit this reptilian instinct. Likewise, not every man prefers Kim to someone with a flatter topology. We do have innate preferences, but they have different impacts on what we do and how we feel. Next time you see a headline about the genetic basis or experience-driven neuroplasticity of some trait or other, be wary: not everything is as intensely deterministic as the neuropundits would have you believe. For now, stay content that you can enjoy Kim Kardashian’s curves without committing any social faux pas.


Nigel Barber. The evolutionary psychology of physical attractiveness: Sexual selection and human morphology. Ethology and Sociobiology. 1995, 16(5): 395-424.

Xiao-Jing Wang. Decision Making in Recurrent Neuronal Circuits. Neuron. 2008, 60(2): 215-234.

Committed Relationship: Like It Or Not, You’re Wired For It

Human Brains Are Built to Fall in Love

Here's a great video summary from Nature on the recent advances in the field of connectomics by researchers at the Max Planck Institute in Germany and Harvard University:

The video on Nature Blogs

And the original research, here.

My previous post on Connectomics.

Moral Code

Why is it wrong to kill babies? Why is it wrong to take advantage of people with intellectual disabilities? To lie with the intention of cheating someone? To steal, especially from the poor? Is it possible that medieval European society was wrong to burn women suspected of witchcraft? Or did it save mankind from impending doom by doing so? Is it wrong to kick rocks when you’re in a bad mood? Questions of right and wrong such as these have for millennia been answered by religious authorities who refer to the Bible for guidance. While the vast majority of people still turn to Abrahamic religious texts for moral guidance, there are some other options for developing a moral code. Bibles aside, we could use our “natural” sense of what’s right and wrong to guide our actions; a code based on this natural sense would come from empirical studies of what most people consider to be right or wrong. Ignoring the logistics of creating such a code, we should note that its rules would have no reasoning behind them other than “we should do this because this is what comes naturally.” How does that sound? Pretty stupid.

The other option is to develop a moral code based on some subjective metaphysical ideas, with heavy backing from empirical facts. “Subjective” means these ideas won’t be undeniable; they are what they are, and that’s it. Take as an example a rule such as “we should not kill babies.” There is no objective, scientific reason why we shouldn’t kill babies. Wait, you say, killing babies is wrong because it harms the proliferation of our species and inflicts pain on the mothers and the babies themselves! But why should we care about the proliferation of our species? About hurting some mother or her baby? While no one will deny that we should care about these things, there is nothing scientific that explains why. Science may give us a neurological reason why we care about species proliferation (it will go something like, “there is a brain region that makes us care about the proliferation of our species”), but why should we be limited to what our brains tend to make us think or do?

Subjective rules like these must therefore be agreed upon with the understanding that they are subject to change. Interestingly, some argue that science can answer moral questions because it can show us what “well-being” is, how we can attain it, and so on. But the scientific reason why we should care about well-being is nowhere to be found. The result is that we can use science to answer moral questions, but we first have to agree (subjectively) that we want well-being. Science by itself cannot answer moral questions because it shows us what is rather than what ought to be. (Actually, Sam Harris is the only one to argue that science can be an authority on moral issues; his technical faux pas is an embarrassment to those who advocate “reason” in conduct.)


But more on the idea of metaphysically constructed moral codes. What properties should such a code have, and how should we go about synthesizing it? Having one rigid source as an authority for moral guidance is dangerous. Make no mistake: there must be some authority on moral questions, but it must be flexible and adaptable; it must be able to stand the test of time on the one hand, and to adjust to novel conditions on the other. This sounds a lot like the Constitution of the U.S. But even with such a document as the Constitution, which has provided unity and civil progress since the country’s founding, there are some who take its words literally and allow no further interpretation; if it’s not written in the Constitution, it can’t be in the law, they argue (see Strict Constructionism versus Judicial Activism). These folks also tend to be rather religious (read: they spend a lot of time listening to stories from the Bible; not to be confused with “spiritual” people or those of religions other than the Abrahamic ones). So while we must have a moral code, it must be flexible (i.e., able to change with time), and we must seek a balance between literal and imaginative interpretations, just as we do with the U.S. Constitution.

Why and how is a rigid moral authority dangerous? Our authority must change with time because new developments in our understanding of the world must update how we interact with others. For example, if science finds tomorrow that most animals have a brain structure that allows them to feel emotional pain in the same way that humans do, we will have to treat them with more empathy; research on dolphin cognition has recently led scientists to campaign for dolphins to be considered and treated as nonhuman persons. Furthermore, if we never explain the reasoning behind our moral rules, we won’t understand why we follow them and therefore won’t know why violating them is bad. This unquestionability of God as moral authority, or of the Strict Constructionists as law-makers, is what makes them particularly dangerous and leads to prejudice and ignorance. Our moral code must therefore be based on empirical research, with every rule subject to intense scrutiny (think of two-year-olds who keep asking, “but why?”).

But why should we have a moral code in the first place? Perhaps if everyone followed a moral code of some sort, the world would have fewer injustices and atrocities. Getting people to follow a moral code of any kind is a completely different issue.

Sam Harris gets it wrong.

Nonhuman Personhood for Dolphins

Cetacean Cognition

Mirror Self-Recognition in Dolphins

Witches are immoral and should be burned