Science/Philosophy

Runaway Selection in Birds of Paradise

I watched a program on PBS the other night about birds of paradise – exotic birds from New Guinea with elaborate displays. To attract females, males have evolved intricate feathers, courtship dances and rituals. A Parotia male, for example, will clear out a dancing ground, and when a female is in sight, he will puff up the feathers around his chest into a sort of collar reminiscent of Italian Renaissance nobility (or perhaps closer to a ballerina’s skirt), with bright iridescent feathers forming a shield below his neck, while the long quills on his head, which usually point lazily toward his rear, stick straight up in a semicircle around his head.

A couple of weeks ago in the New York Times, David Ewing Duncan wrote an article, “How Science Can Build a Better You,” describing a brain-machine interface called Braingate that supposedly uses a tiny bed of "electrons" to read out brain activity. Scientists recently described this device’s ability to decode neural signals to control a prosthetic arm; this and other devices promise to restore mobility to paralyzed or tetraplegic patients. However, the Braingate device actually uses an array of electrodes, not electrons. An electron is a subatomic particle that carries negative charge; the flow of electrons is the basis of electric current. Electrodes, on the other hand, are the wires used to measure changes in electrical potential.

While the spelling difference is trivial, the semantic error is significant. Writing about science is a challenge for those who have no training in science, as is copy-editing it; the complexity of science should require journalists to reach a level of expertise in their field before bringing their reports to the world. On the other side, American readers should have enough basic science education to know the difference between electrodes and electrons, and should not risk being branded as nerds for pointing out such mistakes. Investment in early childhood education is critical for basic science literacy, and the upcoming presidential election will help determine whether Americans choose “electrodes” over “electrons”.

Lost Thoughts in the Wake-Sleep Transition

I've been meaning to write about this curious phenomenon I experience every time I go to sleep. Lying in bed last night, I was thinking about a movie I had just finished watching - The Aviator, a great movie! - and was overcome by a sudden frustration: some idea that had been running through my mind had simply vanished, replaced by something silly and mundane. Trying desperately to remember what I had just been thinking about, I could find no trace of my thoughts. It was as if they were never recorded. This happened several times, until I finally gave up and fell asleep. Even more perplexing is that I am aware of those lost thoughts; I know something is missing, I just can't remember what it was.

If these aren't freak phenomena, one can imagine something in the wake-sleep transition that messes with short-term memory. It's as if whatever network or assembly represents the would-be memory doesn't undergo the short-term plasticity necessary to "solidify" those connections. This is of course overly simplistic and probably misleading language, but it's one way to think about it. Perhaps this can be (or has been?) analyzed in rats, along the lines of the "replay," or reactivation, of experienced events in the hippocampus during sleep, as in this paper by Matt Wilson of MIT. One could examine lost thoughts in the wake-sleep transition by comparing the temporal structure of activity during that transition with the same activity during experience on a maze, for example. Perhaps this loss of thought depends on some subcortical "kick" that's absent during sleep. Just a thought.

The Daily Show aired a special report by Aasif Mandvi on "an expensive lesson about bringing fish back to life," or the dangers of leaving children with the ability to make purchases on the Apple App Store. The point of the story is that children can't inhibit behavior as well as adults can because of their underdeveloped frontal cortices, and are therefore vulnerable targets for those whose sole purpose is to make easy money, not unlike drug dealers selling to addicts who just can't help themselves:

[Video: The Daily Show, "Tap Fish Dealer" - www.thedailyshow.com]

Christopher Hitchens writes in the January edition of Vanity Fair about what he believes to be a nonsensical maxim: "What doesn't kill me, makes me stronger." Hitchens is suffering from esophageal cancer, which is why he feels he is not becoming "stronger" but is rather in terminal decline. The phrase is attributed to Nietzsche, whose mental decline late in life, Hitchens notes, probably did not make him any stronger. Nor did the philosopher Sidney Hook consider himself stronger after a terrible experience in a hospital. Hitchens counts himself among the many who don't conquer illness and come out stronger. But there is a flaw in this reasoning - the first condition of becoming stronger is not being killed. Hitchens is thankfully still alive and kicking (i.e. writing), but he hasn't defeated his cancer (yet, hopefully); only after the cancer is over with can Hitchens say whether he's stronger or weaker. Saying so now is premature. The more important qualification is that "stronger" should mean mentally stronger, not physically. Diseases that target the mind specifically, like Nietzsche's syphilis, should be discounted; all others should, one hopes, be an exercise of willpower and mental fortitude.

Whenever you think life is hard, remember Hitchens and countless others who brave horrible diseases. Stay stark, Hitch!

Hitchens's essay may be found here.

Brainy Computers

“We’re not trying to replicate the brain. That’s impossible. We don’t know how the brain works, really,” says the chief of IBM's Cognitive Computing project, which aims to improve computing by creating brain-like computers capable of learning in real time and consuming less power than conventional machines. No one knows how the brain works, but have the folks at IBM tried to figure it out? It seems strange to say that it's impossible to replicate the brain, especially coming from a man whose blog's caption reads, "to engineer the mind by reverse engineering the brain." Perhaps I'm nitpicking his words - replicating and reverse engineering are different things; to replicate is to copy exactly, while reverse engineering isn't as strict, since it's concerned with macroscopic function rather than microscopic structure. But of all the things that seem conceptually impossible today, "engineering the mind" takes the prize, especially if one can't "replicate the brain." The chances of engineering a mind are greater the closer the system is to the brain; that's why my MacBook, to my continual disappointment, does not have a mind.

These little trifles haven't stopped Darpa from funding IBM and scientists elsewhere. IBM now boasts a prototype chip with 256 super-simplified integrate-and-fire "neurons" and about a thousand times as many "synapses." This architecture is capable of learning to recognize hand-drawn single-digit numbers. Its performance may not be optimal, but it is still impressive considering the brain likely allocates far more neurons (and far more complicated ones) to the same task. On another front, the group reported using a 147,456-CPU supercomputer with 144TB of main memory to simulate a billion neurons with ten thousand times as many synapses. Now if only they could combine these two efforts and expand their chip from a couple hundred to a billion neurons.
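
For the curious, an integrate-and-fire "neuron" really is super-simplified. Here is a minimal Python sketch of the standard leaky integrate-and-fire model - a generic textbook version with arbitrary parameter values, not IBM's silicon implementation:

```python
import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=0.02, v_rest=-0.070,
               v_reset=-0.075, v_thresh=-0.054, r_m=1e8):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: array of injected current (amps), one value per time step.
    Returns the membrane voltage trace and the indices of spikes.
    """
    v = v_rest
    voltage, spikes = [], []
    for i, i_inj in enumerate(input_current):
        # The membrane leaks toward rest while integrating injected current.
        v += (-(v - v_rest) + r_m * i_inj) / tau * dt
        if v >= v_thresh:      # threshold crossed: emit a spike...
            spikes.append(i)
            v = v_reset        # ...and reset the membrane voltage
        voltage.append(v)
    return np.array(voltage), spikes

# Example: 500 ms of constant 0.2 nA input produces regular spiking.
current = np.full(5000, 0.2e-9)
v_trace, spike_times = lif_neuron(current)
print(f"{len(spike_times)} spikes in 0.5 s")
```

Everything interesting about a real neuron's biophysics is collapsed into a leak, a threshold and a reset, which is what makes these units cheap enough to put on a chip by the hundreds.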

Dancing for Science

Science is difficult to understand and even more difficult to explain. John Bohannon thinks that words are inept at explaining scientific concepts and should stay out of the way. PowerPoint is useless too. Instead, Bohannon argues, scientific concepts should be explained with dance. He foresees a boost to the economy if dancers were hired as aides to presenters, not only because those dancers would have jobs, but because science would be communicated more effectively, leading to more innovation. Bohannon presents these ideas in an engaging TEDx talk, with the help of the Black Label Movement dance team. No doubt, seeing people dance out cellular locomotion is fun and more straightforward than hearing a verbal description of the same thing. I wonder, though, whether such concepts would be more accurately portrayed and easier to understand through animations. Perhaps there is something about seeing people perform live that is more engaging than seeing animations or the same performance on a screen. If that's true, then having dancers at one's presentations would be very helpful (it would also make the presentation stand out, if no one else has dancers).

Descriptive vs. Predictive Models

When we look back at the important advances in neuroscience in the 20th and 21st centuries, what will we remember? What will we still find useful and worth pursuing further? The field is still in its nascent stages, even a century after Ramon y Cajal showed evidence for the neuron doctrine, establishing the neuron as the fundamental unit of the nervous system, and Brodmann published the cytoarchitecture studies that convinced the world that the brain is divided into distinct areas and likely uses them to divvy up processing. Yet we still have virtually no clue how the brain works: there is no central theory and no cures for brain diseases; only a whole lot of curious, enthusiastic and optimistic minds, and some funding to help them get stuff done. And it is only right that some neuroscientists have serious physics envy, which pushes them to develop predictive models that (sometimes) give important insights into what mother nature did to make the brain work.

A great example of this is the Hodgkin-Huxley model of the action potential. When Hodgkin and Huxley created the model in the early 1950s, biologists had little clue as to how cells generated such complex waveforms. Having observed conductance changes across the cell membrane during the action potential, Hodgkin and Huxley went on to show that the conductances were ion-selective and behaved as functions of time and membrane potential. They then predicted that whatever was mediating the conductance of ions had to be voltage-sensitive and capable of fast molecular changes. This work set off a wide search for the ion conductors, which turned out to be voltage-gated sodium and potassium channels. The key point is that the model predicted something. Fast forward to 2011, and we still don't have a greater success story for predictive models than the Hodgkin-Huxley model.
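
For reference, the heart of the model is one membrane equation plus three voltage-dependent gating variables; this is the standard textbook form rather than a quotation from the 1952 papers:

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}})
  - g_L\, (V - E_L)

\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
  \qquad x \in \{m, h, n\}
```

The gating variables m, h and n were fits to voltage-clamp data, but their exponents and voltage-dependent rate constants amounted to a claim about unseen molecular machinery - which is exactly what made the model predictive rather than merely descriptive.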

Neuroscientists today are gathering data by the terabytes, describing amazing properties of neurons and networks, and moving on to the next experiments. A typical electrophysiological experiment, for example, involves electrode recordings of populations of cells, and describes the cells' firing properties while the brain is engaged in some behavior or other. What we need is to be able to make predictions about major principles based on the information we've gathered over the last century. For example, given that we observe gamma rhythms during object recognition, can we predict not only whether gamma is required for that task, but how it helps the brain achieve it? Given these observations, can we predict what features the brain must have to accomplish such tasks? If we predict that cortical connections constitute "small world" networks, can we understand the rules for wiring better? Better yet, can we infer what the wiring rules must be? As we develop ever more sophisticated tools to study the brain, we should have an easier time making such predictions. We have to step up to the plate in the 21st century and produce theories that do more than describe what we see. These theories have to capture not only the complexity of the system but also the relative simplicity with which the system is created.
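
As a toy example of the kind of prediction one could actually check, here is a Python sketch (using the networkx library; all parameter values are arbitrary) comparing a Watts-Strogatz small-world graph to a random graph on the two signatures that a "small world" wiring claim hinges on - high clustering with short path lengths:

```python
import networkx as nx

n, k, p = 1000, 10, 0.1  # nodes, neighbors per node, rewiring probability

# Small-world graph: mostly local wiring plus a few long-range shortcuts.
sw = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
# Random graph with the same number of nodes and edges, for comparison.
rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=1)

def describe(name, g):
    # Path length is computed on the largest connected component, in case
    # the random graph happens to be disconnected.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name}: clustering = {nx.average_clustering(g):.3f}, "
          f"mean path length = {nx.average_shortest_path_length(giant):.2f}")

describe("small-world", sw)
describe("random", rnd)
# A small-world network keeps the high clustering of local circuits while
# retaining nearly the short path lengths of a random graph.
```

If cortical wiring really is small-world, clustering coefficients and path lengths measured from anatomical data should land in the first regime rather than the second - a concrete, falsifiable statement about wiring rules.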

Whale Song Analysis Crowd-sourcing

Scientific American is collaborating with marine scientists on a project to crowd-source the analysis of whale songs and calls. Having gathered thousands of sound files from many species of whales, scientists now need to classify each call and song to get an understanding of each species' repertoire. Once the calls and songs are sorted and classified, scientists can pursue interesting questions like: is a whale's song repertoire related to its intelligence? To classify the vocalizations, scientists are asking the public for help. On whale.fm, anyone (no expertise required) can sift through spectrograms and embedded sound files and match them to a template. It's easy, fun and cool. Something that would take one person months or years to do can now be accomplished much faster by the public, in a fun format.
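
The matching task itself is conceptually simple. Here is a rough Python sketch of comparing a call's spectrogram to a template - the file names are hypothetical, and the bare-bones normalized correlation is mine, not whatever whale.fm runs behind the scenes:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def call_spectrogram(path):
    """Load a sound file and return its spectrogram (log power)."""
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                 # mix stereo down to mono
        samples = samples.mean(axis=1)
    _, _, power = spectrogram(samples, fs=rate, nperseg=1024)
    return np.log(power + 1e-12)

def similarity(a, b):
    """Normalized correlation between two spectrograms."""
    n = min(a.shape[1], b.shape[1])      # truncate to the shorter call
    a, b = a[:, :n].ravel(), b[:, :n].ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical files: an unidentified call and a candidate template.
# score = similarity(call_spectrogram("call.wav"),
#                    call_spectrogram("template.wav"))
# A score near 1 suggests the call matches the template.
```

Humans, of course, do this kind of pattern matching effortlessly by eye and ear, which is exactly why the crowd-sourcing approach works.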

There have been previous efforts in scientific crowd-sourcing, like Foldit, a game in which people fold proteins based on simple rules (something computers still can't do well), or the search for new galaxies by amateur astronomers in images taken by the Hubble telescope. Perhaps this type of effort could help the Connectome projects, which aim to map the brain down to each synapse using electron microscopy, where every neurite in a cross-sectional image must be linked to itself in the adjacent images. Tracing axons across thousands of EM images could actually make a fun and productive game.

What are experts for?

It is no mistake that philosophers from Plato to Hume to Adam Smith have advocated the division of labor as a driving force of society and its economy. The more specialized one's labor, the more advanced the resulting product. The more expertise people gain in their distinct fields, the more they are able to advance those fields. This is as true for labor as it is for science (in general; there is a lot to be said for multi- and cross-disciplinary approaches and the out-of-the-box thinking that specialization usually dampens, but on the whole it is undoubtedly more advantageous to specialize in a field than not). Scientific experts are the people society relies on to advance knowledge and establish facts; they are the people we go to when we need answers. Here is why we need them: Bret Stephens is a journalist and Wall Street Journal columnist whose training is, supposedly, in journalism and maybe economics (Wikipedia, which never lies, says he attended the London School of Economics). In his column today, Bret Stephens writes about global warming. Entitled "The Great Global Warming Fizzle," the article compares climate change science to a religion - and a dying one at that - whose adherents are "spectacularly unattractive people" and whose "claims are often non-falsifiable, hence the convenience of the term 'climate change' when thermometers don't oblige the expected trend lines."

Now if Bret Stephens were an environmental scientist with proper training, his criticism of climate science would be worth hearing. But just as we don't take physics advice from members of the Taliban (Sam Harris's favorite example), we shouldn't take climate change advice from Bret Stephens. Has he seen the data in question? Would he know how to interpret it? Would he draw the same conclusion if money and government intervention were not factors? The worry is that he is not fulfilling his role as a journalist, in which he is expected to provide fair interpretations of the science and policy to a public who are not experts. Instead of facts, we get an opinion piece on something Bret Stephens has no expertise in.

Here's the dilemma for those who care - is it better to ignore the vocal people who don't know what they're talking about, or to correct them and spread the right message? The latter would be the far more active and constructive choice, but it would have to be proactive rather than reactive, as this post is.

Who's In Charge?

The celebrated cognitive neuroscientist Michael Gazzaniga has a new book coming soon, Who's In Charge - about the implications of neuroscientific findings for the law. To promote it, Slate printed an excerpt that asks what it means for responsibility and culpability if free will doesn't exist.

The idea that "if determinism is true, then no one is responsible for anything" doesn't have to hold: a person acting criminally is still the most proximal cause of the bad behavior and should be held accountable; the Big Bang isn't to blame for his criminality. Moreover, by this reasoning, no one who commits a crime is responsible, because their brain made them do it: if determinism is true, they too had no choice but to 'sin'. So why should a seemingly healthy offender go to jail (where he doesn't get rehabilitated and doesn't learn not to repeat his offenses), while one with a brain tumor or schizophrenia is treated medically and perhaps even reinstated into society?

These questions, I think, lead us to think of determinism and free will as inappropriate frameworks for the legal system. If no one has a choice in their behavior, then clinically sick people shouldn't be treated any differently from anyone else; if no one is responsible for their actions, then sick people aren't somehow "less responsible" than others.

What emerges is that those who "can't help" but act criminally (e.g. schizophrenics) are treated medically and released back into society when they're healthy again (psychiatry has a whole lot of catching up to do if that is to actually happen; perhaps neuroscience can help?). So why don't we also treat those criminals who appear healthy? If their brain made them kill, then there must be something wrong with it. What I'm driving at is that a judicial system based on retribution doesn't make much sense. Wouldn't we be far better off if we actually fixed criminals? Perhaps that's wishful. Or worse, dystopian.

Automated in vivo patch clamp

Patch clamping is an electrophysiological method used to measure the electrical dynamics of single cells. The setup involves attaching a pulled-glass pipette filled with conductive solution to a cell membrane and recording the currents that pass through that patch of membrane. The technique is notoriously difficult in cultured neurons or brain slices, and even more difficult in live animals (which are, ultimately, the most appropriate system to study). Several scientists have recently developed an algorithm to automate the process, reducing the skill level and time commitment required to perform patch-clamp experiments in vivo.
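
The basic trick, as I understand it, is that the pipette's resistance jumps when the tip contacts a cell, so software can detect "neuron hunting" events that an experimenter would otherwise judge by eye. Below is a rough Python sketch of that detection loop; the hardware hooks (measure_resistance, step_down) are entirely hypothetical stand-ins for the amplifier and motorized manipulator the real system controls:

```python
import statistics

def hunt_for_neuron(measure_resistance, step_down, max_steps=200,
                    window=5, jump_fraction=0.10):
    """Lower the pipette in small steps until its resistance rises,
    which signals contact with a cell membrane.

    measure_resistance: callable returning pipette resistance (megaohms).
    step_down: callable that moves the pipette down a few micrometers.
    Returns the step index at which contact was detected, or None.
    """
    baseline = [measure_resistance() for _ in range(window)]
    for step in range(max_steps):
        step_down()
        r = measure_resistance()
        # A sustained rise above the running baseline suggests the tip is
        # pressing against a neuron; stop here and begin seal formation.
        if r > statistics.mean(baseline) * (1 + jump_fraction):
            return step
        baseline = baseline[1:] + [r]    # update the running baseline
    return None
```

The later stages (applying suction to form a gigaseal, then breaking into the cell) follow similar threshold-based rules, which is what makes the whole procedure automatable in the first place.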

Opto-fMRI

fMRI has traditionally been used for mapping the brain and correlating brain function with specific structures. The method has become a sort of laughing-stock within the electrophysiological community because of the countless studies that proclaim region A to be responsible for function B. A typical blunder goes like this: "Increased activation of the amygdala during a fear conditioning task suggests that the amygdala is the brain's fear center." To be fair, the method is still very useful, and serious scientists don't fall into this fallacy as often as the popular media does. Some outstanding questions are what the measured signal (the blood-oxygen-level-dependent, or BOLD, signal) actually means for neural activity; whether it's possible to disambiguate excitation from inhibition; how activation in one region affects connected regions; and what the causal relationships among activated regions are. To address the last two, Ed Boyden and colleagues at MIT used a combination of optogenetics and fMRI (opto-fMRI) in awake mice. The idea is that if they can change the dynamics of a defined population of cells in a localized and fast way (they infected pyramidal cells in mouse somatosensory cortex with ChR2), the network effects of that activation will be revealed by fMRI. In this way, they can both map the network-wide consequences of a known perturbation and validate the fMRI signal against a controlled input. One limitation that's still inherent to fMRI is its slow temporal resolution - while optogenetic stimulation changes membrane potential with millisecond resolution, the hemodynamic response that fMRI measures unfolds over seconds. Perhaps other imaging methods, like multiphoton imaging, may be used in the future to dissect large-scale circuits in awake animals.
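
To see how much the hemodynamic response smears fast events, it's enough to convolve a brief stimulus train with a canonical hemodynamic response function. A toy Python illustration (the double-gamma HRF below is a commonly used approximation, not the analysis from the paper):

```python
import numpy as np
from scipy.stats import gamma

dt = 0.01                              # seconds per sample
t = np.arange(0, 30, dt)

# Canonical double-gamma HRF: peaks around 5 s with a late undershoot.
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.max()

# A 1-second, 20 Hz train of brief light pulses starting at t = 1 s.
stim = np.zeros_like(t)
for onset in np.arange(1.0, 2.0, 0.05):
    stim[int(round(onset / dt))] = 1.0

bold = np.convolve(stim, hrf)[:len(t)] * dt

print("stimulation ends at t = 2 s; the predicted BOLD response peaks at "
      f"t = {t[bold.argmax()]:.1f} s")
# The millisecond-scale structure of the stimulus is completely washed out.
```

The BOLD response to a one-second pulse train doesn't even peak until several seconds after the light is off, which is why opto-fMRI can say which regions respond to a perturbation but not much about the fast dynamics of how they respond.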

Here is a nice interview with Jeff Lichtman of Harvard, who is working on a cellular-level map of synaptic connections in the brain (a connectome). The interview raises several questions, like: how can we collect thousands of petabytes (a petabyte is a million gigabytes) of data on the structure of the brain at the level of individual cells? Do we even need so much data? Even though connectomics won't reveal much about neural dynamics (i.e. how neurons actually transmit or integrate information), it should be a useful tool for further work in theoretical neuroscience. Someone has to do it. One caller in the interview asks a great question about the hard problem of consciousness: when scientists look at neuronal activity while someone is thinking of a childhood pet, where in the universe is that image of the dog? All the scientists see, after all, is electrical activity...

Trendy Lines for Coffee and Depression

"Can drinking coffee help stave off depression?", asks Anahad O'Connor on Tara Parker-Pope's Well Blog at the New York Times. A new paper in The Archives of Internal Medicine from Alberto Ascherio's group at the Harvard School of Public Health shows a dose-dependent trend between daily caffeine consumption and the relative risk of depression in women. The authors found that women who drink 4-6 cups of coffee daily have a reduced relative risk of becoming depressed compared to women who don't drink coffee. As every psychology student will tell you, "correlation is not causation." And while O'Connor's article is teeming with suggestions that coffee helps against depression (proper language is difficult when it comes to describing correlational data), the real offense goes deeper, into Dr. Ascherio's report. The only figure in the paper boasts a P value for a linear trend of 0.02 and some impressively overlapping 95% confidence intervals (C.I.; the interval, drawn as a vertical bar, that would contain the true relative risk in 95% of repeated samples).

There is no doubt that there is a trend in this data set. But with such a wide - and overlapping - spread of the relative risk of depression in each caffeine-consumption category, there is almost no chance of a statistically significant difference between any two groups (two equal, overlapping 95% C.I.s can still reflect a P < 0.05 difference if the means differ by more than about 3 standard errors, SE, since the lower and upper bounds of a C.I. together span about 4 SE; in this figure, the differences between any two means are clearly smaller than 2 SE, let alone 3). The women in this study in essence all have the same risk of depression regardless of whether they drink one or six cups per day. Important too is that the (valid) assumption that drinking no coffee (or rather, less than 100 mg of caffeine per day) gives one a relative risk of 1 has not been tested; would the trend still hold if we had a 95% C.I. for that first point? Will we ever see a public health recommendation to drink six cups of coffee daily to reduce the risk of depression by 20%? Correlation-causation notwithstanding, it's hard to imagine how one could be depressed while consuming such alarming doses of caffeine (that remark is anecdotal, of course).
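
To make the overlapping-C.I. argument concrete, here is a quick Python sketch of how one would formally compare two relative risks given only their estimates and 95% confidence intervals (the numbers are invented for illustration, not taken from the paper; relative-risk intervals are handled on the log scale):

```python
import math
from scipy.stats import norm

def compare_relative_risks(rr1, ci1, rr2, ci2):
    """Two-sided z-test for a difference between two relative risks,
    given each estimate and its 95% confidence interval."""
    # The standard error of log(RR) is recoverable from the CI width.
    se1 = (math.log(ci1[1]) - math.log(ci1[0])) / (2 * 1.96)
    se2 = (math.log(ci2[1]) - math.log(ci2[0])) / (2 * 1.96)
    z = (math.log(rr1) - math.log(rr2)) / math.sqrt(se1**2 + se2**2)
    return 2 * (1 - norm.cdf(abs(z)))

# Made-up illustrative numbers: heavily overlapping CIs for two coffee groups.
p = compare_relative_risks(0.90, (0.75, 1.08), 0.80, (0.64, 1.00))
print(f"p = {p:.2f}")   # far from 0.05: the two groups are indistinguishable
```

A significant trend across all the groups and a significant difference between any two particular groups are different claims, and only the first is what the paper's P value supports.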

Perhaps these issues are simply unavoidable in clinical research. After all, correlating something as complex as depression with something as unreliable as self-reported caffeine consumption (over a period of ten years, no less) is quite a challenge. But it is the responsibility of the science journalist to parse the torrent of new research that comes out daily and give the public an honest, unhyped account. Will we now have to grapple with caffeine-crazed, depression-wary Times readers?

The Hard Problem of Consciousness

You’re lying on a sandy beach on a hot sunny afternoon, enjoying a few hours of much needed laziness. As you open your eyes and confront the vastness of the ocean in front of you, light of around 470 nm wavelength hits your retina, kindling an impossibly long cascade of events in your brain: a molecule called retinal changes shape, neurons fire action potentials down the optic nerve, signals arrive at the lateral geniculate nucleus deep in the brain and trigger more action potentials in primary visual cortex at the back of your head, and so on. At some point, the mechanical wonder of 100 billion neurons working together produces something special: your experience of the color blue. What’s special is not that you can discriminate that color from others, nor that you are aware of it and paying attention to it. It is not notable that you can tell us about it, or assign a name to it. It’s that you have a subjective, qualitative experience of the color; there is something it is like to experience the color blue. Some philosophers call these experiences qualia – meaning “what kind” – but it is not important what kind of experience you are having, just that you are having one at all.

Modern science hypothesizes that subjective experience is a product of the brain, but has no explanation for it. The brain’s building blocks are neurons; their language is the action potential, an electrical impulse that relays information. Sensory molecules pick up information about the outside world and translate it into action potentials. The information is processed among many networks of neurons, and returns to the outside world via signals to muscles, which effect behavior. Somewhere between sensory molecules and muscles, the neurons organize to create systems for memory, attention, global access of information, self-awareness and language. How the brain achieves this feat is largely unknown, but neuroscientists are hard at work trying to elucidate the mechanisms responsible. The philosopher David Chalmers calls these the “easy” problems of consciousness, because science has the tools to ask questions about them and eventually solve them.

The easy problems have in common the fact that their explanation requires only a mechanism of their function; once we explain a mechanism by which neurons integrate information, for example, the problem of integration is solved. In contrast, experience, or the existence of qualia, is the “hard” problem of consciousness because it has no obvious function and is completely unmeasurable; science has no way of even proposing hypotheses about it.

Philosophical Zombies

Do you know that feeling you have when you fall in love? Most people describe it as something special, unexplainable, mysterious and wholly wonderful. Scientists will describe it in terms of molecules of oxytocin and vasopressin binding receptors on neurons in the midbrain. Surely love is not just a bunch of molecules running wild in your head? Yes and no. The molecules cause one to exhibit seriously strange behavior like not eating or sleeping, but out of their interactions emerges something more. That something is the feeling itself.

Physical rules and current neuroscientific evidence suggest that the brain should function as it does, but without producing feelings, sensations, or subjective experience; we should be philosophical zombies. Philosophical zombies are hypothetical beings that look and act exactly as humans do, but never actually have first-person qualitative experience of anything.

If a philosophical zombie met a nice girl, he would act as if he were in love. He would talk about his longing and joy, but he would not actually have the qualitative feeling of being in love. Even though they have brains just like ours, philosophical zombies are in essence robots – processing information, reporting mental states, registering pains and emotions, storing memories, but never actually experiencing any of it. There is nothing it is like to be a philosophical zombie; all processing goes on in the “subconscious.” This is exactly what science – in its current state – would predict: all cognitive processing should go on “in the dark,” without a conscious element.

Yet we obviously are not philosophical zombies. The processing that goes on in our brains is accompanied by a subjective experience. This experience is the most intimate thing you know – it’s almost impossible to imagine life without it – and for that reason, it is also the hardest thing to question or pinpoint in your own mind. Neuroscience hypothesizes that everything there is to your mind, including this subjective experience, is a product of physical events. But your experience itself is seemingly not physical; there is no known thing, energy field, radiation or force that is your subjective experience. All we can measure are molecular events and electrical interactions among neurons. So where does experience come from, and how can we study it?

Emergence

The answer may be found in the concept of emergence. From the interactions of many simple parts there sometimes emerges a behavior or property that cannot be predicted from, or reduced to, the properties of the constituents. One such unexpected property is the complex “society” produced by the simple behavior of individual ants, a society whose properties cannot be predicted from the behavior of any individual ant. In fact, adding up the contributions of all individual ants does not produce an effect equal to that of the ant colony as a whole. Other examples of emergence include snowflakes, which assemble out of interactions among water molecules at low temperatures; temperature itself, which arises from molecular kinetics; the stock market, which has no central planning or regulation; human society; and subjective experience.

Subjective experience is an emergent property of the brain. As such, it cannot be predicted from our current knowledge of the brain, or reduced to its basal parts. Individual neurons are not aware of anything at all, but 100 billion of them working together are.

Modern neuroscientists aim to peek into the brain at ever higher spatial and temporal resolutions, with the goal of recording the electrical activities of vast numbers of neurons. Once they have recorded the activity, the thinking goes, the only remaining task will be to figure out what the activity does. This logic is enticing, but falls short of explaining the entirety of the brain’s features. One problem is that the entity that emerges – subjective experience – is qualitatively different from neurons and their activities, just as society emerges from interactions among individuals but is qualitatively different from individuals. Moreover, if we were to describe the activities of all the individuals that comprise society, we would get no information about society; we would get noise from all the opposing actions. Likewise, if we describe the activities of all the neurons in the brain, all we get is the activities of all the neurons in the brain.

An additional barrier is that subjective experience is closed off from outside observation. The contents of your experience are available only to you, and scientists have no way of collecting the data of experience directly. While some neuroscientists are satisfied with collecting first-person data via verbal (human subjects) or behavioral (animal subjects) reports, the fact is that as soon as the subject translates first-person experience into a report, the data become third-person data.

If aliens discovered earth, they would have no way of knowing that humans had anything going on between their ears beyond electricity and chemistry. This is why neuroscience is so exciting: the most magical machine in the universe is in your head, and we have the opportunity to find out what makes it so special. As neuroscience attracts increasing amounts of talent and funding, we must not forget the most mysterious, least tangible question about the brain.

Is is not Ought

The killing of Osama bin Laden and the ensuing controversy over the widespread jubilation in the U.S. have prompted some scientists to explain the psychological and evolutionary basis of those celebrations. Unfortunately, some of them used science to argue that since joy in this situation is natural, it is also morally good. Regardless of one’s view on the appropriateness of celebrating a killing, the fact that it is natural to do so has no bearing on whether it is moral or right. Science grounds human behavior in the functions of neural networks and evolutionary adaptations, but it does not excuse us from taking responsibility for those behaviors. Just as promiscuity may be a natural temptation for males in a monogamous society without being morally acceptable, natural joy over the killing of an evil man is not necessarily good either. Our values come from philosophy, not empirical evidence. Or, as Hume wrote, what is is not necessarily what ought to be.

That does not mean that science and morality have nothing to do with each other. While it is illogical to justify a value using scientific facts, as some did with celebrations of Osama bin Laden’s killing, it is quite alright to use science to optimize the practice of an established value. If we deem it unacceptable to celebrate killing, we may use neuroscience to adjust educational techniques to instill that value in our children.

Most importantly, science writers have a responsibility to separate facts from values - what behavior is natural from what behavior is acceptable. They must be careful to note that a materialistic basis for mental events does not relieve us of responsibility for our actions. At the same time, readers must be wary of those who try to use scientific evidence to justify a moral agenda. Science alone will never be a basis for our values, but if used properly, it can help us realize the values we choose.

Background:

Jonah Lehrer on revenge

Benedict Carey on Celebrating Death

"Why we celebrate a killing," by Jonathan Haidt and my response.

New Class of Cognitive Enhancers to Transform Mankind

Scientists at Bewundgen University in Germany have discovered that a diet rich in petrolatum, a mixture of hydrocarbons, can greatly improve performance on a wide variety of cognitive tasks. The research, led by neuroscientist Dr. Hans Schweinstucken, followed three groups of human subjects for over a year. The first group was instructed to eat regularly, but to also consume 500 grams of petrolatum per day, in the morning after breakfast. The second group was given an energy-deficient supplement of sugar substitutes, and the third was not given anything at all. All groups were tested periodically on tasks of memory, abstract thinking, cognitive speed, and general agility. To their surprise, the researchers found that regular consumption of petrolatum improved subjects' recall, memory retrieval and abstract thinking while reducing overall agility, motivation and the ability to make decisions. In contrast, the group eating sugar substitutes performed significantly worse over time on tests of memory and abstract thinking, with 50% of the subjects hitting an all-time low of 25% correct responses on recall (vs. their performance prior to the experiment).

Dr. Schweinstucken speculates that the first group's reduced motivation and agility may have something to do with their major weight gain, which by itself remains a mysterious side effect. As for the mechanism of action, Dr. Schweinstucken proposes that petrolatum acts on inhibitory GABAergic interneurons in the neocortex, the brain region thought to be important for higher cognition, antagonizing GABA action and thereby reducing overall levels of inhibition in the brain. However, he warns that at doses higher than 500 grams per day, petrolatum may actually have a detrimental effect on cognition because it may saturate GABA receptors and the corresponding neurons, causing massive seizures; he is currently conducting experiments to test this hypothesis.

Meanwhile, for all you folks who have exams to study for, I recommend a trip to your local CVS, where petrolatum is sold over-the-counter as "Vaseline," or petroleum jelly.

Further reading: Schweinstucken et al. Petrolatum improves cognitive performance in humans. J Psycho Chemo Physio Med. 2011, April 1.

Untangling the Wires

Here's a great video summary from Nature on the recent advances in the field of connectomics by researchers at the Max Planck Institute in Germany and Harvard University:

The video on Nature Blogs

And the original research, here.

My previous post on Connectomics.