When we look back at the important advances in neuroscience in the 20th and 21st centuries, what will we remember? What will we still find useful and worth pursuing further? The field is still in its nascent stages, even a century after Ramon y Cajal showed evidence for the neuron doctrine, establishing the neuron as a fundamental unit of the nervous system, and Brodmann published the cytoarchitecture studies that convinced the world that the brain is divided into distinct areas and likely uses them to divvy up processing. Yet we still have virtually no clue how the brain works: there is no central theory and no cures for brain diseases; only a whole lot of curious, enthusiastic and optimistic minds, and some funding to help them get stuff done. It is no wonder, then, that some neuroscientists have a serious case of physics envy, which pushes them to develop predictive models that (sometimes) give important insights into what mother nature did to make the brain work.

A great example is the Hodgkin-Huxley model of the action potential. When Hodgkin and Huxley built the model in the early 1950s, biologists had little idea how cells generated such complex waveforms. Having observed conductance changes across the cell membrane during the action potential, Hodgkin and Huxley went on to show that the conductances were ion-selective and varied as functions of time and membrane potential. From this they predicted that whatever mediated the conductance of ions had to be voltage-sensitive and capable of fast molecular changes. That prediction launched a wide search for the ion conductors, which turned out to be voltage-gated sodium and potassium channels. The key point is that the model predicted something. Fast forward to 2011, and we still don't have a greater success story for predictive models than the Hodgkin-Huxley model.
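To make the point concrete, here is a minimal sketch of the Hodgkin-Huxley equations in Python. This is not Hodgkin and Huxley's original numerical procedure; it uses standard textbook parameter values for the squid giant axon and a simple forward-Euler integrator, and the injected-current value is an illustrative choice. The voltage-dependent gating variables m, h, and n are exactly the "functions of time and membrane potential" the model posited before anyone knew ion channels existed.

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid giant axon, modern sign convention).
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n (V in mV).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations under a constant
    injected current I_ext (uA/cm^2). Returns time and voltage traces."""
    steps = int(t_max / dt)
    V = -65.0                                    # resting potential, mV
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))    # gating variables start
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))    # at their steady-state
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))    # values for V = -65 mV
    t_trace, V_trace = np.zeros(steps), np.zeros(steps)
    for i in range(steps):
        # Ionic currents: each conductance is gated by m, h, n.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K * n**4 * (V - E_K)
        I_L  = g_L * (V - E_L)
        # Gating variables relax toward voltage-dependent steady states.
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        t_trace[i], V_trace[i] = i * dt, V
    return t_trace, V_trace

t, V = simulate()
```

With a sustained 10 uA/cm^2 current the model fires repeatedly, with the voltage swinging from near -65 mV up past 0 mV and back, reproducing the waveform that seemed so mysterious before the model existed.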
Neuroscientists today are gathering data by the terabyte, describing amazing properties of neurons and networks, and moving on to the next experiments. A typical electrophysiological experiment, for example, records from populations of cells with electrodes and describes the cells' firing properties while the brain is engaged in some behavior or other. What we need is the ability to predict major principles from the information we've gathered over the last century. Given that we observe gamma rhythms during object recognition, for example, can we predict not only whether gamma is required for that task, but how it helps the brain achieve it? Can we predict, from such observations, what features the brain must have to accomplish the task? If we predict that cortical connections constitute "small world" networks, can we understand the rules for wiring better? Better yet, can we infer what the wiring rules must be? As we develop ever more sophisticated tools to study the brain, we should have an easier time making predictions about how it works. We have to step up to the plate in the 21st century and produce theories that do more than describe what we see. These theories have to capture not only the complexity of the system but also the relative simplicity by which the system is created.
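The small-world claim is itself a testable, quantitative prediction: high local clustering (like a lattice) combined with short path lengths (like a random graph). A quick sketch using the networkx library's Watts-Strogatz generator shows the signature; the network size, degree, and rewiring probability here are illustrative choices, not measured cortical values.

```python
import networkx as nx

# A Watts-Strogatz "small world": start from a ring lattice (high
# clustering) and rewire a small fraction p of edges, which creates
# shortcuts that slash the average path length.
ws = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)

# Fully rewired (p = 1) yields an essentially random graph for comparison.
rand = nx.connected_watts_strogatz_graph(n=1000, k=10, p=1.0, seed=1)

C_ws, C_rand = nx.average_clustering(ws), nx.average_clustering(rand)
L_ws = nx.average_shortest_path_length(ws)
L_rand = nx.average_shortest_path_length(rand)

# Small-world signature: clustering far above random,
# path length comparable to random.
print(f"clustering:  {C_ws:.3f} (small world) vs {C_rand:.3f} (random)")
print(f"path length: {L_ws:.2f} (small world) vs {L_rand:.2f} (random)")
```

If measured cortical wiring diagrams show the same signature, that constrains the wiring rules: local connectivity must dominate, with a sparse admixture of long-range projections.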