Neural Circuits and Neural Computation: A Systems-level Perspective
David C. Van Essen
Washington University in St. Louis
The amazing computational capabilities of the human brain reflect
the dynamic flow of information through its fabulously complex neural
circuitry. Elucidating the wiring diagram of the primate brain in general and
the human brain in particular represents one of the grand challenges of neural
computation. As the dominant structure of the brain (and in humans the most
variable structure), the cerebral cortex is especially intriguing but also
especially challenging to decipher. This presentation will focus on new
neuroimaging approaches that show much promise for revealing the circuitry and
functional organization of cerebral cortex in humans and nonhuman primates. A
quantitative understanding of neural connectivity patterns at both macroscopic
and microscopic levels will allow fundamental advances in modeling biologically
plausible neural circuits that emulate many functions of the human brain.
Internal models, adaptation, and the timescales of memory
Reza Shadmehr
Johns Hopkins University
When the brain generates a motor command, it also predicts the
sensory consequences of that command via an "internal model". The
reliance on these predictions allows the brain to sense the world better than is
possible from the sensors alone. However, this holds only when the models are
accurate. To keep the models accurate, the brain must constantly learn from
prediction errors. Here I use examples from saccade and reach adaptation to
demonstrate that learning is guided by multiple timescales: a fast system that
strongly responds to error but rapidly forgets, and a slow system that weakly
responds to error but has good retention. What are these systems learning? In
principle, the brain could be learning to more accurately predict the sensory
consequences of motor commands and correct movements as they occur (i.e., learn
a forward model). Using the theoretical framework of stochastic optimal control,
I show that such adaptation should leave its signature in saccade trajectories.
Experiments on a novel form of saccade adaptation seem to bear out the
predictions. Therefore, it appears that motor errors give rise to multiple
timescales of adaptation, and the fastest timescales learn forward models.
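The fast/slow decomposition described above can be sketched as a simple two-state learner. This is a minimal illustration, not the speaker's actual model: the state names and parameter values below are assumptions chosen only to make the fast system learn strongly and forget quickly while the slow system does the opposite.

```python
import numpy as np

def simulate(perturbation, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Return net adaptation x = x_f + x_s across trials.

    A_* are retention factors, B_* are error sensitivities
    (illustrative values, not from the talk).
    """
    x_f = x_s = 0.0
    out = []
    for p in perturbation:
        e = p - (x_f + x_s)        # prediction error on this trial
        x_f = A_f * x_f + B_f * e  # fast: learns strongly, forgets quickly
        x_s = A_s * x_s + B_s * e  # slow: learns weakly, retains well
        out.append(x_f + x_s)
    return np.array(out)

# Adapt to a constant perturbation for 200 trials, then 50 washout trials.
trials = np.concatenate([np.ones(200), np.zeros(50)])
x = simulate(trials)
```

After training, most of the retained adaptation lives in the slow state, which is how such models account for good retention despite rapid initial forgetting.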
Insect vision: Physical constraints in natural information processing
Rob de Ruyter van Steveninck
Department of Physics, Indiana University, Bloomington
Information processing by the brain is often understood to be
constrained by the properties of the neural hardware that carries out the
underlying computations. However, living systems cannot freely choose the
quality of their sensory input. That is dictated by physical properties of the
environment and by the necessity to respond to external stimuli in time. In
other words, there are constraints on information processing, independent of
the animal or its neural substrate, and these constraints are to some extent
universal. Our knowledge of neural processing has mostly come from laboratory
experiments, and so our understanding of those constraints, as they arise in
real natural conditions, is still in its infancy. It will be interesting to
quantify them, to see how they affect information processing strategies in real
animals, and to assess whether the solutions that animals use are close to
optimal in a way that we can understand.
The visual system is a good model for a study of these questions,
because vision naturally operates over an enormous range of light intensities,
that is, an enormous range of signal to noise ratios. Insect visual systems in
particular are generally very amenable to quantitative analysis. I will
introduce the subject with some historic examples that illuminate problems and
solutions in insect vision, ranging from the optics of the insect eye, to
motion vision, and behavior. Then I will discuss some of our early experiments
and analyses on motion estimation in a natural context, illustrating the need
for the system to adapt its computational strategies in order to cope with
large variations in signal and noise. Work in this vein is still in its early
stages. For the not too distant future, it is my hope that a combined effort in
experiment and theory can achieve a deeper and more quantitative understanding
of sensory information processing in the much richer context offered by the
complexities and uncertainties of the natural world.
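One concrete example of a physical constraint of the kind discussed above is photon shot noise: at low light the best achievable signal-to-noise ratio of an intensity estimate grows only as the square root of the mean photon count, regardless of the neural hardware downstream. A toy numerical check (the photon counts are illustrative, not measured):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_snr(mean_photons, n_trials=200_000):
    """SNR (mean/std) of Poisson photon counts in one integration time."""
    counts = rng.poisson(mean_photons, size=n_trials)
    return counts.mean() / counts.std()

# For Poisson arrivals, SNR should track sqrt(mean photon count).
snr_dim = empirical_snr(10.0)       # dim light, ~sqrt(10)
snr_bright = empirical_snr(1000.0)  # bright light, ~sqrt(1000)
```

The hundredfold change in intensity buys only a tenfold change in SNR, which is why a visual system operating across natural light levels must adapt its processing strategy rather than rely on a single fixed one.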
The Threefold Way in Computational Neuroscience
Henry Abarbanel
University of California, San Diego
There appear to be at least three identifiable approaches to
Computational Neuroscience. After identifying these views and commenting on
them from my own perspective, I will focus on the view I think will be most
productive both for Neuroscience as a whole and for organizations such as the
Los Alamos National Laboratory specifically.
This op-ed introduction will be followed by a discussion of a
specific problem solved by neural systems in a variety of different ways:
telling time. On scales from a few microseconds to many hours animals need to
address the passage of time. I will review some of the known strategies for
this and speculate on others.
Not to be too mysterious about the connection between this and
the beginning of the talk: I choose the second of the three approaches.
The critical role of electrical coupling in the generation of population
oscillations in neocortex, at frequencies from <1 Hz to >100 Hz
Roger Traub
SUNY Downstate Medical Center
The neocortex generates oscillations at many different
frequencies, the pattern of which correlates (in vivo) with the sleep/wake
cycle and, in the waking state, with sensory stimulation and cognitive tasks.
There are also correlations with the initiation and progression of epileptic
seizures. Many of these oscillations can be replicated in brain slices, from
both rodents and (more recently) humans, with a remarkable similarity, at the
cellular level, to in vivo oscillations. In addition, detailed network
simulations have advanced to the state where cellular oscillation patterns can
be replicated and specific experimentally testable predictions offered - in
some cases, already verified. Remarkably, most oscillation types in the
neocortex, and hippocampus also, depend on electrical coupling between
pyramidal neurons, and such coupling appears to exist at an unexpected site -
between axons. I shall review the morphological data on this type of coupling,
and also the phenomenology and mechanisms of gamma (30 - 80 Hz), beta2 (20 - 30
Hz) and very fast (>80 Hz) oscillations; and I shall outline how large-scale
modeling of a thalamocortical column, using multi-compartment,
multi-conductance neurons, has contributed to our understanding.
For the future, it is safe to say that models and theories of
neocortical function will need to take account of electrical coupling between
neurons, in addition to chemical synaptic interactions.
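The essential effect of the electrical coupling discussed above can be illustrated with a toy model: a gap junction contributes a current proportional to the voltage difference between two cells, pulling their membrane potentials together. The sketch below uses two passive (non-spiking) cells with illustrative parameters; it is not a model of the axo-axonal coupling or the oscillations themselves.

```python
def simulate_pair(g_gap, v0=(-0.060, -0.080), v_rest=-0.070,
                  tau=0.02, dt=1e-4, steps=1000):
    """Euler-integrate two passive cells coupled by a gap junction.

    g_gap is the coupling strength (per volt, per second, folded into the
    membrane equation). Returns the final |V1 - V2| after `steps` steps.
    """
    v1, v2 = v0
    for _ in range(steps):
        i_gap = g_gap * (v2 - v1)               # current flowing cell 2 -> 1
        v1 += dt * ((v_rest - v1) / tau + i_gap)
        v2 += dt * ((v_rest - v2) / tau - i_gap)
    return abs(v1 - v2)

# Coupling sharply accelerates the decay of the inter-cell voltage difference.
diff_uncoupled = simulate_pair(g_gap=0.0)
diff_coupled = simulate_pair(g_gap=50.0)
```

This convergence of membrane potentials is the elementary mechanism by which electrical coupling promotes the synchrony underlying population oscillations.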
Toward a new science of connectomics
Sebastian Seung
Howard Hughes Medical Institute and MIT
Judging from current progress in nanoscale imaging and cutting,
histochemical and genetic methods for staining, and computational algorithms
for image analysis, it should soon be possible to create automated systems that
will take a sample of brain tissue as input and generate its "connectome," a
list of all synaptic connections between the neurons inside. Such systems will
give rise to a new field called "connectomics," defined by the
high-throughput generation of data about neural connectivity, and the subsequent
mining of that data for knowledge about the brain. I will discuss the possible
impact that connectomics could have on our understanding of how the brain wires
and rewires itself, the dynamics of activity in neural networks, and the
neuropathological basis of mental disorders.
Memory and the Computational Brain
C. Randy Gallistel
Rutgers University
A read-write memory (Turing's tape) is implied by behavioral
evidence for the kinds of computations performed by even insect brains (e.g.,
dead reckoning) together with what computer scientists understand about the
limitations that a finite state architecture places on computational power.
However, neuroscientists have not looked for and (therefore?) not found a
read-write memory mechanism. The absence of such a mechanism is often taken as
a virtue, despite its relegation of the nervous system to the computationally
weaker class of finite state machines. Computational models in contemporary
cognitive science routinely presuppose the much more powerful Turing architecture,
which is why they are "neurobiologically implausible." I argue that this is a
problem for neuroscience, not cognitive science. There must be a read-write
memory mechanism. Its role in the causation of behavior is as central as the
role of the read-only molecular genetic memory mechanism in the causation of
biological structure. Its discovery will transform our understanding of
neurobiology, just as the discovery of the structure of the gene transformed
biochemistry.
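The dead-reckoning example cited above makes the argument concrete: the animal must store a running position estimate and update it on every step, i.e. read a stored value, modify it, and write it back. A minimal sketch (the step format and names are illustrative, not from the talk):

```python
import math

def dead_reckon(steps):
    """Integrate (heading_radians, distance) steps into a displacement.

    Returns (x, y) relative to the start point. Holding and updating
    (x, y) is the read-write operation that a pure finite-state machine
    cannot implement for an unbounded number of distinct positions.
    """
    x = y = 0.0                        # the stored state: read ...
    for heading, dist in steps:
        x += dist * math.cos(heading)  # ... modified ...
        y += dist * math.sin(heading)  # ... and written back
    return x, y

# An outbound foraging path; the home vector is the negative of the result.
path = [(0.0, 3.0), (math.pi / 2, 4.0)]
pos = dead_reckon(path)
```

Because the set of reachable positions is effectively unbounded, no fixed finite set of states can encode them all, which is the crux of the computational argument for a read-write memory.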
Ensemble coding of visual motion in the primate retina and its readout in the brain
E.J. Chichilnisky
The Salk Institute
One of the great challenges in neuroscience is to understand the
function of population codes. This entails answering at least three major
questions: (1) how do populations of neurons encode information in their
collective activity? (2) how are population codes read out by downstream
neurons? (3) how do population codes influence sensation and behavior? The
primate visual system illustrates these problems in abundance. Specifically, as
signals flow from the peripheral to the central visual system, receptive fields
become increasingly large and complex, reflecting readout of population coded
signals at successive stages of processing. A comprehensive investigation of
these computations therefore requires that one be able to experimentally
monitor the entire population code and its readout, a demand that until
recently has been technically prohibitive. In this talk I will describe our
studies of a behaviorally important population code and its readout in the
primate visual system. Visual motion is represented in the retina by traveling
waves of activity in many non-direction-selective neurons. The direction and
speed of these waves are read out by downstream neurons to control perception
and behavior. We exploited a newly developed large-scale electrophysiological
recording system to measure a substantial fraction of the population code for
visual motion over a significant region of primate retina. To test how effectively
the population code is read out by central neurons, we compared speed estimates
obtained from retinal activity to speed estimates made by human observers
in matched stimulus conditions. We find that for brief, small stimuli,
behavioral motion sensing performance approaches the limits imposed by the
retinal signal, suggesting that population code readout can be efficient and
nearly noiseless. On the other hand, for extended stimuli, behavioral motion
sensing performance falls far short of limits imposed by the retinal signal,
indicating that central readout of the peripheral population code can place the
ultimate limit on sensation and behavior. We discuss the implications of these
findings for how motion is computed in the brain. We also discuss the factors
that have made it possible to obtain a comprehensive view of the population
code, and the parallels that might be expected in future investigations of
neural population codes.
Grand challenges in auditory research
Israel Nelken
Hebrew University
The auditory system has highly developed subcortical structures
which are among the best understood in the brain. Furthermore, a number of
rather simple rules, together with a rough understanding of peripheral representations,
are sufficient to account for a surprisingly large number of perceptual
phenomena. Nevertheless, we understand very little about the way these lower
representations are combined to solve the 'hard' problems of audition, such as
pitch representation, spatial localization in realistic conditions, speech
understanding, or even seemingly simpler processes such as segregating the
incoming sound into its component 'objects'. I will argue that the common
feature of these hard problems is the need to integrate information across both
frequency and time, neither of which occurs at the lower representation levels.
I will present a number of (not necessarily mutually exclusive) views of how
auditory cortex may participate in these tasks. In order to discriminate
between these possibilities it will be necessary to combine behavioral studies,
multi-single neuron recordings and active manipulation of neural activity at a
single-neuron resolution.
Nerve cell networks on microelectrode arrays: platforms for investigations of
neuronal dynamics underlying information processing
Guenter W. Gross
University of North Texas, Denton
It is unlikely that we will achieve a quantitative understanding
of information processing in the vertebrate brain until we understand
spatio-temporal action potential pattern processing in small neuronal ensembles
or networks. All information enters in parallel, is processed in parallel, and
shapes behavioral patterns in parallel. Computation seems to be performed
primarily by colliding patterns with associated constructive and destructive
interference. These phenomena are superimposed on spontaneous activity with
complex effects on gating sensory information. In the extreme, spontaneous
activity is either anticipatory, which facilitates rapid output pattern generation,
or antagonistic, which can block incoming sensory information, as is seen in
thalamo-cortical circuitry during sleep.
The requirement to quantify spatio-temporal patterns is
unavoidable, and methods must be developed that capture the simultaneity of neuronal
output patterns in neuronal circuits, networks, or ensembles. Although single
neuron behavior cannot be ignored, it is the cell group that provides
reproducibility, fault tolerance, storage of experience-dependent responses,
and (possibly) "decision states". Cell group dynamics must receive emphasis for
a "bottom-up" construction of brain function, but such dynamics are difficult to study in
situ. Primary cultures on microelectrode arrays (MEAs) form stable,
spontaneously active networks that provide superior, long-term readout from
many discriminated units, and simultaneous optical information on network
morphology. In the past decade they have received extensive pharmacological and
toxicological attention and can be considered "histiotypic", as their responses
are highly similar to those of the parent tissue in situ.
Given their thorough pharmacological characterization, it is now
prudent to explore the more difficult domains of structure-function
relationships and network dynamics with these platforms. Electrical stimulation
is possible through the recording electrodes and responses to weak, pulsed
magnetic fields have been demonstrated. Recently, it was shown that such
networks in culture are weakly disassortative small-world graphs, which differ
significantly in their structure from randomized graphs with the same average
connectivity (1). It is now possible to explore the internal dynamics of
self-organized neuronal systems and ask key questions such as: (a) What is the
origin and purpose of spontaneous activity? (b) What is the nature of
biological fault tolerance? (c) How do networks select or develop specific
spatio-temporal patterns? (d) What are the mechanisms and manifestations of
pattern storage? (e) Can specific patterns be imposed on a network via external
stimulation? (f) How do several networks interact if coupled electrically?
Spontaneously active mammalian tissue on MEAs opens a window to the internal
dynamics of networks with realistic applications to studies of pattern
processing and to basic theoretical questions on the nature of information
processing. Such preparations also find applications as tissue-based biosensors and in
areas such as toxicology and drug development. This presentation will summarize
the progress made with these platforms, discuss the remaining problems, and
outline realistic future research efforts.
(1) Bettencourt et al. (2007), Physical Review E (in press).
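The graph-theoretic comparison mentioned above (cultured networks versus randomized graphs with the same average connectivity) can be illustrated with one such statistic, the clustering coefficient. The sketch below compares a locally wired ring lattice against a degree-matched random graph; the graph sizes are illustrative and the code is not the analysis of reference (1).

```python
import itertools
import random

def avg_clustering(adj):
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    total = 0.0
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def ring_lattice(n, k):
    """Undirected ring: each node linked to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def random_graph(n, n_edges, seed=0):
    """Random graph with the same node and edge counts."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    edges = set()
    while len(edges) < n_edges:
        a, b = rng.sample(range(n), 2)
        if (a, b) not in edges and (b, a) not in edges:
            edges.add((a, b))
            adj[a].add(b)
            adj[b].add(a)
    return adj

n, k = 60, 3  # 60 nodes, 180 edges in each graph
c_lattice = avg_clustering(ring_lattice(n, k))
c_random = avg_clustering(random_graph(n, n * k))
```

Locally wired graphs retain much higher clustering than randomized graphs at the same connectivity, which is the kind of structural difference the cultured-network result turns on.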
Neurogrid: Emulating a million neurons in the cortex
Kwabena Boahen
Stanford University
I will present a proposal for Neurogrid, a specialized hardware
platform that will perform cortex-scale emulations while offering software-like
flexibility. Recent breakthroughs in brain mapping present an unprecedented
opportunity to understand how the brain works, with profound implications for
society. To interpret these richly growing observations, we have to build
models—the only way to test our understanding—since building a real
brain out of biological parts is currently infeasible. Neurogrid will emulate
(simulate in real-time) one million neurons connected by six billion synapses
with Analog VLSI techniques, matching the performance of a one-megawatt,
500-teraflop supercomputer while consuming less than one watt. Neurogrid will
provide the programmability required to implement various models, replicate
experimental manipulations (and controls), and elucidate mechanisms by
augmenting Analog VLSI with Digital VLSI, a mixed-mode approach that combines
the best of both worlds. Realizing programmability without sacrificing scale or
real-time operation will make it possible to replicate tasks laboratory
animals perform in biologically realistic models for the first time, which my
lab plans to pursue in close collaboration with neurophysiologists.
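As a rough software stand-in for the kind of unit such a platform emulates, here is a leaky integrate-and-fire neuron stepped forward in discrete time. All parameters are illustrative; Neurogrid's actual silicon neurons are analog circuits, not this code, and the point is only to show what "emulating a neuron" computes.

```python
def lif_run(current, dt=1e-4, tau=0.02, v_rest=-0.070,
            v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    """Euler-integrate dV/dt = (v_rest - V + R*I)/tau; return spike times (s)."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt * (v_rest - v + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossing: emit spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 0.3 nA input for 100 ms drives regular spiking.
spikes = lif_run([0.3e-9] * 1000)
```

Scaling this loop to a million neurons and six billion synapses in real time is precisely the workload that motivates trading a supercomputer's digital arithmetic for analog circuit dynamics.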
Design Principles that Endow the Brain with a Scalable Architecture
Charles F. Stevens
Salk Institute
One of the Grand Challenges is to learn what mathematical
operations are performed by neuronal circuits. The vertebrate brain has a
scalable architecture – the computations become better in some way as the
size of a circuit is increased – and understanding the scalability can
place constraints on the types of computations done or offer clues about the
nature of the computations. I will outline some methods for studying
scalability rules in vertebrate brains, and illustrate these methods with a
particular example of a universal scaling law and its underlying principle.
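A standard way to test a candidate scaling law of the kind mentioned above is to fit a power law y = a * x^b by linear regression in log-log coordinates. The sketch below uses synthetic data purely to illustrate the method; the exponent and the "region size versus neuron count" framing are assumptions, not the talk's actual law.

```python
import numpy as np

def fit_power_law(x, y):
    """Return (a, b) for y ~ a * x**b via least squares on log-log data."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Synthetic measurements obeying y = 2 * x^0.75 exactly.
x = np.array([1.0, 10.0, 100.0, 1000.0])
y = 2.0 * x ** 0.75
a, b = fit_power_law(x, y)
```

On real anatomical data the fitted exponent b is the quantity of interest: its value constrains which classes of computation could produce the observed scaling.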
Imaging Associative Neural Plasticity in Man
Claudia D. Tesche
University of New Mexico
Magnetoencephalography (MEG) provides an opportunity to observe the dynamics of human brain function with exquisite temporal resolution. Aversive (fear) conditioning may result from the repeated pairing of a neutral "conditioned" visual stimulus (CS) with an aversive "unconditioned" auditory stimulus (US). This association leads to a learned response: presentation of the CS in isolation elicits behaviors associated with the US, even though no such stimulus is presented. Although aversive conditioning has been studied intensively in animal models, little is known about the dynamics of the conditioned response in the normal human brain. We utilized an MEG array to study associative neural plasticity in normal adults. The CS presented in isolation following training elicited activation of auditory cortex and amygdala. In a subsequent study, the inter-stimulus interval between CS and US was shortened from 1500 ms to 418 ms. Visual CS predictive of aversive noise continued to elicit responses in auditory cortex, as well as frontal areas and cerebellum, although activation of the amygdala was strongly suppressed.