Wolfram Summer School Alumni

Benjamin Rapoport

Class of 2008

Bio

Benjamin Rapoport is an MD-PhD candidate at Harvard Medical School and a doctoral student in the Department of Electrical Engineering at MIT. His research and professional interests include designing and neurosurgically implanting interfaces with the brain and nervous system to repair and augment neurologic function. He is currently involved in developing electronic microchip interfaces with the brain, to be used in neural prosthetic systems for paralyzed and disabled people. These advanced prostheses will enable people to communicate with computers and operate robotic prosthetic limbs or other devices using only their thoughts. His present work is on algorithms and ultra-low-power electronic architectures for decoding neural signals in the context of fully brain-implantable brain-machine interfaces.

At a more basic level he is interested in understanding how computation is accomplished in biological systems such as cellular signaling pathways, gene-protein networks, and cortical neuronal circuits. He hopes that the NKS Summer School will provide opportunities to gain insight into the primitives that underlie computation in a range of biological systems.

Project: Neuronal Computations Emulating Real-World Dynamics

Some of the richness and complexity associated with cognitive processes such as imagination, intuition, learning, and memory can be attributed to the internal dynamics of neuronal networks that perform computations in the brain. Traditional approaches to modeling neural networks often study their computational properties by focusing on their input-output relationships. Such approaches typically neglect the potentially complex behaviors that may occur at the level of internal processing layers within a neural network as it computes. The aim of this project is to explore and characterize a class of such behaviors using a geometric approach, and to consider how those behaviors might relate to cognitive phenomena.

There is good experimental evidence that certain populations of neurons in the mammalian brain, known as place cells [O’Keefe and Nadel], encode maps of a physical environment. Knowing that some aspects of cortical information processing are organized in hierarchies of feature detectors, it seems natural to generalize observations about place cells by hypothesizing that the geometric structure of more general feature spaces might also be learned and encoded by biological neural networks; such features might be as concrete as perceived sensations or as abstract as derived quantities. In populations of place cells, neuronal activity in dreaming rats has been observed to recapitulate the patterns generated as the animals navigate their environment while awake [Wilson], reflecting the continuous paths the animals know they can follow. Similarly, one might expect the dynamics of neuronal populations encoding a more general feature space to reflect and be constrained by the geometric structure of that space, and that memory and reasoning processes involving such a space would be influenced and constrained in related ways. This hypothesis will be explored using a model constructed as follows.

Consider a two-dimensional feature space S with a manifold structure, so that distances on S can be computed. Model the neural network encoding S as a two-dimensional array SA of elements (cells) coupled in nearest-neighbor fashion. By coordinatizing S, each cell can be assigned to a grid point in the discretized feature space. Assign weights to the couplings as a function of distance along S between neighboring cells. Adopt a neuron-like activity rule for the state of the cells. Explore the range of behaviors exhibited by the network in response to a variety of initial states of increasing complexity.
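
As a rough illustration of this construction, the following Wolfram Language sketch evolves a toroidal grid of two-state cells under a threshold activity rule with distance-dependent nearest-neighbor weights. The grid size, the exponential decay of coupling with distance, the placeholder distances, and the threshold value are all illustrative assumptions, not part of the project specification.

(* Minimal sketch: toroidal grid, distance-dependent nearest-neighbor weights, threshold rule. *)
(* All parameter choices below are illustrative assumptions. *)
n = 40;
metricWeight[d_] := Exp[-d];                (* assumed decay of coupling with distance on S *)
dist = RandomReal[{0.5, 1.5}, {4, n, n}];   (* placeholder distances to the four neighbors *)
w = Map[metricWeight, dist, {3}];           (* one weight matrix per neighbor direction *)
theta = 1.5;                                (* assumed firing threshold *)

step[state_] := Module[{up, down, left, right, drive},
  up = RotateLeft[state, {1, 0}]; down = RotateRight[state, {1, 0}];
  left = RotateLeft[state, {0, 1}]; right = RotateRight[state, {0, 1}];
  drive = w[[1]] up + w[[2]] down + w[[3]] left + w[[4]] right;
  UnitStep[drive - theta]                   (* a cell fires if its weighted input exceeds the threshold *)
];

init = ReplacePart[ConstantArray[0, {n, n}], {n/2, n/2} -> 1];
history = NestList[step, init, 50];
ArrayPlot[history[[-1]]]

Replacing the random placeholder distances with distances computed from a coordinatization of S would give the weighted coupling described above.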

The proposed model corresponds to a two-dimensional totalistic cellular automaton with weights derived from the metric structure of an underlying space S; a variety of well-known manifolds will be considered for S. For the cell activity rules, subsets of the classes of two-color, four- and eight-neighbor two-dimensional totalistic rules will be considered initially, restricting the rules according to constraints of neurobiological plausibility.
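
For the purely totalistic cases, such rules can be run directly with CellularAutomaton by supplying a weight matrix. The code numbers and step counts below are arbitrary choices made only to show the rule specifications; with all weights equal to one, a four-neighbor (von Neumann) two-color totalistic rule has codes 0-63, and an eight-neighbor (Moore) rule has codes 0-1023.

(* Two-color totalistic rules with four- and eight-neighbor neighborhoods, *)
(* specified via weight matrices; the code numbers 30 and 451 are arbitrary. *)
vonNeumann = {{0, 1, 0}, {1, 1, 1}, {0, 1, 0}};    (* four neighbors plus center: codes 0-63 *)
moore = ConstantArray[1, {3, 3}];                  (* eight neighbors plus center: codes 0-1023 *)
ArrayPlot[Last[CellularAutomaton[{30, {2, vonNeumann}, {1, 1}}, {{{1}}, 0}, 40]]]
ArrayPlot[Last[CellularAutomaton[{451, {2, moore}, {1, 1}}, {{{1}}, 0}, 40]]]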

Extensions to these explorations include allowing activity patterns to modify the connectivity weights as the network state evolves (simulating learning), and allowing the system to be driven by external inputs as it evolves, for example by directly enforcing rule-independent state changes in a subset of cells as a function of time (simulating the network response to external stimuli or new information).
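
A hedged sketch of these two extensions, reusing the grid, weights w, threshold theta, and initial state init from the sketch above: couplings between coactive neighbors are strengthened in a Hebbian-like way, and a small patch of cells is clamped to the active state on alternate steps to stand in for an external input. The learning rate, the clamped region, and the clamping schedule are all assumptions made for illustration.

(* Extension of the earlier sketch: Hebbian-like weight updates plus a set of *)
(* externally clamped cells; eta, the clamped patch, and the schedule are assumptions. *)
eta = 0.05;                          (* assumed learning rate *)
clamped = Tuples[Range[3], 2];       (* assumed externally driven patch of cells *)

driveStep[{state_, weights_}, t_] := Module[{up, down, left, right, drive, new, nbrs, dw},
  up = RotateLeft[state, {1, 0}]; down = RotateRight[state, {1, 0}];
  left = RotateLeft[state, {0, 1}]; right = RotateRight[state, {0, 1}];
  drive = weights[[1]] up + weights[[2]] down + weights[[3]] left + weights[[4]] right;
  new = UnitStep[drive - theta];
  (* rule-independent external input: clamp the patch to 1 on even steps *)
  If[EvenQ[t], new = ReplacePart[new, Thread[clamped -> 1]]];
  (* Hebbian-like plasticity: strengthen couplings between coactive neighbors *)
  nbrs = {up, down, left, right};
  dw = eta Table[new nbrs[[k]], {k, 4}];
  {new, weights + dw}
];

{finalState, finalWeights} = Fold[driveStep, {init, w}, Range[50]];
ArrayPlot[finalState]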

In the language of Wolfram’s A New Kind of Science, this model aims to explore how the structure of feature spaces might be learned and encoded by neuronal populations, how the structure of such spaces might influence the dynamics and complexity of the computations performed by neuronal populations in the brain, and how such neural activity patterns might affect processes of perception and analysis.

References

O’Keefe, J., and Nadel, L. The Hippocampus as a Cognitive Map. Oxford University Press, 1978.

Lever, C., Wills, T., Cacucci, F., and Burgess, N. “Long-Term Plasticity in Hippocampal Place-Cell Representation of Environmental Geometry.” Nature 416 (2002): 90-94.

Knierim, J. J., Kudrimoti, H. S., and McNaughton, B. L. “Place Cells, Head Direction Cells, and the Learning of Landmark Stability.” Journal of Neuroscience 15 (1995): 1648-1659.

Ji, D. Y., and Wilson, M. A. “Coordinated Memory Replay in the Visual Cortex and Hippocampus During Sleep.” Nature Neuroscience 10 (2007): 100-107.

Lee, A. K., and Wilson, M. A. “Memory of Sequential Experience in the Hippocampus during Slow Wave Sleep.” Neuron 36, 6 (2002): 1183-1194.

Louie, K., and Wilson, M. A. “Temporally Structured Replay of Awake Hippocampal Ensemble Activity during Rapid Eye Movement Sleep.” Neuron 29, 1 (2001): 145-156.

Project-Related Demonstrations

Cellular-Automaton-Like Neural Network in a Toroidal Vector Field

View demonstration at the Wolfram Demonstrations Project

Favorite Radius 3/2 Rule

Rule 1498

To select a cellular automaton of personal significance in the class (k = 2, r = 3/2), I searched for rules that quickly generate my name in Morse code (in a window centered on the initial condition) when initialized from a single black cell.

In this encoding, Morse dots correspond to isolated ones (black cells), dashes correspond to pairs of ones, the dots and dashes within a letter are separated by single zeros (white cells), and the groups of dots and dashes encoding individual letters are separated by pairs of zeros.

Rule 1498 generates “BEN” in Morse code on step 67, faster than any other rule in the class.
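
A sketch of how such a search might be set up is below. It hard-codes the Morse encoding of “BEN” described above, implements a k = 2, r = 3/2 rule update directly (the class contains 2^16 = 65,536 rules), and reports the first step at which the window centered on the initial cell matches the target. The neighborhood alignment (offsets -1, 0, 1, 2, leftmost cell most significant) and the exact matching criterion are assumptions, so this illustrates the idea rather than reproducing the original search.

(* "BEN" in the encoding above: B = -... , E = . , N = -. ; dot = 1, dash = 1,1; *)
(* single 0 between dots and dashes, 0,0 between letters. *)
target = {1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1};

(* One step of a k = 2, r = 3/2 rule on a cyclic row, assuming the new cell value *)
(* depends on the cells at offsets -1, 0, 1, 2 (this alignment is an assumption). *)
caStep[rule_, row_] := Module[{d = IntegerDigits[rule, 2, 16], padded},
  padded = Join[{Last[row]}, row, Take[row, 2]];
  Table[d[[16 - FromDigits[padded[[i ;; i + 3]], 2]]], {i, Length[row]}]
];

(* Evolve from a single black cell on a wide row and report the first step at which *)
(* the window centered on the initial cell matches the target, if any. *)
width = 501; center = 251; half = (Length[target] - 1)/2;
firstMatch[rule_, tmax_] := Module[{row = ReplacePart[ConstantArray[0, width], center -> 1]},
  Catch[
    Do[
      row = caStep[rule, row];
      If[row[[center - half ;; center + half]] === target, Throw[t]],
      {t, 1, tmax}];
    None]
];

firstMatch[1498, 80]
(* A full search would scan the whole class, e.g. *)
(* Select[Range[0, 65535], IntegerQ[firstMatch[#, 80]] &] *)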