Peiran Gao
Theory: Neural Computation

Personal Background

I grew up in Nanjing, China, and moved to Southern California with my family just before I turned sixteen. After my first encounters with action potentials and thermodynamics in high school classrooms, I studied neurobiology and physics at UC Berkeley, along with a few electrical engineering courses. I earned my bachelor's degree in 2009 and joined the Brains in Silicon lab to deepen my theoretical understanding of the nature of neural computation. Having earned my master's degree in 2011, I am currently pursuing a doctoral degree co-advised by Prof. Surya Ganguli and Prof. Kwabena Boahen.

 

Research Goals


Video: Rate and time coding in an integrate-and-fire neuron. The Time Decoding Machine algorithm recovers (dashed red) the band-limited current signal (blue) injected into the neuron from the intervals between its spikes (blue dots). The signal rides atop a tonic current that increases throughout the video, raising the mean spike rate; the neural code thus transitions smoothly from a timing code to a rate code. The algorithm faithfully recovers the signal over this entire range, becoming accurate as soon as the spike rate exceeds the Nyquist limit.

One of my research goals is to study principles of neural computation without making prior assumptions about the neural code. Whether neurons encode information in their spike rates or in their spike times is a hotly debated issue in computational neuroscience. Under the influence of such debates, choosing an appropriate neural code became a prerequisite for theoretical treatments of neural computation. However, experiments have demonstrated that biological neurons can operate with either code depending on the brain area they are from, suggesting that the spiking neuron as a computational unit does not fundamentally restrict neural networks to a single code. In fact, rate and timing codes may simply be the extremes of a continuous spectrum. Recent developments in Time Encoding and Decoding Machines (deterministic) and in Generalized Linear Models (probabilistic) provide a way to decode information from spikes using only the dynamics of the spike-generation mechanism, without a prior assumption about the neural code. Taking advantage of their coding generality, I am able to construct neural networks that carry out a variety of computational tasks throughout the coding spectrum, and then to study the underlying computational principles that are invariant to the choice of neural code.
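To make this concrete, below is a minimal sketch of a Time Encoding/Decoding Machine pair in the spirit of Lazar and Toth's formulation, which is also what the video above illustrates: an ideal integrate-and-fire neuron turns a band-limited signal into spike times, and the decoder recovers the signal from the inter-spike intervals by solving a linear system over sinc kernels. The key observation is the t-transform: each inter-spike interval yields one linear measurement of the input, so recovery needs no assumption about rate versus timing codes. All function names, parameter names, and values here are illustrative, not those used in my own work.

import numpy as np
from scipy.special import sici  # sici(x) returns (Si(x), Ci(x)), the sine/cosine integrals

# --- Time Encoding Machine: ideal integrate-and-fire neuron ---
def iaf_encode(u, dt, bias=1.0, kappa=1.0, delta=0.005):
    """Integrate (bias + u)/kappa and emit a spike whenever delta is reached."""
    spikes, y = [], 0.0
    for i, ui in enumerate(u):
        y += (bias + ui) * dt / kappa
        if y >= delta:
            spikes.append(i * dt)
            y -= delta
    return np.asarray(spikes)

# --- Time Decoding Machine: sinc-kernel reconstruction ---
def iaf_decode(spikes, t, omega, bias=1.0, kappa=1.0, delta=0.005):
    """Recover a signal band-limited to omega from inter-spike intervals.

    The t-transform gives one linear measurement per interval:
    the integral of u over [t_k, t_{k+1}] equals kappa*delta - bias*(t_{k+1} - t_k).
    """
    s = 0.5 * (spikes[:-1] + spikes[1:])         # sinc-kernel centers (interval midpoints)
    q = kappa * delta - bias * np.diff(spikes)   # linear measurements of u
    Si = lambda x: sici(x)[0]
    # G[k, l] = integral of g(t - s_l) over [t_k, t_{k+1}], with g(t) = sin(omega t) / (pi t)
    G = (Si(omega * (spikes[1:, None] - s)) - Si(omega * (spikes[:-1, None] - s))) / np.pi
    c = np.linalg.pinv(G) @ q                    # least-squares kernel coefficients
    return (omega / np.pi) * np.sinc(omega * (t[:, None] - s) / np.pi) @ c

# Example: encode and recover a band-limited test signal
dt, omega = 1e-4, 2 * np.pi * 30                 # signal band-limited to 30 Hz
t = np.arange(0, 0.5, dt)
u = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 13 * t + 1.0)
u_hat = iaf_decode(iaf_encode(u, dt), t, omega)
print(np.max(np.abs(u - u_hat)[1000:-1000]))     # small error away from the window edges

With these illustrative parameters the mean spike rate is roughly 200 Hz, well above the 60 Hz Nyquist rate of the signal, so the reconstruction is accurate; as the video shows, accuracy is governed by the spike rate relative to the Nyquist limit rather than by a choice of rate or timing code.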

Whether wiring circuits in the brain or constructing artificial networks in software or hardware, practical considerations of wiring economy, memory capacity, component heterogeneity, functional robustness, and matching to experimental observations raise questions about the topologies that underlie these networks. If computation is a network's function, topology is its structure, and the relation between structure and function is one of the fundamental themes of modern biology. Theoretical treatments of this relation in computational neuroscience have produced many results on the properties of random networks defined by topological parameters (e.g., sparsity, connection-weight distribution). But ensembles of networks grouped together by similar function remain largely unexplored. My goal is to search for and to study topological invariances in such ensembles. In other words, I ask: which transformations of a network's connectivity keep its computational function intact? Once such invariances are characterized, the practical considerations above can be formulated as constrained optimizations over the topological spaces defined by the invariant transformations.
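As a toy illustration of one such function-preserving transformation, consider a linear rate network x' = -x + Wx + Bu with readout y = Cx (the linear setting is a deliberate simplification, and all names here are illustrative): any invertible change of coordinates P, applied as W -> PWP^-1, B -> PB, C -> CP^-1, yields a differently wired network that computes exactly the same input-output function.

import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 8, 1e-3, 2000
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable random connectivity
B = rng.standard_normal((n, 1))                      # input weights
C = rng.standard_normal((1, n))                      # readout weights
P = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned coordinate change
Wt, Bt, Ct = P @ W @ np.linalg.inv(P), P @ B, C @ np.linalg.inv(P)

def simulate(W, B, C, u, dt=dt):
    """Euler-integrate x' = -x + W x + B u and return the readout y = C x."""
    x, ys = np.zeros((W.shape[0], 1)), []
    for ut in u:
        x = x + dt * (-x + W @ x + B * ut)
        ys.append((C @ x).item())
    return np.array(ys)

u = rng.standard_normal(T)                 # a common input stream
y = simulate(W, B, C, u)
yt = simulate(Wt, Bt, Ct, u)
print(np.max(np.abs(y - yt)))              # ~0 up to floating point: same function, different wiring

This similarity class is the trivial, fully characterized case; the open question is which analogous invariances survive once spiking dynamics and the practical constraints listed above are imposed, and what the resulting topological spaces look like.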

Publications

J36  P Gao, B V Benjamin and K Boahen, Dynamical System Guided Mapping of Quantitative Neuronal Models onto Neuromorphic Hardware, IEEE Transactions on Circuits and Systems I, in press.

C41  S Choudhary, S Sloan, S Fok, A Necker, E Trautmann, P Gao, T Stewart, C Eliasmith and K Boahen, Silicon Neurons that Compute, International Conference on Artificial Neural Networks, LNCS vol VV, pp XX-YY, Springer, Heidelberg, 2012, in press.

C40  B V Benjamin, J V Arthur, P Gao, P Merolla and K Boahen, Superposable Silicon Synapse with Programmable Reversal Potential, International Conference of the IEEE Engineering in Medicine and Biology Society, pp XXX-YYY, 2012, in press.