Theory: Neural Computation
I grew up in Nanjing, China, and moved to Southern California with my family just before I turned sixteen. First introduced to action potentials and thermodynamics in high school classrooms, I went on to study neurobiology and physics at UC Berkeley, along with a few electrical engineering courses. I earned my bachelor's degree in 2009 and joined the Brains in Silicon lab to deepen my theoretical understanding of the nature of neural computation. Having earned my master's degree in 2011, I am currently pursuing a doctoral degree under the co-advisorship of Prof. Surya Ganguli and Prof. Kwabena Boahen.
One of my research goals is to study principles of neural computation without making prior assumptions about the neural code. Whether neurons encode information in their spike rates or in their spike times is a hotly debated issue in computational neuroscience. Under the influence of this debate, choosing an appropriate neural code became a prerequisite for theoretical treatments of neural computation. However, experimental results have demonstrated that biological neurons can operate with either code depending on the brain area they are from, suggesting that the spiking neuron as a computational unit does not fundamentally restrict neural networks to a single code. In fact, the rate and timing codes may simply be the extremes of a continuous spectrum. Recent developments in Time Encoding and Decoding Machines (deterministic) and Generalized Linear Models (probabilistic) provide ways of decoding information from spikes using only the dynamics of the spike generation mechanism, without a prior assumption about the neural code. Taking advantage of their generality in coding, I can construct neural networks that carry out a variety of computational tasks throughout the coding spectrum and then study the underlying computational principles that are invariant to the choice of neural code.
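The coding-spectrum idea can be sketched with a toy Generalized Linear Model neuron (the exponential filter, baseline rate, and gain values below are illustrative assumptions, not taken from the text): a single fixed spike generation mechanism produces loosely timed, rate-like firing at low gain and firing locked sharply to stimulus peaks at high gain.

```python
import numpy as np

rng = np.random.default_rng(0)

def glm_spikes(stimulus, filt, gain, base_rate=20.0, dt=0.001):
    """Spikes from a linear-nonlinear-Poisson (GLM-style) neuron.

    The mechanism never changes; only the gain of the nonlinearity
    does. Low gain spreads spikes out (rate-like), high gain
    concentrates them at stimulus peaks (timing-like).
    """
    drive = np.convolve(stimulus, filt, mode="same")   # linear filtering
    rate = base_rate * np.exp(gain * drive)            # exponential nonlinearity
    return rng.random(len(rate)) < rate * dt           # Poisson spike draw

t = np.arange(0, 1, 0.001)
stimulus = np.sin(2 * np.pi * 3 * t)                   # slow sinusoidal input
filt = np.exp(-np.arange(50) / 10.0)                   # hypothetical exponential filter
filt /= filt.sum()

rate_code = glm_spikes(stimulus, filt, gain=2.0)       # graded, rate-like output
time_code = glm_spikes(stimulus, filt, gain=8.0)       # spikes locked to peaks
```

Sweeping the gain between these two settings traces out intermediate points on the same spectrum, which is the sense in which neither code is built into the neuron itself.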
In wiring circuits in the brain or constructing artificial networks in software or hardware, practical considerations of wiring economy, memory capacity, component heterogeneity, functional robustness, and matching to experimental observations raise questions about the topologies that underlie these networks. If computation is a network's function, topology is its structure. The relation between structure and function is one of the fundamental themes in modern biology. Theoretical treatments of this relation in computational neuroscience have produced many results on the properties of random networks defined by topological parameters (e.g., sparsity, connection weight distribution). But ensembles of networks grouped together by similar function remain an untouched subject. My goal is to search for and study topological invariances in such ensembles. In other words, I ask: which transformations of a network's connectivity keep its computational function intact? Once such invariances are characterized, the practical considerations above can be formulated as constrained optimizations over the topological spaces defined by the invariant transformations.
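A minimal sketch of what such function-preserving transformations can look like, assuming a toy two-layer tanh network (the network and the two transformations are illustrative, not from the text): permuting the hidden units, or flipping the sign of a unit's incoming and outgoing weights, rewires the connectivity yet leaves the input-output map unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network: y = W2 @ tanh(W1 @ x)
n_in, n_hid, n_out = 4, 6, 3
W1 = rng.standard_normal((n_hid, n_in))
W2 = rng.standard_normal((n_out, n_hid))

def network(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

# Invariance 1: relabel the hidden units with a permutation matrix P,
# rewiring both layers consistently.
P = np.eye(n_hid)[rng.permutation(n_hid)]
W1_perm, W2_perm = P @ W1, W2 @ P.T

# Invariance 2: flip the sign of each hidden unit's incoming and
# outgoing weights (tanh is odd, so the two flips cancel).
D = np.diag(rng.choice([-1.0, 1.0], size=n_hid))
W1_flip, W2_flip = D @ W1, W2 @ D

x = rng.standard_normal(n_in)
y = network(x, W1, W2)
assert np.allclose(y, network(x, W1_perm, W2_perm))
assert np.allclose(y, network(x, W1_flip, W2_flip))
```

The set of all such transformations carves the space of weight matrices into equivalence classes of functionally identical networks; a wiring-economy constraint, for example, would then pick the cheapest representative within a class.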