A memory frontier for complex synapses: from molecular neurobiology to mathematical theory
It is widely thought that our very ability to remember the past over long time scales depends crucially on our ability to modify synapses in our brain in an experience-dependent manner. Classical memory models show that it is possible to store, through local synaptic modification, an extensive number of long-term memories, proportional to the number of synapses. However, such models implicitly assume that synaptic strengths can take any analog value; in contrast, recent experimental work has shown that synapses are more digital than analog, and can only assume a finite number of distinguishable strengths. This one simple fact leads to a catastrophe in memory capacity: classical models with digital synapses have memory capacity proportional only to the logarithm of the number of synapses. This result indicates that our entire theoretical basis for the storage of long-term memories in modifiable synapses is flawed. We show that a way out of this catastrophe is to drastically expand our theoretical conception of a synapse from a single number to an entire stochastic dynamical system in its own right, reflecting the molecular complexity of synaptic signaling. We derive new and improved theoretical upper bounds on synaptic memory capacity over the space of all possible stochastic synaptic dynamical systems of bounded complexity. These results yield new mathematical theorems about stochastic processes, and along the way, we will discuss how our proof methods are related to first passage time theory, web search, protein folding, and our ability to keep our eyes still.
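The logarithmic-capacity catastrophe for digital synapses can be illustrated with a minimal simulation in the spirit of classic binary-synapse models (e.g. Amit and Fusi). This sketch is purely illustrative and is not the speaker's model: `n_syn` binary synapses store a sequence of random patterns by stochastic switching with probability `q`, the trace of the first pattern decays as later patterns overwrite it, and the number of interfering memories the trace survives grows only like log of the number of synapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_signal(n_syn, n_mem, q=0.5):
    """Track the memory trace of the first stored pattern in a population
    of binary (+1/-1) synapses as later patterns are stored on top of it.
    Each synapse switches toward the new pattern with probability q."""
    w = rng.choice([-1, 1], size=n_syn)        # binary synaptic states
    tracked = rng.choice([-1, 1], size=n_syn)  # the memory we follow
    signals = []
    for t in range(n_mem):
        pattern = tracked if t == 0 else rng.choice([-1, 1], size=n_syn)
        flip = rng.random(n_syn) < q           # stochastic update mask
        w = np.where(flip, pattern, w)
        signals.append(np.mean(w * tracked))   # overlap with tracked memory
    return np.array(signals)

def lifetime(n_syn, q=0.5):
    """Expected signal is q*(1-q)**t; readout noise is ~1/sqrt(n_syn).
    The crossing time, i.e. the memory lifetime, scales as log(n_syn)."""
    return np.log(q * np.sqrt(n_syn)) / -np.log(1 - q)
```

Running `simulate_signal` shows the tracked overlap starting near `q` and shrinking by a factor `(1 - q)` with each interfering memory, so doubling the memory lifetime requires squaring, not doubling, the number of synapses.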
Joint work with Subhaneil Lahiri
Surya began his academic career as an undergraduate at MIT, triple majoring in mathematics, physics, and EECS, and then moved to Berkeley to complete a PhD in string theory. There he worked on theories of how the geometry of space and time might holographically emerge from the statistical mechanics of large non-gravitational systems. After this, he chose to pursue the field of theoretical neuroscience, where theories could be tested against experiments. After completing a postdoc at UCSF, he recently started a theoretical neuroscience laboratory at Stanford. He and his lab now study how networks of neurons and synapses cooperate to mediate important brain functions, like sensory perception, motor control, and memory. He has been awarded a Swartz Fellowship in computational neuroscience, a Burroughs Wellcome Career Award at the Scientific Interface, a Terman Award, and an Alfred P. Sloan Foundation fellowship.