Alex Neckar
Architecture: Synaptic Connections

Personal Background

I grew up in Saint Paul, Minnesota and earned a BS in Computer Engineering and a BS in Electrical Engineering from Northwestern University. My undergraduate research interests were centered around Computer Architecture, especially thermal-aware architectures. Attracted to Stanford's emphasis on interdisciplinary research, I started my PhD in the fall of 2010. Wanting to apply my knowledge of Computer Architecture and feed my fascination with neuroscience, I joined the Brains in Silicon Lab in the fall of 2011.

Research Goals

I became an engineer because machines fascinated me, and I specialized in Computer Engineering and Computer Architecture because I thought there was something both unique and sublime about machines that manipulate information. However, for all of our ingenuity, humans have yet to produce a computer that rivals our own brains in capability, adaptability, complexity, and efficiency. Nature has provided us with an amazing blueprint, one that Neuroscience deciphers a little more of every day. My research goal is to translate these clues about how our brains perform computation into machines based on the same paradigms, ultimately marrying Neuroscience with Computer Architecture. I feel the time is ripe to build a computational device whose hardware is inspired by the 3 lb, 12-Watt supercomputer that everyone carries.

Currently, my main project is NEF in Neurogrid. The Neural Engineering Framework (NEF) is a biologically plausible model for performing general computation with spiking neural networks, developed by Eliasmith and Anderson and described in their book, Neural Engineering. Neurogrid is Brains in Silicon's networked set of silicon neuron pools. NEF in Neurogrid maps the NEF's computational models onto Neurogrid's hardware, using an FPGA to provide the weighted connections the NEF requires. Ultimately, these two components will be unified into a single chip designed expressly as a hardware implementation of the NEF. My work investigates what the architecture of such a chip should look like, examining and exploiting the NEF's localities, parallelisms, and inherent organization. To be useful, such an architecture must implement the networks the NEF describes with high throughput, a small memory footprint, and good scalability.
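As one illustration of that inherent organization (my own sketch, not a description of the project's hardware design), the NEF's all-to-all weight matrices factor into each pool's encoders and decoders, so the full weight matrix never has to be stored explicitly:

```python
import numpy as np

# Illustrative only: names and sizes are mine. In the NEF, the weight matrix
# between two pools factors into the presynaptic pool's decoders and the
# postsynaptic pool's encoders (and gains).
n_pre, n_post, dims = 1000, 1000, 1
decoders = np.random.randn(n_pre, dims)    # decode pool A's activity into a represented value
encoders = np.random.randn(n_post, dims)   # encode that value into pool B's neurons
gains = np.ones(n_post)

# Materialized form: O(N^2) storage.
full_weights = gains[:, None] * (encoders @ decoders.T)

# Factored form: store only encoders and decoders, O(N * dims) storage.
```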

Project Status

So far, I have developed a highly scalable spike-routing algorithm for NEF networks. Each time a neuron spikes, a variable amount of input must be delivered to each of its postsynaptic targets, determined by the synaptic strength of the connection. Because the NEF specifies all-to-all connectivity between layers, naïve spike-routing implementations take O(N) time per spike, where N is the number of neurons in each layer, since every connection weight must be retrieved from memory. With N neurons spiking continuously and every spike costing O(N) work, the size of the largest network you can implement is severely constrained.
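A minimal sketch of that naïve fan-out (function and variable names are hypothetical):

```python
import numpy as np

def naive_spike_fanout(weight_matrix, presyn_index, postsyn_input):
    """When presynaptic neuron `presyn_index` spikes, read all N of its
    outgoing weights from memory and accumulate them onto the postsynaptic
    inputs -- an O(N) memory-bound operation for every single spike."""
    postsyn_input += weight_matrix[presyn_index, :]
    return postsyn_input
```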

Comparing Deterministic and Probabilistic Weights: Pool A is connected to Pool B1 with deterministic weights and to Pool B3 with probabilistic weights. The individual neurons' firing rates are similar (grids), as are the output values decoded (in a 0.5 second window) from those firing rates (plots). The weights' values were chosen to implement the identity operation: the input value (slider position) is echoed by the output value. Simulations were performed using Nengo.
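For reference, a minimal Nengo model of the deterministic-weights case in the figure (an assumed setup of mine, not the exact script behind it; Nengo's default connection uses standard decoded weights):

```python
import nengo

model = nengo.Network(label="identity channel")
with model:
    stim = nengo.Node(0.5)                       # stands in for the slider position
    pool_a = nengo.Ensemble(n_neurons=100, dimensions=1)
    pool_b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, pool_a)
    nengo.Connection(pool_a, pool_b)             # identity function by default
    probe_b = nengo.Probe(pool_b, synapse=0.01)  # decoded output of pool B

with nengo.Simulator(model) as sim:
    sim.run(0.5)                                 # 0.5 s window, as in the figure
```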

My method takes advantage of conditional probability to increase performance. Normally, NEF connection weights denote the amplitude of the input that postsynaptic neurons receive when the presynaptic neuron spikes. The first insight is to instead treat the weights as the probabilities of sending a fixed-strength input to the postsynaptic neurons. Treating each of these probabilistic transmissions as an event, we can calculate the probability that a given number of events will occur. Given the number of events, we can also calculate the conditional probabilities that particular postsynaptic neurons are targeted by those events. By sampling from these two distributions in succession, my method performs the same operation as the naïve implementation, but runs much more quickly.
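A toy sketch of that two-stage sampling (my own simplification: the Poisson count prior and the sign handling are assumptions, not the exact distributions used):

```python
import numpy as np

def two_stage_fanout(weights, rng, fixed_strength=1.0):
    """Deliver one spike's input probabilistically: sample how many
    fixed-strength events occur, then sample which targets receive them."""
    p_event = np.abs(weights)               # per-connection event probabilities
    k = rng.poisson(p_event.sum())          # stage 1: number of events (assumed prior)
    p_target = p_event / p_event.sum()      # stage 2: conditional target distribution
    targets = rng.choice(len(weights), size=k, p=p_target)
    postsyn_input = np.zeros(len(weights))
    np.add.at(postsyn_input, targets, fixed_strength * np.sign(weights[targets]))
    return postsyn_input
```

In hardware, each of the two draws is performed with the CDF "dartboard" sampler described next rather than a library call, which is what makes every sample cheap.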

Sampling from probability distributions in hardware is akin to throwing darts at a dartboard. Each possible outcome (the number of fixed-strength inputs to send, or the target to send an input to) is given a section of the dartboard sized according to its probability mass. A dart is thrown by generating a random number. The dartboards themselves are stored as arrays containing the entries of the cumulative distribution function (CDF) associated with the probability mass values. To find out which section the dart landed in, we search these values. Since a CDF is a sorted list, we can use binary search, which leads to an overall O(log N) runtime for the algorithm. This opens the door to larger, more functional networks.
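A minimal sketch of the dartboard sampler (function names are mine):

```python
import numpy as np

def build_dartboard(probabilities):
    """Precompute the CDF ('dartboard') once for a distribution."""
    cdf = np.cumsum(probabilities)
    cdf[-1] = 1.0                      # guard against floating-point drift
    return cdf

def throw_dart(cdf, rng):
    """Draw one sample: generate a uniform random number ('throw a dart') and
    binary-search the sorted CDF for the section it lands in -- O(log N)."""
    return int(np.searchsorted(cdf, rng.random(), side="right"))

rng = np.random.default_rng(0)
cdf = build_dartboard([0.1, 0.2, 0.3, 0.4])
outcome = throw_dart(cdf, rng)         # 0, 1, 2, or 3, with the given probabilities
```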

Miscellaneous Facts

I like riding my bike and running near campus, hiking north of San Francisco, and surfing (poorly) in Santa Cruz. My favorite film is usually 2001: A Space Odyssey, but sometimes Lawrence of Arabia. My secret talent is making pottery.