Alex Neckar
Architecture: Synaptic Connections

I became an engineer because machines fascinated me, and I specialized in Computer Engineering and Computer Architecture because I thought there was something both unique and sublime about machines that manipulate information. However, for all our ingenuity, humans have yet to produce a computer that rivals our own brains in capability, adaptability, complexity, and efficiency. Nature has provided us with an amazing blueprint, one that Neuroscience deciphers a little more of every day. My research goal is to translate these clues about how our brains compute into machines built on the same paradigms, ultimately marrying Neuroscience with Computer Architecture. I feel the time is ripe to build a computational device whose hardware is inspired by the 3 lb, 12 W supercomputer everyone carries.

Currently, my main project is NEF in Neurogrid. The Neural Engineering Framework (NEF) is a biologically plausible model for performing general computation with spiking neural networks, developed by Eliasmith and Anderson and described in their book, Neural Engineering. Neurogrid is Brains in Silicon's networked set of silicon neuron pools. NEF in Neurogrid seeks to map the NEF's computational models onto Neurogrid's hardware, using an FPGA to implement the weighted connections the NEF requires. Ultimately, these two components will be unified into a single chip designed expressly as a hardware implementation of the NEF.

My work investigates what the architecture of such a chip will look like, examining and exploiting the NEF's localities, parallelisms, and inherent organization. To be useful, the architecture must implement the networks the NEF describes in a high-throughput, low-memory-footprint, and scalable fashion. So far, I have developed a highly scalable spike-routing algorithm for NEF networks.
Each time a neuron spikes, variable input must be delivered to its postsynaptic connections, governed by the synaptic strengths of those connections. Because of the NEF's all-to-all connectivity between layers, naïve spike-routing implementations run in O(N) time, where N is the number of neurons in each layer, since each connection weight must be retrieved from memory. With N neurons spiking continuously and each spike requiring an O(N) operation, the size of the largest network you can implement is severely constrained.

My method exploits conditional probability to achieve better performance. Normally, NEF connection weights denote the amplitude of the input that postsynaptic neurons receive when the presynaptic neuron spikes. The first insight is to instead treat the weights as the probabilities of sending a fixed-strength input to the postsynaptic neurons. Treating each of these probabilistic transmissions as an event, we can calculate the probability that a given number of events will occur. Given the number of events as a prior, we can also calculate the conditional probabilities that particular postsynaptic neurons are targeted by those events. By sampling from these two distributions in succession, my method performs the same operation as the naïve implementation, but runs much more quickly.

Sampling from probability distributions in hardware is akin to throwing darts at a dartboard. Each possible outcome (the number of fixed-strength inputs to send, or the target to send an input to) is given a section of the dartboard sized according to its probability mass. A dart is thrown by generating a random number. The dartboards themselves are stored as arrays containing the entries of the cumulative distribution function (CDF) associated with the probability mass values. To find out which section of the board the dart landed in, we search these values.
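A minimal Python sketch of the two-stage sampling described above (all names are illustrative, not Neurogrid's implementation; the event-count distribution is computed here by dynamic programming, whereas in hardware it would be precomputed, and sampling targets independently in proportion to their weights is a simplification of the exact conditional distribution):

```python
import bisect
import random

def build_cdf(pmf):
    """Turn a probability mass function (list of masses) into a CDF array."""
    cdf, total = [], 0.0
    for p in pmf:
        total += p
        cdf.append(total)
    return cdf

def throw_dart(cdf):
    """Draw a uniform random number and find the CDF section it lands in."""
    return bisect.bisect_right(cdf, random.random() * cdf[-1])

def event_count_pmf(weights):
    """Distribution of the number of transmission events when each weight
    is read as an independent send probability (a Poisson binomial),
    built up one synapse at a time."""
    pmf = [1.0]
    for w in weights:
        nxt = [0.0] * (len(pmf) + 1)
        for k, p in enumerate(pmf):
            nxt[k] += p * (1.0 - w)   # this synapse does not transmit
            nxt[k + 1] += p * w       # this synapse transmits
        pmf = nxt
    return pmf

def route_spike(weights, fixed_strength=1.0):
    """Route one presynaptic spike: first sample how many fixed-strength
    inputs to send, then sample which postsynaptic neurons receive them."""
    k = throw_dart(build_cdf(event_count_pmf(weights)))
    target_cdf = build_cdf(weights)
    inputs = {}
    for _ in range(k):
        tgt = throw_dart(target_cdf)
        inputs[tgt] = inputs.get(tgt, 0.0) + fixed_strength
    return inputs
```

Note that zero-weight connections occupy zero-width dartboard sections, so they are never sampled and never cost a memory access.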
Since a CDF is a sorted list, we can perform a binary search, which gives the algorithm an overall O(log N) runtime. This opens the door to larger, more functional networks.
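The lookup itself can be written as an explicit binary search over the stored CDF array; this is a hypothetical sketch of what such a search does, not the hardware's actual circuitry:

```python
def find_section(cdf, dart):
    """Return the index of the first CDF entry strictly greater than the
    dart value, i.e., the dartboard section the dart landed in.
    Runs in O(log N) comparisons on an N-entry CDF array."""
    lo, hi = 0, len(cdf)
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf[mid] > dart:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, on a three-section board with CDF [0.2, 0.5, 1.0], a dart at 0.1 lands in section 0 and a dart at 0.6 lands in section 2.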
