Danny Bankman

From Murmann Mixed-Signal Group



BSEE, Massachusetts Institute of Technology, 2012
Email: dbankman AT stanford DOT edu

Research: Mixed-Mode Neuro-Inspired Information Processing

Evidence from the neuroscience community suggests that natural images are represented in the primary visual cortex using an efficient code [1]. Neural systems achieve energy efficiency by representing a sensory stimulus in the activity of just a few neurons out of many; in mathematical terms, the neural activity is "sparse". Such an encoding preserves only the critical components of the information in a stimulus and serves as a useful representation for higher-level sensory processing.

The sparse coding approach can be used to extract features from natural images that are useful for higher-level computer vision tasks such as object recognition. Sparse coding represents an image with a small subset of basis functions selected from an over-complete dictionary. The problem of determining the optimally sparse representation of a signal is known as "sparse approximation" and can be solved using convex optimization. CPU-based sparse approximation solvers, however, can be both slow and costly in terms of energy and computing hardware, making them difficult to deploy in practical real-time applications. I aim to implement a recently proposed sparse coding algorithm that is well suited to parallel, distributed hardware [2] using the efficient mixed-mode signal processing techniques available in modern CMOS.
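
For concreteness, the convex formulation referred to above is the L1-penalized least-squares problem min_a (1/2)||x − Φa||² + λ||a||₁, where Φ is the over-complete dictionary and a is the coefficient vector. The sketch below is a minimal NumPy illustration of the locally competitive dynamics proposed in [2] (the Locally Competitive Algorithm, LCA); the dictionary, step size, and threshold used here are arbitrary placeholder values chosen only for illustration, not parameters of any planned circuit implementation.

 import numpy as np
 
 def soft_threshold(u, lam):
     # Soft-thresholding activation: drives small coefficients exactly to zero.
     return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
 
 def lca(x, Phi, lam=0.1, dt=0.01, tau=0.1, n_steps=500):
     """Discrete-time simulation of the LCA dynamics described in [2].
 
     x   : input signal, shape (m,)
     Phi : over-complete dictionary with unit-norm columns, shape (m, n)
     lam : threshold / sparsity penalty
     Returns the sparse coefficient vector a, shape (n,).
     """
     b = Phi.T @ x                            # feed-forward drive for each node
     G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition (competition) weights
     u = np.zeros(Phi.shape[1])               # internal state of each node
     for _ in range(n_steps):
         a = soft_threshold(u, lam)           # active (thresholded) coefficients
         du = (b - u - G @ a) / tau           # leaky integration with local competition
         u = u + dt * du
     return soft_threshold(u, lam)
 
 # Toy usage with a random dictionary (placeholder values, for illustration only)
 rng = np.random.default_rng(0)
 Phi = rng.standard_normal((64, 256))
 Phi /= np.linalg.norm(Phi, axis=0)           # normalize dictionary columns
 x = Phi[:, :3] @ np.array([1.0, -0.5, 0.8])  # signal built from 3 dictionary elements
 a = lca(x, Phi)
 print("nonzero coefficients:", np.count_nonzero(a))

With a soft-thresholding activation, these dynamics descend the L1-penalized objective above, and each node needs only its feed-forward drive Φᵀx plus inhibition from nodes with overlapping basis functions, which is what makes the algorithm a natural fit for parallel mixed-mode hardware.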

[1] B. Olshausen and D. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature, vol. 381, pp. 607–609, 1996.
[2] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen, “Sparse Coding via Thresholding and Local Competition in Neural Circuits,” Neural Computation, vol. 20, pp. 2526–2563, 2008.
