From Murmann Mixed-Signal Group
SBEE, Massachusetts Institute of Technology, 2012
MSEE, Stanford University, 2015
Admitted to Ph.D. Candidacy: 2013-2014
Email: dbankman AT stanford DOT edu
Research: Charge Domain Signal Processing for Machine Learning
In the past decade, we have witnessed the beginning of rapid growth in electronic systems possessing artificial intelligence. Self-parking cars became available for the first time in North America in 2006, and the 2011 release of Apple's Siri marked the beginning of ubiquitous speech interfaces in smartphones. Today, self-driving cars are under development by several automotive and tech companies (Mercedes, Tesla, Google), and real-time speech translation is on the horizon for Skype. Common to these systems is the ability to understand sensory data (images and sounds), which as a computing problem lends itself well to a data-driven approach. To date, deep neural networks have set the state of the art in image classification and speech recognition, although the machine learning community is still largely unsure of why they work so well, much as the link between the neuron and human thought remains a mystery to neuroscientists.
A deep neural network as a pattern classifier is a function from a vector (pixels of an image, samples of an audio spectrum) to a set of labels (cat, dog, "hello", "goodbye"). This function is parametrized by a set of weights that model the connections between neurons in a topological arrangement whose complexity depends on the task at hand. The learning process consists of adjusting the weights until the network classifies inputs from a training data set with some specified accuracy. Mathematically, the network classifies an input with a sequence of matrix multiplications, applying a simple nonlinear function to each intermediate result.
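The "sequence of matrix multiplications, applying a simple nonlinear function to each intermediate result" can be sketched in a few lines. Everything below (layer sizes, random weights, the label set) is illustrative, not taken from any particular network:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))   # first layer: 8 inputs -> 16 hidden units
W2 = rng.standard_normal((4, 16))   # second layer: 16 hidden units -> 4 labels

def relu(x):
    return np.maximum(x, 0.0)       # the simple elementwise nonlinearity

def classify(x, labels=("cat", "dog", "hello", "goodbye")):
    h = relu(W1 @ x)                # matrix multiply, then nonlinearity
    scores = W2 @ h                 # final matrix multiply: one score per label
    return labels[int(np.argmax(scores))]

x = rng.standard_normal(8)          # stand-in for pixels or spectrum samples
print(classify(x))
```

Training would adjust `W1` and `W2` until `classify` agrees with the training labels; here they are simply random.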
The recent top-performing networks rely on tens of millions of weights to classify images taken from one thousand categories [1]. However, neural networks exhibit a resilience to errors in the individual multiplies and adds [2], perhaps due to the distributed nature of the computation. In order to deploy deep neural networks in embedded applications, this error tolerance must be leveraged by low-energy circuits for approximate arithmetic. An opportunity exists to implement the arithmetic with circuits that introduce errors beyond the quantization error incurred by reducing numerical precision in conventional CMOS arithmetic logic. In the SNR regime required for the classification of sensory data, analog arithmetic circuits may hold the key to improved energy efficiency [3].
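One way to compare analog arithmetic against reduced-precision digital arithmetic is through the standard relation between quantizer resolution and SNR for a full-scale sinusoid, SNR ≈ 6.02·N + 1.76 dB. Inverting it gives the effective number of bits (ENOB) that an analog circuit with a given output SNR would match. The numbers below are a generic illustration, not measured results from this project:

```python
# Standard quantization-noise relation for an ideal N-bit quantizer
# driven by a full-scale sinusoid: SNR = 6.02*N + 1.76 dB.

def snr_db(n_bits):
    """Ideal SNR in dB achieved by an n_bits quantizer."""
    return 6.02 * n_bits + 1.76

def enob(snr_db_val):
    """Effective number of bits matching a given SNR in dB."""
    return (snr_db_val - 1.76) / 6.02

print(snr_db(8))      # an ideal 8-bit quantizer reaches about 49.9 dB
print(enob(40.0))     # an analog circuit at 40 dB SNR matches about 6.4 bits
```

If classification accuracy holds up at modest SNR, an analog kernel only needs to clear a few effective bits, which is where its energy advantage over full-precision digital logic would come from.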
We are presently developing internally analog, externally digital arithmetic circuits utilizing passive charge sharing and redistribution [4] to multiply and add. The massive scale of deep neural networks necessitates some form of digital storage, which requires D/A and A/D interfaces at the inputs and outputs of the analog arithmetic. These mixed-signal circuits may provide the efficiency needed to deploy deep neural networks in embedded applications where energy per classification, rather than speed, is of primary importance. We are currently designing a charge-domain dot product kernel, which multiplies at the D/A interface [5], adds with passive charge sharing, and digitizes its final result, allowing the kernel to reside in a digital VLSI environment.
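The addition step can be captured by a simple behavioral model of passive charge sharing (this is a sketch of the principle, not the actual circuit): suppose each input is represented as a voltage on a capacitor whose size encodes a weight, and all capacitors are then shorted together. Charge conservation gives a settled voltage equal to a capacitance-normalized dot product. All component values below are made up for illustration:

```python
import numpy as np

def charge_share_dot(v, c):
    """Settled voltage after shorting capacitors c (each precharged to v).

    By conservation of charge, V_out = sum(c_i * v_i) / sum(c_i):
    a dot product of v and c, normalized by the total capacitance.
    """
    q_total = np.sum(c * v)   # total charge on the array is conserved
    c_total = np.sum(c)       # after sharing, the capacitors sit in parallel
    return q_total / c_total

v = np.array([0.8, 0.2, 0.5])   # input voltages, e.g. D/A converter outputs
c = np.array([4.0, 2.0, 1.0])   # capacitances encoding weights (arbitrary units)
print(charge_share_dot(v, c))   # (3.2 + 0.4 + 0.5) / 7.0, about 0.586
```

Because the operation is passive, the only energy drawn is that used to precharge the capacitors; the sharing itself consumes no supply current, which is the appeal of the charge domain for energy-per-classification.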
[1] A. Krizhevsky, I. Sutskever, and G.E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[2] S. Venkataramani, A. Ranjan, K. Roy, and A. Raghunathan, "AxNN: Energy-Efficient Neuromorphic Systems Using Approximate Computing," Proceedings of the 2014 International Symposium on Low Power Electronics and Design, pp. 27-32, 2014.
[3] E.A. Vittoz, "Future of Analog in the VLSI Environment," IEEE International Symposium on Circuits and Systems, vol. 2, pp. 1372-1375, 1990.
[4] B. Sadhu, M. Sturm, B.M. Sadler, and R. Harjani, "Analysis and Design of a 5 GS/s Analog Charge-Domain FFT for an SDR Front-End," IEEE Journal of Solid-State Circuits, vol. 48, no. 5, pp. 1199-1211, 2013.
[5] D. Bankman and B. Murmann, "Passive Charge Redistribution Digital-to-Analogue Multiplier," Electronics Letters, vol. 51, no. 5, pp. 386-388, 2015.