# Danny Bankman

### From Murmann Mixed-Signal Group

BSEE, Massachusetts Institute of Technology, 2012

Admitted to Ph.D. Candidacy: 2013-2014

**Email**: dbankman AT stanford DOT edu

**Research**: *Charge Domain Signal Processing for Machine Learning*

Deep neural networks (DNNs) have recently produced near-human performance in handwritten digit recognition and outperformed humans in traffic sign recognition [1]. These promising results suggest that we will soon begin to see the widespread use of DNNs in our everyday lives, enabling technologies such as self-driving cars and real-time machine translation. The state-of-the-art performance, however, comes at the price of an intensive, battery-draining computational load. In order to deploy DNNs in practical applications, the energy consumption of such systems must be dramatically reduced.

After the learning phase, the majority of the computations performed by a DNN are weighted sums. Depending on the application, these weighted sums can be carried out at far lower numerical precision than 32-bit floating point while maintaining acceptable accuracy. This tolerance for approximate computation should be exploited to reduce the energy burden. At low SNR, analog computation can be more efficient than digital with respect to both power and area [2, 3]. We propose a mixed-signal dot product circuit that employs passive charge sharing and redistribution [4], with the goal of computing weighted sums at significantly lower energy than standard digital CMOS fixed-point multipliers and adders. A top-level schematic diagram of the dot product circuit (implementing the function of a single artificial neuron) is shown below:
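As a numerical illustration of this precision tolerance (not the proposed circuit), the sketch below quantizes the operands of a weighted sum to a low-precision fixed-point grid and compares the result against a full-precision reference. The 4-bit width and the `quantize` helper are illustrative assumptions, not parameters of the actual design:

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    # Illustrative helper: round x to a signed fixed-point grid
    # spanning [-full_scale, full_scale) with the given bit width.
    step = full_scale / (2 ** (bits - 1))
    q = np.round(x / step) * step
    return np.clip(q, -full_scale, full_scale - step)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 256)   # neuron inputs (activations)
w = rng.uniform(-1, 1, 256)   # neuron weights

exact = np.dot(x, w)                             # full-precision reference
approx = np.dot(quantize(x, 4), quantize(w, 4))  # 4-bit operands

print(exact, approx)
```

Running variants of this experiment across bit widths is a quick way to see how much precision a given weighted-sum workload actually needs before accuracy degrades.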

The dot product circuit consists of N multiplier cells, one for each element of the length-N input vectors X and Y. The input vectors are represented digitally in low-precision fixed point. Each multiplier cell is a switched-capacitor circuit that converts the product of its two digital inputs into a charge. The summing operation is performed by connecting the output nodes of all the multiplier cells and allowing the charges to redistribute uniformly. The output voltage representing the dot product is then digitized. The massive scale of DNNs necessitates a digital input/output interface, since the dot product hardware must be time-multiplexed and the neuron weights and activations stored in memory.
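An idealized behavioral model of the charge-sharing step makes the scaling explicit. Each cell is modeled as dumping a charge proportional to its local product onto its output node; shorting the nodes together spreads the total charge over the total capacitance, so the shared-node voltage is a scaled dot product. The unit capacitance, reference voltage, and normalization below are modeling assumptions, and all switch and parasitic nonidealities are ignored:

```python
# Idealized behavioral model of the passive charge-sharing dot product.
# Cell i contributes charge Q_i = C_u * Vref * (x_i * y_i); connecting
# the N output nodes redistributes sum(Q_i) over the total capacitance
# N * C_u, giving V_out = Vref * dot(x, y) / N.

def charge_domain_dot(x, y, c_unit=1e-15, v_ref=1.0):
    assert len(x) == len(y)
    n = len(x)
    charges = [c_unit * v_ref * xi * yi for xi, yi in zip(x, y)]
    v_out = sum(charges) / (n * c_unit)   # uniform charge redistribution
    return v_out                          # = v_ref * dot(x, y) / n

x = [0.5, -0.25, 0.75, 0.0]
y = [0.5, 0.5, -0.5, 1.0]
print(charge_domain_dot(x, y))  # v_ref * (x . y) / 4 = -0.0625
```

Note the built-in division by N: the circuit computes an average rather than a raw sum, so the digitized output must be interpreted (or re-scaled) accordingly.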

[1] D. Ciresan, U. Meier, and J. Schmidhuber, "Multi-column Deep Neural Networks for Image Classification," CVPR, pp. 3642-3649, 2012.

[2] E. A. Vittoz, "Future of analog in the VLSI environment," IEEE International Symposium on Circuits and Systems, vol. 2, pp. 1372-1375, 1990.

[3] R. Sarpeshkar, "Analog versus digital: extrapolating from electronics to neural biology," Neural Computation, vol. 10, no. 7, pp. 1601-1638, 1998.

[4] B. Sadhu, M. Sturm, B. M. Sadler, and R. Harjani, "Analysis and Design of a 5 GS/s Analog Charge-Domain FFT for an SDR Front-End," IEEE J. Solid-State Circuits, vol. 48, no. 5, pp. 1199-1211, 2013.