Danny Bankman

From Murmann Mixed-Signal Group



SBEE, Massachusetts Institute of Technology, 2012
MSEE, Stanford University, 2015
Admitted to Ph.D. Candidacy: 2013-2014
Email: dbankman AT stanford DOT edu

Research: Charge Domain Signal Processing for Machine Learning

GPU-accelerated machine learning has enabled neural networks to reach the scale necessary to perform useful cognitive tasks, ranging from sorting cucumbers to diagnosing skin cancer [1, 2, 3]. While deep learning has seen widespread use as a cloud-based service, there is now a trend to push artificial intelligence from cloud to edge, driven by limited network bandwidth, latency constraints, user privacy, and global availability [4]. This research focuses on embedded machine learning applications, addressing the circuits, architectures, and algorithms necessary to maximize the cognitive capacity of an energy-constrained intelligent system [5].

It has been demonstrated empirically in [6] that certain small-scale problems can be solved to near state-of-the-art accuracy using very low precision computation (1- to 4-bit weights and activations). Furthermore, it has been shown analytically in [7] that analog circuits can achieve greater power efficiency than digital circuits in low precision signal processing. However, the massive scale of the latest neural networks engineered by the machine learning community favors circuits that reside in the digital VLSI environment, with digital I/O interfaces. Mixed-signal circuits for low precision arithmetic, when arrayed in a regular structure leveraging data parallelism and dense SRAM storage [8], have the potential to significantly reduce the energy consumption of a neural network IC. As a proof of concept, we demonstrated an 8-bit, 16-input switched-capacitor dot product circuit for use in a three-layer neural network for handwritten digit recognition [9, 10]. We are currently working towards demonstrating a complete neural network IC containing a large-scale mixed-signal neuron array. The design is “CMOS-inspired” in the sense that the neural network topology is engineered for structural regularity of the underlying hardware, facilitating routability of interconnect between memory and compute blocks on chip.
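The ideas above can be illustrated with a simple behavioral sketch: weights and activations are uniformly quantized to a few bits, and the dot product is computed by passive charge redistribution, where each product deposits charge on a unit capacitor and charge sharing across all capacitors yields an output voltage proportional to the (scaled) dot product. This is an idealized model for intuition only, with assumed function names and no noise, mismatch, or parasitics; it is not the circuit of [9, 10].

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize values in [-1, 1) to 2**bits levels (illustrative)."""
    levels = 2 ** (bits - 1)
    return np.clip(np.round(x * levels), -levels, levels - 1) / levels

def sc_dot_product(x, w, c_unit=1.0):
    """Idealized charge-redistribution dot product.

    Each product w_i * x_i contributes charge q_i = w_i * x_i * c_unit.
    Passive charge sharing divides the total charge by the summed
    capacitance, so the output voltage is dot(w, x) / N.
    """
    q = w * x * c_unit           # charge sampled onto each unit capacitor
    c_total = c_unit * len(x)    # total capacitance seen after charge sharing
    return q.sum() / c_total     # output voltage = (1/N) * dot(w, x)

rng = np.random.default_rng(0)
x = quantize(rng.uniform(-1, 1, 16), bits=8)   # 16 inputs, 8-bit, as in [10]
w = quantize(rng.uniform(-1, 1, 16), bits=8)
v_out = sc_dot_product(x, w)
```

In this ideal model `v_out` equals `np.dot(w, x) / 16`; the real circuit deviates from this through kT/C noise, capacitor mismatch, and charge injection, which bound the usable precision.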

Switched-Capacitor Dot Product Circuit
(Figures: die photo and block diagram.)


[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances In Neural Information Processing Systems, 2012.
[2] "How a Japanese cucumber farmer is using deep learning and TensorFlow" [Online]. Available: https://cloud.google.com/blog/big-data/2016/08/how-a-japanese-cucumber-farmer-is-using-deep-learning-and-tensorflow.
[3] A. Esteva*, B. Kuprel*, R. Novoa, J. Ko, S. Swetter, H. Blau, S. Thrun, "Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks," Nature, vol. 542, no. 7639, pp. 115-118, February 2 2017.
[4] "Nvidia wants AI to Get Out of the Cloud and Into a Camera, Drone, or Other Gadget Near You" [Online]. Available: http://spectrum.ieee.org/view-from-the-valley/computing/embedded-systems/nvidia-wants-ai-to-get-out-of-the-cloud-into-a-camera-drone-or-other-gadget-near-you.
[5] B. Murmann, D. Bankman, E. Chai, D. Miyashita, and L. Yang, "Mixed-Signal Circuits for Embedded Machine-Learning Applications," Asilomar Conference on Signals, Systems and Computers, Asilomar, CA, Nov. 2015.
[6] I. Hubara*, M. Courbariaux*, D. Soudry, R. El-Yaniv, Y. Bengio, "Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations," in arXiv preprint: arXiv:1609.07061v1, 2016.
[7] E. A. Vittoz, “Future of analog in the VLSI environment,” in IEEE International Symposium on Circuits and Systems, 1990, pp. 1372–1375.
[8] L. Yang and B. Murmann, "SRAM Voltage Scaling for Energy-Efficient Convolutional Neural Networks," International Symposium on Quality Electronic Design (ISQED), Santa Clara, CA, Mar. 2017, pp. 7-12.
[9] D. Bankman and B. Murmann, "Passive charge redistribution digital-to-analogue multiplier," Electronics Letters, vol. 51, no. 5, pp. 386-388, March 5 2015.
[10] D. Bankman and B. Murmann, "An 8-Bit, 16 Input, 3.2 pJ/op Switched-Capacitor Dot Product Circuit in 28-nm FDSOI CMOS," Proc. IEEE Asian Solid-State Circuits Conf., Toyama, Japan, Nov. 2016, pp. 21-24.
