Steven K. Esser's Lecture on Training Deep Networks on the IBM TrueNorth Chip [3], [1], [4], [2]: (PDF), (PDF), (PDF), (PDF), (AUDIO).

References

[1]   Ben Varkey Benjamin, Peiran Gao, Emmett McQuinn, Swadesh Choudhary, Anand Chandrasekaran, Jean-Marie Bussat, Rodrigo Alvarez-Icaza, John V. Arthur, Paul Merolla, and Kwabena Boahen. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proceedings of the IEEE, 102(5):699--716, 2014.

[2]   Mahdi Nazm Bojnordi and Engin Ipek. Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning. In Proceedings of the International Symposium on High Performance Computer Architecture (HPCA), 2016.

[3]   Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron D. Flickner, and Dharmendra S. Modha. Convolutional networks for fast, energy-efficient neuromorphic computing. CoRR, arXiv:1603.08270, 2016.

[4]   Tayfun Gokmen and Yurii Vlasov. Acceleration of deep neural network training with resistive cross-point devices. CoRR, arXiv:1603.07341, 2016.