Andrew M. Saxe

PhD student, Department of Electrical Engineering, Stanford University

Trainee, Center for Mind, Brain, and Computation

Research Associate, Keck Center for Integrative Neuroscience, UCSF

asaxe@stanford.edu

 

Advisors: Jay McClelland, Andrew Ng, Christoph Schreiner

Support: NDSEG and honorary Stanford Graduate fellowships

Education: BSE in Electrical Engineering, Princeton University (summa cum laude)


Research interests: I am interested in the theory of deep learning and its application to phenomena in neuroscience and psychology.

Publications:

 

Saxe, A.M., McClelland, J.L., & Ganguli, S. (2013). Dynamics of learning in deep linear neural networks. In NIPS 2013 Workshop on Deep Learning. (pdf) (supplementary material)

Saxe, A.M., McClelland, J.L., & Ganguli, S. (2013). Learning hierarchical category structure in deep networks. In CogSci 2013. (pdf)

Saxe, A.M., McClelland, J.L., & Ganguli, S. (2013, February). A Mathematical Theory of Semantic Development. Poster at COSYNE 2013, Salt Lake City. (pdf)

Saxe, A., Bhand, M., Mudur, R., Suresh, B., & Ng, A. (2011). Unsupervised learning models of primary cortical receptive fields and receptive field plasticity. In NIPS 2011. (pdf) (supplementary material) (data upon request)

Saxe, A., Koh, P.W., Chen, Z., Bhand, M., Suresh, B., & Ng, A. (2011). On random weights and unsupervised feature learning. In ICML 2011. (pdf) (code)

Saxe, A., Bhand, M., Mudur, R., Suresh, B., & Ng, A. (2011, February). Modeling cortical representational plasticity with unsupervised feature learning. Poster at COSYNE 2011, Salt Lake City. (pdf)

Saxe, A., Koh, P.W., Chen, Z., Bhand, M., Suresh, B., & Ng, A. (2010). On random weights and unsupervised feature learning. In NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. (pdf) (supplementary material) (code)

Balci, F., Simen, P., Niyogi, R., Saxe, A., Hughes, J. A., Holmes, P., & Cohen, J. D. (2010). Acquisition of decision making criteria: Reward rate ultimately beats accuracy. Attention, Perception, & Psychophysics, 1–18. (pdf)

Goodfellow, I. J., Le, Q. V., Saxe, A. M., Lee, H., & Ng, A. Y. (2009). Measuring invariances in deep networks. In NIPS 2009. (pdf)

Baldassano, C. A., Franken, G. H., Mayer, J. R., Saxe, A. M., & Yu, D. D. (2009). Kratos: Princeton University’s entry in the 2008 Intelligent Ground Vehicle Competition. Proceedings of SPIE. (pdf)

Atreya, A. R., Cattle, B. C., Collins, B. M., Essenburg, B., Franken, G. H., Saxe, A. M., et al. (2006). Prospect Eleven: Princeton University's entry in the 2005 DARPA Grand Challenge. Journal of Field Robotics, 23(9), 745–753. (pdf)

 

Software:

 

Object recognition with features from random-weight TCNNs

Matlab maximally informative dimension solver (coming soon)