EE 375/Stat 375 – Mathematical problems in Machine Learning

Andrea Montanari, Stanford University, Spring 2021

This class provides an introduction to theoretical ideas developed with the objective of understanding modern deep learning methods. Specific topics may include:

Empirical risk minimization and empirical process theory:

  • Uniform convergence guarantees (Rademacher complexity)

  • Complexity bounds for neural networks

  • Generalization bounds under weaker conditions
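As a flavor of the uniform convergence results in this part of the course, the classical Rademacher complexity bound (a standard statement, stated here for losses taking values in [0,1]) reads:

```latex
For a class $\mathcal{F}$ of $[0,1]$-valued loss functions and $n$ i.i.d. samples
$z_1,\dots,z_n$, with probability at least $1-\delta$,
\[
\sup_{f \in \mathcal{F}} \Big( R(f) - \hat R_n(f) \Big)
\;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}}\,,
\]
where $R(f) = \mathbb{E}[f(z)]$ is the population risk, $\hat R_n(f) = \frac{1}{n}\sum_{i=1}^n f(z_i)$
is the empirical risk, and the Rademacher complexity is
\[
\mathfrak{R}_n(\mathcal{F}) \;=\; \mathbb{E}\,\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^n \sigma_i f(z_i),
\qquad \sigma_i \ \text{i.i.d. uniform in } \{\pm 1\}.
\]
```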

Implicit regularization:

  • Linear models

  • Examples with nonlinear models

Linear regression in infinite dimension:

  • Ridge regression with random designs

  • Kernel ridge regression

  • Benign overfitting
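A minimal numerical sketch of kernel ridge regression, one of the topics above. This is an illustration only, not course code; the RBF kernel, the regularization level, and the toy one-dimensional target are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # RBF kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-4, gamma=1.0):
    # Kernel ridge regression: alpha solves (K + n*lam*I) alpha = y.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    # Prediction at x is sum_i alpha_i * k(x, x_i).
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy data: noisy sin(3x) on [-1, 1] (illustrative assumption).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)

alpha = krr_fit(X, y, lam=1e-4)
pred = krr_predict(X, alpha, X)
```

With a small regularization parameter the fit nearly interpolates the noisy training data; how such near-interpolating estimators can still generalize is the subject of the "benign overfitting" topic above.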


  • The linear (neural tangent) regime

  • Very wide two-layer networks

  • Beyond the linear regime? Mean field and other approaches

Generalization in the linear regime:

  • The interpolation phase transition

  • Random features models

  • Neural tangent models
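A random features model, in the sense of the topic above, fixes the first-layer weights at random and trains only the second layer by (ridge) regression. A minimal sketch, with ReLU features and an overparameterized setting (more features than samples); all dimensions and the target function are assumptions made for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 200, 5, 400  # samples, input dimension, number of random features

# Toy data: inputs scaled so that X @ ones(d) is approximately N(0, 1).
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sin(X @ np.ones(d))

# Random first-layer weights W are drawn once and frozen.
W = rng.standard_normal((N, d))
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(N)  # ReLU random features

# Only the second-layer coefficients are trained, by ridge regression.
lam = 1e-6
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)

pred = Phi @ a
train_mse = np.mean((pred - y) ** 2)
```

Since N > n, the model can (near-)interpolate the training data even with a vanishing ridge penalty; the generalization behavior of such interpolating solutions, as a function of the ratios N/n and d/n, is what the "interpolation phase transition" topic refers to.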

Open problems

Class Times and Locations

  • Tue/Thu, 12:30–1:50 PM


First lecture on Tuesday, March 30