Stat 314A: Syllabus
Here is a rough syllabus; the precise schedule will depend on our progress in class, and we may not be able to cover all of the topics listed below.
Sep 24, 26
General setting, models and examples. Generalized single index models. Universality.
Hu H, Lu YM. Universality laws for high-dimensional learning with random
features. IEEE Transactions on Information Theory. 2022.
Montanari A, Saeed BN. Universality of empirical risk minimization. Conference on Learning Theory, 2022.
Han Q, Shen Y. Universality of regularized regression estimators in high dimensions.
The Annals of Statistics. 2023 Aug;51(4):1799-823.
Montanari A, Ruan F, Saeed B, Sohn Y. Universality of max-margin classifiers. arXiv preprint arXiv:2310.00176.
Oct 1, 3
Universality, continued. Gaussian comparison inequalities.
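As a pointer to what "Gaussian comparison" means here: the convex Gaussian min-max theorem (CGMT) of Thrampoulidis, Oymak, and Hassibi, building on Gordon's inequality, compares a primary and an auxiliary random optimization problem. A sketch (see the Oct 8 references for precise conditions):

```latex
% G an n x p matrix with iid N(0,1) entries, g ~ N(0, I_n), h ~ N(0, I_p)
% independent of G, psi continuous on compact sets S_w, S_u:
\Phi(G) = \min_{w \in S_w} \max_{u \in S_u} \; u^\top G w + \psi(w, u),
\qquad
\phi(g, h) = \min_{w \in S_w} \max_{u \in S_u} \; \|w\|_2 \, g^\top u
  + \|u\|_2 \, h^\top w + \psi(w, u).
% Gordon's comparison gives, for every t,
\mathbb{P}\big(\Phi(G) < t\big) \le 2\, \mathbb{P}\big(\phi(g, h) \le t\big),
% and when S_w, S_u are convex and psi is convex-concave, also
\mathbb{P}\big(\Phi(G) > t\big) \le 2\, \mathbb{P}\big(\phi(g, h) \ge t\big).
```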
Oct 8, 10
Convex Empirical Risk Minimization. Examples: The Lasso, Logistic regression, Max Margin classification.
Gordon Y. Some inequalities for Gaussian processes and applications.
Israel Journal of Mathematics. 1985 Dec;50:265-89.
Gordon Y. On Milman’s inequality and random subspaces which escape through a mesh in R^n.
In J. Lindenstrauss and V. D. Milman, editors, Geometric Aspects of Functional Analysis, pages 84-106.
Springer, Berlin, Heidelberg, 1988.
Chandrasekaran V, Recht B, Parrilo PA, Willsky AS. The convex geometry of linear inverse problems.
Foundations of Computational Mathematics. 2012 Dec;12(6):805-49.
Thrampoulidis C, Oymak S, Hassibi B. Regularized linear regression: a precise analysis of the
estimation error. Conference on Learning Theory, 2015:1683-709.
Thrampoulidis C, Abbasi E, Hassibi B. Precise error analysis of regularized M-estimators in high
dimensions. IEEE Transactions on Information Theory. 2018;64(8):5592-628.
Celentano M, Montanari A, Wei Y. The Lasso with general Gaussian designs with applications
to hypothesis testing. The Annals of Statistics. 2023 Oct;51(5):2194-220.
Montanari A, Ruan F, Sohn Y, Yan J. The generalization error of max-margin linear classifiers:
Benign overfitting and high dimensional asymptotics in the overparametrized regime.
arXiv preprint arXiv:1911.01544.
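For concreteness, the Lasso estimator analyzed in the papers above can be computed with a few lines of proximal gradient descent (ISTA). The instance below, with dimensions and tuning chosen purely for illustration, recovers a sparse vector from an iid Gaussian design:

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize 0.5 * ||y - X b||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    p = X.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Illustrative sparse-recovery instance: n samples, p features, s-sparse signal.
rng = np.random.default_rng(0)
n, p, s = 200, 400, 10
X = rng.standard_normal((n, p)) / np.sqrt(n)   # columns of roughly unit norm
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + 0.01 * rng.standard_normal(n)
beta_hat = lasso_ista(X, y, lam=0.02)
```

In the proportional regime studied in class (n, p growing with n/p fixed), the risk of this estimator admits the sharp characterizations given in the references above.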
Oct 15, 17
Exact recovery thresholds, noise sensitivity, connections with Gaussian widths.
Oct 22, 24
Approximate Message Passing algorithms. State evolution.
Gap between optimal convex M-estimator and optimal polynomial time estimator.
Bayati M, Montanari A. The dynamics of message passing on dense graphs, with
applications to compressed sensing. IEEE Transactions on Information Theory.
2011 Jan 20;57(2):764-85.
Bayati M, Lelarge M, Montanari A. Universality in polytope phase transitions
and message passing algorithms. Annals of Applied Probability. 2015;25(2):753-822.
Chen WK, Lam WK. Universality of approximate message passing algorithms.
Electronic Journal of Probability. 2021 Mar 23;26:36.
Wang T, Zhong X, Fan Z. Universality of approximate message passing algorithms and
tensor networks. The Annals of Applied Probability. 2024 Aug;34(4):3943-94.
Fan Z. Approximate message passing algorithms for rotationally invariant matrices.
The Annals of Statistics. 2022 Feb;50(1):197-224.
Li G, Wei Y. A non-asymptotic framework for approximate message passing in spiked models.
arXiv preprint arXiv:2208.03313.
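A minimal sketch of the AMP iteration for noisy compressed sensing, in the Bayati-Montanari setting, with a soft-thresholding denoiser and the Onsager correction term that distinguishes it from plain iterative thresholding (the threshold constant and problem sizes here are illustrative, not optimized):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, theta=1.5, n_iter=30):
    """AMP for y = A x0 + w with a soft-thresholding denoiser.

    theta is the threshold in units of the effective noise level tau_t,
    which state evolution tracks across iterations.
    """
    n, p = A.shape
    delta = n / p                       # undersampling ratio
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                 # effective observation: approx x0 + tau * noise
        tau = np.sqrt(np.mean(z ** 2))  # empirical effective noise level
        x = soft_threshold(r, theta * tau)
        # Onsager correction: (1/delta) times the average denoiser derivative.
        onsager = np.mean(x != 0.0) / delta
        z = y - A @ x + onsager * z
    return x

rng = np.random.default_rng(1)
n, p, s = 250, 500, 15
A = rng.standard_normal((n, p)) / np.sqrt(n)
x0 = np.zeros(p)
x0[:s] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(n)
x_hat = amp(A, y)
```

The Onsager term keeps the residual z approximately Gaussian at each iteration, which is what makes the scalar state-evolution recursion for tau_t exact in the high-dimensional limit.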
Oct 29, 31
Weak recovery thresholds, connection with spectral methods.
Mondelli M, Montanari A. Fundamental Limits of Weak Recovery with Applications to Phase
Retrieval. Foundations of Computational Mathematics. 2019 Jun 15;19:703-73.
Chen Y, Chi Y, Fan J, Ma C. Spectral methods for data science: A statistical perspective.
Foundations and Trends in Machine Learning. 2021 Oct 20;14(5):566-806.
Mondelli M, Venkataramanan R. Approximate message passing with spectral initialization for generalized linear models. In International Conference on Artificial Intelligence and Statistics, 2021 (pp. 397-405). PMLR.
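The spectral estimators studied in these papers reduce, in the simplest case, to a top-eigenvector computation. The sketch below uses the plain preprocessing T(y) = y for noiseless phase retrieval (the optimal preprocessing of Mondelli-Montanari is different), implemented via power iteration:

```python
import numpy as np

def spectral_top_eigvec(A, y, n_iter=200, seed=0):
    """Top eigenvector of D = (1/n) * sum_i y_i a_i a_i^T via power iteration."""
    n, d = A.shape
    v = np.random.default_rng(seed).standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = A.T @ (y * (A @ v)) / n    # apply D without forming the d x d matrix
        v /= np.linalg.norm(v)
    return v

# Phase retrieval instance: y_i = (a_i^T x)^2, x recoverable only up to sign.
rng = np.random.default_rng(2)
n, d = 4000, 50
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
A = rng.standard_normal((n, d))
y = (A @ x) ** 2
v = spectral_top_eigvec(A, y)
overlap = abs(v @ x)                   # close to 1 when n/d is large
```

For Gaussian a_i and ||x|| = 1, the expectation of D is I + 2 x x^T, so the top eigenvector aligns with x once n/d is large enough; the references above characterize the sharp threshold at which this alignment first becomes nontrivial.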
Nov 7
Analysis of spectral methods.
Bai Z, Silverstein JW. Spectral analysis of large dimensional random matrices.
New York: Springer; 2010 Feb.
Couillet R, Liao Z. Random matrix methods for machine learning. Cambridge University Press; 2022 Jul 21.
Nov 12, 14
Sharp asymptotics for Bayes optimal estimation. Information-computation gap.
Reeves G, Pfister HD. The replica-symmetric prediction for random linear estimation with
Gaussian matrices is exact. IEEE Transactions on Information Theory. 2019 Jan 10;65(4):2252-83.
Barbier J, Krzakala F, Macris N, Miolane L, Zdeborova L.
Optimal errors and phase transitions in high-dimensional generalized linear models.
Proceedings of the National Academy of Sciences. 2019 Mar 19;116(12):5451-60.
Barbier J, Macris N. The adaptive interpolation method: a simple scheme to prove replica
formulas in Bayesian inference. Probability Theory and Related Fields. 2019 Aug;174:1133-85.
Dominguez T, Mourrat JC. Statistical mechanics of mean-field disordered systems:
a Hamilton-Jacobi approach. arXiv preprint arXiv:2311.08976. 2023 Nov 15.
Nov 19, 21
Non-isotropic models, benign overfitting.
Cheng C, Montanari A. Dimension free ridge regression. arXiv preprint arXiv:2210.08571. 2022 Oct 16.
Dec 3, 5
Non-convex ERM problems and multi-index models.