Large-scale convex optimization

Advances in randomized numerical linear algebra (randNLA) improve the runtime of a variety of fundamental linear algebraic operations. The resulting advances in low-rank matrix approximation have important implications throughout optimization: many subproblems admit speedups when low-rank approximations replace data matrices, problem variables, or Hessians. Our lab has used fast low-rank approximations to accelerate cross-validation, dynamic optimization, optimal sensing, semidefinite programming, linear system solvers, composite optimization, conic optimization, and stochastic optimization. For example, we developed a low-rank preconditioner for solving linear systems (NystromPCG) and used it to speed up solution times for well-studied statistical learning problems such as LASSO, SVM, and logistic regression by an order of magnitude (NysADMM and GeNIOS).
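To make the low-rank preconditioning idea concrete, here is a minimal sketch of a randomized Nyström preconditioner applied to conjugate gradient, in the spirit of NystromPCG (see "Randomized Nyström Preconditioning" in the papers below). The function names, rank choice, shift, and stopping rule are illustrative assumptions, not the reference implementation.

```python
# Sketch of Nystrom-preconditioned CG for (A + mu*I) x = b with A PSD and mu > 0.
# Illustrative only: names and parameter choices are assumptions, not the
# released NystromPCG code.
import numpy as np


def nystrom_approx(A, rank, rng):
    """Randomized Nystrom approximation A ~ U diag(lam) U^T for PSD A."""
    n = A.shape[0]
    Omega = rng.standard_normal((n, rank))
    Omega, _ = np.linalg.qr(Omega)                # orthonormal test matrix
    Y = A @ Omega                                 # sketch of A
    nu = np.sqrt(n) * np.finfo(float).eps * np.linalg.norm(Y)  # small stabilizing shift
    Y_nu = Y + nu * Omega
    L = np.linalg.cholesky(Omega.T @ Y_nu)        # L L^T = Omega^T Y_nu
    B = np.linalg.solve(L, Y_nu.T).T              # B = Y_nu L^{-T}, so A_hat = B B^T
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(s**2 - nu, 0.0)              # remove the shift
    return U, lam


def nystrom_pcg(A, b, mu, rank=50, tol=1e-8, max_iter=500, seed=0):
    """Conjugate gradient on (A + mu*I) x = b, preconditioned by a Nystrom approximation."""
    rng = np.random.default_rng(seed)
    U, lam = nystrom_approx(A, rank, rng)

    def precond(v):
        # P^{-1} v = (lam_min + mu) U (lam + mu)^{-1} U^T v + (v - U U^T v)
        Utv = U.T @ v
        return (lam[-1] + mu) * (U @ (Utv / (lam + mu))) + (v - U @ Utv)

    x = np.zeros_like(b)
    r = b - (A @ x + mu * x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p + mu * p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x


if __name__ == "__main__":
    # Toy ridge-regularized system: (X^T X + mu I) w = X^T y (sizes are arbitrary).
    rng = np.random.default_rng(1)
    X, y = rng.standard_normal((2000, 500)), rng.standard_normal(2000)
    w = nystrom_pcg(X.T @ X, X.T @ y, mu=1e-2, rank=100)
```

Variants of this preconditioner underlie the regularized linear system solves inside the lab's ADMM and interior point solvers (NysADMM, GeNIOS, and the Nyström preconditioned interior point-proximal method of multipliers listed below).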

Talks

Software

  • GeNIOS for composite convex optimization

  • sketchySGD for stochastic optimization

  • PROMISE for faster stochastic optimization when the objective is convex

Papers

Gradient Methods with Online Scaling Part II. Theoretical Foundations
W. Gao, Y. Chu, Y. Ye, and M. Udell
2025
[arxiv][bib]

Gradient Methods with Online Scaling Part I. Theoretical Foundations
W. Gao, Y. Chu, Y. Ye, and M. Udell
2025
[arxiv][bib]

Turbocharging Gaussian Process Inference with Approximate Sketch-and-Project
P. Rathore, Z. Frangella, S. Garg, S. Fazliani, M. Dereziński, and M. Udell
NeurIPS, 2025
[arxiv][url][bib]

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent
Y. Chu, W. Gao, Y. Ye, and M. Udell
International Conference on Machine Learning (ICML), 2025
[arxiv][url][bib]

SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning
J. Sun, Z. Frangella, and M. Udell
Major revision at SIAM Journal on Mathematics of Data Science (SIMODS), 2025
[arxiv][url][bib]

Gradient Methods with Online Scaling
W. Gao, Y. Chu, Y. Ye, and M. Udell
Conference on Learning Theory (COLT), 2025
[arxiv][url][bib]

Have ASkotch: A Neat Solution for Large-scale Kernel Ridge Regression
P. Rathore, Z. Frangella, J. Yang, M. Dereziński, and M. Udell
Submitted, 2025
[arxiv][url][bib]

When Does Primal Interior Point Method Beat Primal-dual in Linear Optimization?
W. Gao, H. Liu, Y. Ye, and M. Udell
Submitted, 2024
[arxiv][url][bib]

Randomized Nyström Preconditioned Interior Point-Proximal Method of Multipliers
Y. Chu, L. Santos, and M. Udell
Accepted to SIAM Journal on Scientific Computing (SISC), 2024
[arxiv][bib]

PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
P. Rathore, Z. Frangella, and M. Udell
JMLR, 2024
[arxiv][url][bib]

SketchySGD: Reliable Stochastic Optimization via Robust Curvature Estimates
Z. Frangella, P. Rathore, S. Zhao, and M. Udell
SIAM Journal on Mathematics of Data Science (SIMODS), 2024
[arxiv][url][bib]

Scalable Approximate Optimal Diagonal Preconditioning
W. Gao, Z. Qu, M. Udell, and Y. Ye
Submitted, 2023
[arxiv][bib]

GeNIOS: an (almost) second-order operator-splitting solver for large-scale convex optimization
T. Diamandis, Z. Frangella, S. Zhao, B. Stellato, and M. Udell
Accepted at Math Programming Computation, 2023
[arxiv][url][bib]

On the (linear) convergence of Generalized Newton Inexact ADMM
Z. Frangella, S. Zhao, T. Diamandis, B. Stellato, and M. Udell
Submitted, 2023
[arxiv][url][bib]

Randomized Numerical Linear Algebra for Optimization
M. Udell and Z. Frangella
SIAG/OPT Views and News, 2022
[pdf][url][bib][video]

NysADMM: faster composite convex optimization via low-rank approximation
S. Zhao, Z. Frangella, and M. Udell
International Conference on Machine Learning (ICML), 2022
[arxiv][url][bib]

Randomized Nyström Preconditioning
Z. Frangella, J. A. Tropp, and M. Udell
SIAM Journal on Matrix Analysis and Applications, 2022
[arxiv][url][bib]

A Strict Complementarity Approach to Error Bound and Sensitivity of Solution of Conic Programs
L. Ding and M. Udell
Optimization Letters, 2022
[arxiv][pdf][url][bib]

On the simplicity and conditioning of low rank semidefinite programs
L. Ding and M. Udell
SIAM Journal on Optimization (SIOPT), 2021
[arxiv][url][bib]

Scalable Semidefinite Programming
A. Yurtsever, J. Tropp, O. Fercoq, M. Udell, and V. Cevher
SIAM Journal on Mathematics of Data Science (SIMODS), 2021
[arxiv][pdf][bib]

Randomized Sketching Algorithms for Low-Memory Dynamic Optimization
R. Muthukumar, D. Kouri, and M. Udell
SIAM Journal on Optimization (SIOPT), 2021
[pdf][url][bib]

An Optimal-Storage Approach to Semidefinite Programming using Approximate Complementarity
L. Ding, A. Yurtsever, V. Cevher, J. Tropp, and M. Udell
SIAM Journal on Optimization (SIOPT), 2021
Winner of 2017 INFORMS Optimization Society student paper prize
[arxiv][pdf][slides][bib]

kFW: A Frank-Wolfe style algorithm with stronger subproblem oracles
L. Ding, J. Fan, and M. Udell
In revision at Computational Optimization and Applications, 2020
[arxiv][url][bib]

Frank-Wolfe Style Algorithms for Large Scale Optimization
L. Ding and M. Udell
Large-Scale and Distributed Optimization, 2018
[arxiv][pdf][bib]

Sketchy Decisions: Convex Low-Rank Matrix Optimization with Optimal Storage
A. Yurtsever, M. Udell, J. Tropp, and V. Cevher
International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
Oral Presentation
[arxiv][pdf][url][slides][bib][video]

The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM
D. Davis, B. Edmunds, and M. Udell
Advances in Neural Information Processing Systems, 2016
[arxiv][pdf][bib]