Publications
Google Scholar
Gradient Methods with Online Scaling
- Gradient Methods with Online Scaling.
W. Gao, YC. Chu, Y. Ye, M. Udell.
COLT 2025.
- Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent.
YC. Chu, W. Gao, Y. Ye, M. Udell.
ICML 2025.
- Gradient Methods with Online Scaling. Part I. Theoretical Foundations.
W. Gao, YC. Chu, Y. Ye, M. Udell.
Submitted, 2025.
- Gradient Methods with Online Scaling. Part II. Practical Aspects.
YC. Chu, W. Gao, Y. Ye, M. Udell.
Submitted, 2025.
This series of papers establishes a new mechanism by which online learning algorithms accelerate first-order methods. It also provides the first theoretical analysis of hypergradient descent, a 25-year-old optimization technique for machine learning. Code is available at https://github.com/udellgroup/osgm-best-hypergrad.
Online Learning and Stochastic First-order Methods
- Beyond $\mathcal{O}(\sqrt{T})$ Regret: Decoupling Learning and Decision-Making in Online Linear Programming.
W. Gao, D. Ge, C. Sun, C. Xue, Y. Ye.
Operations Research, 2026.
- Small Gradient Norm Regret for Online Convex Optimization.
W. Gao, C. He, M. Udell.
Submitted, 2026.
- A Smooth Approximation Framework for Weakly Convex Optimization.
Q. Deng, W. Gao.
Submitted, 2025.
- New Results on the Polyak Stepsize: Tight Convergence Analysis and Universal Function Classes.
C. He, W. Gao, B. Jiang, M. Udell, S. Zhang.
Submitted, 2025.
- Wait-Less Offline Tuning and Re-solving for Online Decision Making.
J. Sun, W. Gao, E. Vitercik, Y. Ye.
ICML 2025.
- Decoupling Learning and Decision-Making: Breaking the $\mathcal{O}(\sqrt{T})$ Barrier in Online Resource Allocation with First-Order Methods.
W. Gao, C. Sun, C. Xue, Y. Ye.
ICML 2024.
- Stochastic Weakly Convex Optimization Beyond Lipschitz Continuity.
W. Gao, Q. Deng.
ICML 2024.
- Delayed Algorithms for Distributed Stochastic Weakly Convex Optimization.
W. Gao, Q. Deng.
NeurIPS 2024.
- Solving Linear Programs with Fast Online Learning Algorithms.
W. Gao, D. Ge, C. Sun, Y. Ye.
ICML 2023.
- Minibatch and Momentum Model-based Methods for Stochastic Weakly Convex Optimization.
Q. Deng, W. Gao.
NeurIPS 2021.
Large-scale Numerical Optimization
- Scalable Approximate Optimal Diagonal Preconditioning.
W. Gao, Z. Qu, M. Udell, Y. Ye.
Computational Optimization and Applications, 2026.
- Algorithm 1055: HDSDP – Software for Semidefinite Programming.
W. Gao, D. Ge, Y. Ye.
ACM Transactions on Mathematical Software, 2025.
- On Sinkhorn's Algorithm and Choice Modeling.
Z. Qu, A. Galichon, W. Gao, J. Ugander.
Operations Research, 2025.
- When Does Primal Interior Point Method Beat Primal-Dual in Linear Optimization?
W. Gao, H. Liu, Y. Ye, M. Udell.
Preprint, 2024.
- An Enhanced ADMM-based Interior Point Method for Linear and Conic Optimization.
Q. Deng, Q. Feng, W. Gao et al.
INFORMS Journal on Computing, 2024.
- Data-driven Mixed Integer Optimization through Probabilistic Multi-variable Branching.
Y. Chen, W. Gao, W. Zhang, D. Ge, H. Liu, Y. Ye.
Submitted, 2023.
- Optimal Diagonal Preconditioning: Theory and Practice.
Z. Qu, W. Gao, O. Hinder, Y. Ye, Z. Zhou.
Operations Research, 2022.
Large Language Models for Optimization Modeling
- OptiMUS-0.3: Using Large Language Models to Model and Solve Optimization Problems at Scale.
A. AhmadiTeshnizi, W. Gao, H. Brunborg, S. Talaei, M. Udell.
Major revision at Management Science, 2025.
- OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models.
A. AhmadiTeshnizi, W. Gao, M. Udell.
ICML 2024.