Aaron Sidford
About Me:
I am an assistant professor of Management Science and Engineering and, by courtesy, of Computer Science at Stanford University. I received my PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology where I was advised by Professor Jonathan Kelner.
Research Interests:
My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
Email: sidford@stanford.edu
Conference Publications

The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood
With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
In Conference on Learning Theory (COLT 2021).
(arXiv)

Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
With Yair Carmon, Arun Jambulapati, and Yujia Jin.
In Conference on Learning Theory (COLT 2021).
(arXiv)

Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
With Yujia Jin.
In International Conference on Machine Learning (ICML 2021).
(arXiv)

Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances
With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang.
In Symposium on Theory of Computing (STOC 2021).
(arXiv)

Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers
With Arun Jambulapati.
In Symposium on Discrete Algorithms (SODA 2021).
(arXiv)

Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
With Michael B. Cohen and Kevin Tian.
In Innovations in Theoretical Computer Science (ITCS 2021).
(arXiv)

Acceleration with a Ball Optimization Oracle
With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2020).
Oral presentation
(arXiv)

Instance Based Approximations to Profile Maximum Likelihood
With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
In Conference on Neural Information Processing Systems (NeurIPS 2020).
(arXiv)

Large-Scale Methods for Distributionally Robust Optimization
With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
In Conference on Neural Information Processing Systems (NeurIPS 2020).
(arXiv)

High-precision Estimation of Random Walks in Small Space
With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan.
In Symposium on Foundations of Computer Science (FOCS 2020).
(arXiv)

Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs
With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang.
In Symposium on Foundations of Computer Science (FOCS 2020).
(arXiv)

Coordinate Methods for Matrix Games
With Yair Carmon, Yujia Jin, and Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2020).
(arXiv)

Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time
With Tarun Kathuria and Yang P. Liu.
In Symposium on Foundations of Computer Science (FOCS 2020).
(Faster Divergence Maximization for Faster Maximum Flow (arXiv before merge))

Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity
With Mengdi Wang, Lin Yang, and Yinyu Ye.
In International Conference on Artificial Intelligence and Statistics (AISTATS 2020).
(arXiv)

Efficiently Solving MDPs with Stochastic Mirror Descent
With Yujia Jin.
In International Conference on Machine Learning (ICML 2020).
(arXiv)

Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
With Oliver Hinder and Nimit Sharad Sohoni.
In Conference on Learning Theory (COLT 2020).
(arXiv)

Solving Tall Dense Linear Programs in Nearly Linear Time
With Jan van den Brand, Yin Tat Lee, and Zhao Song.
In Symposium on Theory of Computing (STOC 2020).
(arXiv)

Constant Girth Approximation for Directed Graphs in Subquadratic Time
With Shiri Chechik, Yang P. Liu, and Omer Rotem.
In Symposium on Theory of Computing (STOC 2020).
(arXiv)

Leverage Score Sampling for Faster Accelerated Regression and ERM
With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli.
In International Conference on Algorithmic Learning Theory (ALT 2020).
(arXiv)

Near-optimal Approximate Discrete and Continuous Submodular Function Minimization
With Brian Axelrod and Yang P. Liu.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)

Fast and Space Efficient Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)

Variance Reduction for Matrix Games
With Yair Carmon, Yujia Jin, and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Oral presentation
(arXiv)

Complexity of Highly Parallel Non-Smooth Convex Optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
With Yujia Jin.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)

A Direct Õ(1/ϵ) Iteration Parallel Algorithm for Optimal Transport
With Arun Jambulapati and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
(arXiv)

A General Framework for Efficient Symmetric Property Estimation
With Moses Charikar and Kirankumar Shiragur.
In Conference on Neural Information Processing Systems (NeurIPS 2019).

Parallel Reachability in Almost Linear Work and Square Root Depth
With Arun Jambulapati and Yang P. Liu.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)

Faster Matroid Intersection
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)

Deterministic Approximation of Random Walks in Small Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In International Workshop on Randomization and Computation (RANDOM 2019).
(arXiv)

A Rank-1 Sketch for Matrix Multiplicative Weights
With Yair Carmon, John C. Duchi, and Kevin Tian.
In Conference on Learning Theory (COLT 2019).
(arXiv)

Near-optimal method for highly smooth convex optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Learning Theory (COLT 2019).
(arXiv)

Efficient profile maximum likelihood for universal symmetric property estimation
With Moses Charikar and Kirankumar Shiragur.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)

Memory-sample tradeoffs for linear regression with small error
With Vatsal Sharan and Gregory Valiant.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)

Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications
With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi.
In Symposium on Discrete Algorithms (SODA 2019).
(arXiv)

Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression
With Neha Gupta.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)

Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model
With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)

Coordinate Methods for Accelerating ℓ∞ Regression and Faster Approximate Maximum Flow
With Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2018).
(arXiv)

Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao.
In Symposium on Foundations of Computer Science (FOCS 2018).
(arXiv)

Efficient Convex Optimization with Membership Oracles
With Yin Tat Lee and Santosh S. Vempala.
In Conference on Learning Theory (COLT 2018).
(arXiv)

Accelerating Stochastic Gradient Descent for Least Squares Regression
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
In Conference on Learning Theory (COLT 2018).
(arXiv)

Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners
With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
With Mengdi Wang, Xian Wu, and Yinyu Ye.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Efficient Õ(n/ϵ) Spectral Sketches for the Laplacian and its Pseudoinverse
With Arun Jambulapati.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Stability of the Lanczos Method for Matrix Function Approximation
With Cameron Musco and Christopher Musco.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
With Cameron Musco, Praneeth Netrapalli, Shashanka Ubaru, and David P. Woodruff.
In Innovations in Theoretical Computer Science (ITCS 2018).
(arXiv)

Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In Symposium on Foundations of Computer Science (FOCS 2017).
(arXiv)

"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
With Yair Carmon, John C. Duchi, and Oliver Hinder.
In International Conference on Machine Learning (ICML 2017).
(arXiv)

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu.
In Symposium on Theory of Computing (STOC 2017).
(arXiv)

Subquadratic Submodular Function Minimization
With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong.
In Symposium on Theory of Computing (STOC 2017).
(arXiv)

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu.
In Symposium on Foundations of Computer Science (FOCS 2016).
(arXiv)

Geometric Median in Nearly Linear Time
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki.
In Symposium on Theory of Computing (STOC 2016).
(arXiv)

Routing Under Balance
With Alina Ene, Gary L. Miller, and Jakub Pachocki.
In Symposium on Theory of Computing (STOC 2016).
(arXiv)

Streaming PCA: Matching Matrix Bernstein and NearOptimal Finite Sample Guarantees for Oja's Algorithm
With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In Conference on Learning Theory (COLT 2016).
(arXiv)

Principal Component Projection Without Principal Component Analysis
With Roy Frostig, Cameron Musco, and Christopher Musco.
In International Conference on Machine Learning (ICML 2016).
(arXiv)

Faster Eigenvector Computation via Shift-and-Invert Preconditioning
With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.
In International Conference on Machine Learning (ICML 2016).
(arXiv)

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis
With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In International Conference on Machine Learning (ICML 2016).
(arXiv)

A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization
With Yin Tat Lee and Sam Chiu-wai Wong.
In Symposium on Foundations of Computer Science (FOCS 2015).
Machtey Award for Best Student Paper.
(arXiv)

Efficient Inverse Maintenance and Faster Algorithms for Linear Programming
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2015).
(arXiv)

Competing with the Empirical Risk Minimizer in a Single Pass
With Roy Frostig, Rong Ge, and Sham Kakade.
In Conference on Learning Theory (COLT 2015).
(arXiv)

Unregularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
With Roy Frostig, Rong Ge, and Sham Kakade.
In International Conference on Machine Learning (ICML 2015).
(arXiv)

Uniform Sampling for Matrix Approximation
With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng.
In Innovations in Theoretical Computer Science (ITCS 2015).
(arXiv)

Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2014).
Best Paper Award and Machtey Award for Best Student Paper.
(arXiv Part I) and (arXiv Part II)

Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
In Symposium on Foundations of Computer Science (FOCS 2014).
(arXiv)

An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations
With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia.
In Symposium on Discrete Algorithms (SODA 2014).
Best Paper Award
(arXiv)

Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2013).
(arXiv)

A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time
With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu.
In Symposium on Theory of Computing (STOC 2013).
(arXiv)
Journal Publications

Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
SIAM Journal on Computing, 2021.
(arXiv)

Deterministic Approximation of Random Walks in Small Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
Theory of Computing, 2021.
(arXiv)

Lower bounds for finding stationary points II: firstorder methods
With Yair Carmon, John C. Duchi, and Oliver Hinder.
Mathematical Programming, 2019.
(arXiv)

Lower bounds for finding stationary points I
With Yair Carmon, John C. Duchi, and Oliver Hinder.
Mathematical Programming, 2019.
(arXiv)

Accelerated Methods for NonConvex Optimization
With Yair Carmon, John C. Duchi, and Oliver Hinder.
SIAM Journal on Optimization, 28(2), pp. 1751–1772, 2018.
(arXiv)

Parallelizing Stochastic Gradient Descent for Least Squares Regression: Minibatching, Averaging, and Model Misspecification
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Journal of Machine Learning Research, 18, pp. 223:1–223:42, 2017.
(arXiv)

Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
SIAM Journal on Computing, 46(1), pp. 456–477, 2017.
(arXiv)
Students
I have the great privilege and good fortune to advise the following graduate students:
I have also had the great privilege and good fortune of advising the following students who have now graduated:
Courses