Aaron Sidford
About Me:
I am an assistant professor of Management Science and Engineering and, by courtesy, of Computer Science at Stanford University. I received my PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner.
Research Interests:
My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
Email: sidford@stanford.edu
Conference Publications

Faster Energy Maximization for Faster Maximum Flow
With Yang P. Liu.
Accepted to Symposium on Theory of Computing (STOC 2020).
(arXiv)

Solving Tall Dense Linear Programs in Nearly Linear Time
With Jan van den Brand, Yin Tat Lee, and Zhao Song.
Accepted to Symposium on Theory of Computing (STOC 2020).
(arXiv)

Constant Girth Approximation for Directed Graphs in Subquadratic Time
With Shiri Chechik, Yang P. Liu, and Omer Rotem.
Accepted to Symposium on Theory of Computing (STOC 2020).
(arXiv)

Leverage Score Sampling for Faster Accelerated Regression and ERM
With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli.
In International Conference on Algorithmic Learning Theory (ALT 2020).
(arXiv)

Near-optimal Approximate Discrete and Continuous Submodular Function Minimization
With Brian Axelrod and Yang P. Liu.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)

Fast and Space Efficient Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)

Variance Reduction for Matrix Games
With Yair Carmon, Yujia Jin, and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Oral presentation
(arXiv)

Complexity of Highly Parallel Non-Smooth Convex Optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
With Yujia Jin.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)

A Direct Õ(1/ϵ) Iteration Parallel Algorithm for Optimal Transport
With Arun Jambulapati and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
(arXiv)

A General Framework for Efficient Symmetric Property Estimation
With Moses Charikar and Kirankumar Shiragur.
In Conference on Neural Information Processing Systems (NeurIPS 2019).

Parallel Reachability in Almost Linear Work and Square Root Depth
With Arun Jambulapati and Yang P. Liu.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)

Faster Matroid Intersection
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)

Deterministic Approximation of Random Walks in Small Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In International Workshop on Randomization and Computation (RANDOM 2019).
(arXiv)

A Rank-1 Sketch for Matrix Multiplicative Weights
With Yair Carmon, John C. Duchi, and Kevin Tian.
In Conference on Learning Theory (COLT 2019).
(arXiv)

Near-optimal method for highly smooth convex optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Learning Theory (COLT 2019).
(arXiv)

Efficient profile maximum likelihood for universal symmetric property estimation
With Moses Charikar and Kirankumar Shiragur.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)

Memory-sample tradeoffs for linear regression with small error
With Vatsal Sharan and Gregory Valiant.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)

Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications
With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi.
In Symposium on Discrete Algorithms (SODA 2019).
(arXiv)

Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression
With Neha Gupta.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)

Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model
With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)

Coordinate Methods for Accelerating ℓ∞ Regression and Faster Approximate Maximum Flow
With Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2018).
(arXiv)

Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao.
In Symposium on Foundations of Computer Science (FOCS 2018).
(arXiv)

Efficient Convex Optimization with Membership Oracles
With Yin Tat Lee and Santosh S. Vempala.
In Conference on Learning Theory (COLT 2018).
(arXiv)

Accelerating Stochastic Gradient Descent for Least Squares Regression
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
In Conference on Learning Theory (COLT 2018).
(arXiv)

Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners
With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
With Mengdi Wang, Xian Wu, and Yinyu Ye.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Efficient Õ(n/ϵ) Spectral Sketches for the Laplacian and its Pseudoinverse
With Arun Jambulapati.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Stability of the Lanczos Method for Matrix Function Approximation
With Cameron Musco and Christopher Musco.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
With Cameron Musco, Praneeth Netrapalli, Shashanka Ubaru, and David P. Woodruff.
In Innovations in Theoretical Computer Science (ITCS 2018).
(arXiv)

Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In 58th Annual Symposium on Foundations of Computer Science (FOCS 2017).
(arXiv)

"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
With Yair Carmon, John C. Duchi, and Oliver Hinder.
In 34th International Conference on Machine Learning (ICML 2017).
(arXiv)

Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu.
In 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017).
(arXiv)

Subquadratic Submodular Function Minimization
With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong.
In 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017).
(arXiv)

Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu.
In 57th Annual Symposium on Foundations of Computer Science (FOCS 2016).
(arXiv)

Geometric Median in Nearly Linear Time
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki.
In 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016).
(arXiv)

Routing Under Balance
With Alina Ene, Gary L. Miller, and Jakub Pachocki.
In 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016).
(arXiv)

Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm
With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In 29th Conference on Learning Theory (COLT 2016).
(arXiv)

Principal Component Projection Without Principal Component Analysis
With Roy Frostig, Cameron Musco, and Christopher Musco.
In 33rd International Conference on Machine Learning (ICML 2016).
(arXiv)

Faster Eigenvector Computation via Shift-and-Invert Preconditioning
With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.
In 33rd International Conference on Machine Learning (ICML 2016).
(arXiv)

Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis
With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In 33rd International Conference on Machine Learning (ICML 2016).
(arXiv)

A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization
With Yin Tat Lee and Sam Chiu-wai Wong.
In 56th Annual Symposium on Foundations of Computer Science (FOCS 2015).
Machtey Award for Best Student Paper.
(arXiv)

Efficient Inverse Maintenance and Faster Algorithms for Linear Programming
With Yin Tat Lee.
In 56th Annual Symposium on Foundations of Computer Science (FOCS 2015).
(arXiv)

Competing with the Empirical Risk Minimizer in a Single Pass
With Roy Frostig, Rong Ge, and Sham Kakade.
In 28th Conference on Learning Theory (COLT 2015).
(arXiv)

Unregularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
With Roy Frostig, Rong Ge, and Sham Kakade.
In 32nd International Conference on Machine Learning (ICML 2015).
(arXiv)

Uniform Sampling for Matrix Approximation
With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng.
In 6th Innovations in Theoretical Computer Science (ITCS 2015).
(arXiv)

Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow
With Yin Tat Lee.
In 55th Annual Symposium on Foundations of Computer Science (FOCS 2014).
Best Paper Award and Machtey Award for Best Student Paper.
(arXiv Part I) and (arXiv Part II)

Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
In 55th Annual Symposium on Foundations of Computer Science (FOCS 2014).
(arXiv)

An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations
With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia.
In 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2014).
Best Paper Award
(arXiv)

Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
With Yin Tat Lee.
In 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013).
(arXiv)

A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time
With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen-Zhu.
In 45th Annual ACM Symposium on the Theory of Computing (STOC 2013).
(arXiv)
Journal Publications

Lower bounds for finding stationary points II: first-order methods
With Yair Carmon, John C. Duchi, and Oliver Hinder.
Mathematical Programming, 2019.
(arXiv)

Lower bounds for finding stationary points I
With Yair Carmon, John C. Duchi, and Oliver Hinder.
Mathematical Programming, 2019.
(arXiv)

Accelerated Methods for Non-Convex Optimization
With Yair Carmon, John C. Duchi, and Oliver Hinder.
SIAM Journal on Optimization, 28(2), pp. 1751-1772, 2018.
(arXiv)

Parallelizing Stochastic Gradient Descent for Least Squares Regression: Minibatching, Averaging, and Model Misspecification
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Journal of Machine Learning Research, 18, pp. 223:1-223:42, 2017.
(arXiv)

Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
SIAM Journal on Computing, 46(1), pp. 456-477, 2017.
(arXiv)
Students
I have the great privilege and good fortune to advise the following graduate students.
Courses