Yujia Jin
I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford, in
the Operations Research group. I am fortunate to be advised by Aaron Sidford.
I am broadly interested in optimization problems, sometimes at their intersection with machine learning
theory and graph applications. I enjoy understanding the theoretical foundations of algorithms that are
of practical importance.
Prior to coming to Stanford, I received my Bachelor's degree in Applied Math from Fudan
University in 2018, where I was fortunate to work with Prof. Zhongzhi Zhang. From 2016 to 2018, I also worked at the
Research Institute for Interdisciplinary Sciences (RIIS) at
SHUFE, where I was fortunate
to be advised by Prof. Dongdong Ge.
Email /
Google Scholar


The Complexity of Infinite-Horizon General-Sum Stochastic Games
Oct. 2022  Algorithm Seminar, Google Research
Oct. 2022  INFORMS Annual Meeting

The Complexity of Optimizing Single- and Multi-player Games
Oct. 2022  Young Researcher Workshop, Cornell ORIE

A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions
Apr. 2022  Learning and Games Program, Simons Institute

On the Sample Complexity for Average-reward Markov Decision Processes
Mar. 2022  RL Theory Seminar, online
Feb. 2022  OR Seminar, Stanford

Stochastic Methods for Minimax Games
Sept. 2021  Young Researcher Workshop, Cornell ORIE

Stochastic Methods for Matrix Games and Its Applications
Sept. 2021  ACO Student Seminar, Georgia Tech

Acceleration with a Ball Optimization Oracle
Dec. 2020  NeurIPS Oral presentation

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
Dec. 2019  NeurIPS Spotlight presentation
Oct. 2019  INFORMS Annual Meeting

VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation
[pdf]
with Alekh Agarwal and Tong Zhang
to appear in Conference on Learning Theory (COLT), 2023
"An effort towards better regret (in terms of horizon and function class complexity) under sparse rewards with general function approximation."

Moments, Random Walks, and Limits for Spectrum Approximation
[pdf]
with Chris Musco, Aaron Sidford and Apoorv Singh
to appear in Conference on Learning Theory (COLT), 2023
"How many samples are necessary to learn the spectrum of a large graph (when \(n \gg 1/\epsilon\))?"

Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling
[pdf]
with Adam Bouland, Yosheb Getachew, Aaron Sidford and Kevin Tian
to appear in International Conference on Machine Learning (ICML), 2023
"Even more efficient Gibbs sampler and multiplicative weight updates given quantum model access."

The Complexity of Infinite-Horizon General-Sum Stochastic Games
[pdf]
with Vidya Muthukumar and Aaron Sidford
Innovations in Theoretical Computer Science (ITCS), 2022
"Computing stationary solutions for multi-agent RL is hard: indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard."

Optimal and Adaptive Monteiro-Svaiter Acceleration
[pdf]
with Yair Carmon, Danielle Hausler, Arun Jambulapati and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2022
"An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter!"

Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching
[pdf]
with Arun Jambulapati, Aaron Sidford and Kevin Tian
International Colloquium on Automata, Languages, and Programming (ICALP), 2022
"A general continuous optimization framework for better dynamic (decremental) matching algorithms."

Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods
[pdf]
with Kevin Tian and Aaron Sidford
Conference on Learning Theory (COLT), 2022
"Faster algorithms for separable minimax, finite-sum, and separable finite-sum minimax."

RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
[pdf]
with Yair Carmon, Arun Jambulapati and Aaron Sidford
International Conference on Machine Learning (ICML), 2022
"A new Catalyst framework with relaxed error condition for faster finitesum and minimax solvers."

Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space
[pdf]
with Sepehr Assadi, Arun Jambulapati, Aaron Sidford and Kevin Tian
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."

Stochastic Bias-Reduced Gradient Methods
[pdf]
with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2021
"A low-bias, low-cost estimator of the subproblem solution suffices for acceleration!"

Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
[pdf] [talk] [poster]
with Yair Carmon, Arun Jambulapati and Aaron Sidford
Conference on Learning Theory (COLT), 2021
"Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)."

Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
[pdf] [talk] [poster]
with Aaron Sidford
International Conference on Machine Learning (ICML), 2021
"Sample complexity for average-reward MDPs? Nearly matching upper and lower bounds for constant error here!"

Acceleration with a Ball Optimization Oracle
[pdf] [talk] [poster]
with Yair Carmon, Arun Jambulapati, Qijia Jiang, Yin Tat Lee, Aaron Sidford and Kevin Tian
Neural Information Processing Systems (NeurIPS, Oral), 2020
"How many \(\epsilon\)-length segments do you need to look at to find an \(\epsilon\)-optimal minimizer of a convex function on a line?"

Coordinate Methods for Matrix Games
[pdf] [talk]
with Yair Carmon, Kevin Tian and Aaron Sidford
Symposium on Foundations of Computer Science (FOCS), 2020
"About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data."

Efficiently Solving MDPs with Stochastic Mirror Descent
[pdf] [talk]
with Aaron Sidford
International Conference on Machine Learning (ICML), 2020
"Team convex optimization for solving discounted and average-reward MDPs!"

Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
[pdf] [slides]
with Aaron Sidford
Neural Information Processing Systems (NeurIPS, Spotlight), 2019
"A special case where variance reduction can be applied to nonconvex optimization (monotone operators)."

Variance Reduction for Matrix Games
[pdf] [poster]
with Yair Carmon, Aaron Sidford and Kevin Tian
Neural Information Processing Systems (NeurIPS, Oral), 2019
"A general variance reduction framework for solving saddle-point problems, and improved runtimes for matrix games."

A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions
with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford
BayLearn, 2021
"We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."

On the Sample Complexity of Average-reward MDPs
[pdf] [poster]
with Aaron Sidford
ICML Workshop on Reinforcement Learning Theory, 2021
"A collection of new upper and lower sample complexity bounds for solving average-reward MDPs."

Variance Reduction for Matrix Games
[pdf] [poster]
with Yair Carmon, Aaron Sidford and Kevin Tian
NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019
"A collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains."

Variance Reduction for Matrix Games
[pdf] [poster]
with Yair Carmon, Aaron Sidford and Kevin Tian
BayLearn, 2019
"A short version of the conference publication under the same title."

