Yujia Jin

I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford in the Operations Research group. I am fortunate to be advised by Aaron Sidford.

I am broadly interested in optimization problems, sometimes at their intersection with machine learning theory and graph applications. I enjoy understanding the theoretical foundations of algorithms of practical importance.

Prior to coming to Stanford, I received my Bachelor's degree in Applied Math from Fudan University in 2018, where I was fortunate to work with Prof. Zhongzhi Zhang. From 2016 to 2018, I also worked at the Research Institute for Interdisciplinary Sciences (RIIS) at SHUFE, where I was fortunate to be advised by Prof. Dongdong Ge.

Email / Google Scholar

Selected Talks

The Complexity of Infinite-Horizon General-Sum Stochastic Games

  • Oct. 2022 - Algorithm Seminar, Google Research
  • Oct. 2022 - INFORMS Annual Meeting

The Complexity of Optimizing Single and Multi-player Games

  • Oct. 2022 - Young Researcher Workshop, Cornell ORIE

A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions

  • Apr. 2022 - Learning and Games Program, Simons Institute

On the Sample Complexity for Average-reward Markov Decision Processes

  • Mar. 2022 - RL Theory Seminar, online
  • Feb. 2022 - OR Seminar, Stanford

more talks ...

Publications

    VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation [pdf]
    with Alekh Agarwal and Tong Zhang
    to appear in Conference on Learning Theory (COLT), 2023

    "An effort towards better regret (in terms of horizon and function class complexity) under sparse rewards with general function approximation."

    Moments, Random Walks, and Limits for Spectrum Approximation [pdf]
    with Chris Musco, Aaron Sidford and Apoorv Singh
    to appear in Conference on Learning Theory (COLT), 2023

    "How many samples are necessary to learn the spectrum of a large graph (when n>>1/eps)?"

    Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling [pdf]
    with Adam Bouland, Yosheb Getachew, Aaron Sidford and Kevin Tian
    to appear in International Conference on Machine Learning (ICML), 2023

    "Even more efficient Gibbs sampler and multiplicative weight updates given quantum model access."

    The Complexity of Infinite-Horizon General-Sum Stochastic Games [pdf]
    with Vidya Muthukumar and Aaron Sidford
    Innovations in Theoretical Computer Science (ITCS), 2023

    "Computing stationary solution for multi-agent RL is hard: Indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard."

    Optimal and Adaptive Monteiro-Svaiter Acceleration [pdf]
    with Yair Carmon, Danielle Hausler, Arun Jambulapati and Aaron Sidford
    Neural Information Processing Systems (NeurIPS), 2022

    "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know smoothness parameter!"

    Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching [pdf]
    with Arun Jambulapati, Aaron Sidford and Kevin Tian
    International Colloquium on Automata, Languages, and Programming (ICALP), 2022

    "A general continuous optimization framework for better dynamic (decremental) matching algorithms."

    Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods [pdf]
    with Kevin Tian and Aaron Sidford
    Conference on Learning Theory (COLT), 2022

    "Faster algorithms for separable minimax, finite-sum and separable finite-sum minimax."

    RECAPP: Crafting a More Efficient Catalyst for Convex Optimization [pdf]
    with Yair Carmon, Arun Jambulapati and Aaron Sidford
    International Conference on Machine Learning (ICML), 2022

    "A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."

    Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space [pdf]
    with Sepehr Assadi, Arun Jambulapati, Aaron Sidford and Kevin Tian
    ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022

    "Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."

    Stochastic Bias-Reduced Gradient Methods [pdf]
    with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford
    Neural Information Processing Systems (NeurIPS), 2021

    "A low-bias low-cost estimator of subproblem solution suffices for acceleration! "

    Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss [pdf] [talk] [poster]
    with Yair Carmon, Arun Jambulapati and Aaron Sidford
    Conference on Learning Theory (COLT), 2021

    "Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)."

    Towards Tight Bounds on the Sample Complexity of Average-reward MDPs [pdf] [talk] [poster]
    with Aaron Sidford
    International Conference on Machine Learning (ICML), 2021

    "Sample complexity for average-reward MDPs? A nearly matching upper and lower bound for constant error here!"

    Acceleration with a Ball Optimization Oracle [pdf] [talk] [poster]
    with Yair Carmon, Arun Jambulapati, Qijia Jiang, Yin Tat Lee, Aaron Sidford and Kevin Tian
    Neural Information Processing Systems (NeurIPS, Oral), 2020

    "How many \(\epsilon\)-length segments do you need to look at for finding an \(\epsilon\)-optimal minimizer of convex function on a line?"

    Coordinate Methods for Matrix Games [pdf] [talk]
    with Yair Carmon, Kevin Tian and Aaron Sidford
    Symposium on Foundations of Computer Science (FOCS), 2020

    "About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data."

    Efficiently Solving MDPs with Stochastic Mirror Descent [pdf] [talk]
    with Aaron Sidford
    International Conference on Machine Learning (ICML), 2020

    "Team-convex-optimization for solving discounted and average-reward MDPs!"

    Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG [pdf] [slides]
    with Aaron Sidford
    Neural Information Processing Systems (NeurIPS, Spotlight), 2019

    "A special case where variance reduction can be used to nonconvex optimization (monotone operators)."

    Variance Reduction for Matrix Games [pdf] [poster]
    with Yair Carmon, Aaron Sidford and Kevin Tian
    Neural Information Processing Systems (NeurIPS, Oral), 2019

    "General variance reduction framework for solving saddle-point problems & Improved runtimes for matrix games."

Workshop Publications

    A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions
    with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford
    BayLearn, 2021

    "We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."

    On the Sample Complexity of Average-reward MDPs [pdf] [poster]
    with Aaron Sidford
    ICML Workshop on Reinforcement Learning Theory, 2021

    "Collection of new upper and lower sample complexity bounds for solving average-reward MDPs."

    Variance Reduction for Matrix Games [pdf] [poster]
    with Yair Carmon, Aaron Sidford and Kevin Tian
    NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019

    "Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains."

    Variance Reduction for Matrix Games [pdf] [poster]
    with Yair Carmon, Aaron Sidford and Kevin Tian
    BayLearn, 2019

    "A short version of the conference publication under the same title."