Junzi Zhang 张峻梓
About me
Contact
Email: junziz [at] stanford (dot) edu
Personal Email: saslas (dot) c (dot) royale [at] gmail (dot) com
[ResearchGate]
[Google Scholar]
[github]
[LinkedIn]
Education
Research Interests
Optimization
Reinforcement learning, optimal control & game theory
Machine learning, statistics & applied probability
News
Our paper A General Framework for Learning Mean-Field Games was accepted by Mathematics of Operations Research. (April 5, 2022)
Our paper On the Global Convergence of Momentum-based Policy Gradient was accepted by AISTATS 2022. (January 18, 2022)
Our paper on fictitious discount algorithms for episodic reinforcement learning was accepted by AAAI 2022. (December 1, 2021)
I organized a session on Recent Advances in Data Efficient Reinforcement Learning with Policy Gradient Methods at the 2021 INFORMS Annual Meeting.
Two new papers posted on arXiv. In these papers, we derive the first set of global convergence results for stochastic policy gradient methods with momentum and entropy. (October 19, 2021)
New paper posted on arXiv. In this paper, we quantify the widely used "fictitious discount" empirical trick in finite-horizon episodic reinforcement learning from a rigorous theoretical perspective for the first time. (September 13, 2021)
I gave a talk at Seminars in Applied Mathematics, Yau Mathematical Sciences Center, Tsinghua University. (August 19, 2021)
I was recognized as an Outstanding Reviewer at ICLR 2021. (March 19, 2021)
I joined Amazon Advertising (Palo Alto) as an Applied Scientist II. (March 2021)
I gave a talk on “New Discoveries of Old Wisdoms for Faster Optimization” as an invited speaker at the NeurIPS 2020 Nairobi Meetup. [Video] (December 10, 2020)
Our paper Sample Efficient Reinforcement Learning with REINFORCE was accepted by AAAI 2021. (December 2, 2020)