I wanted to talk about a problem I encountered in 2020 during Putnam seminar, and my subsequent exploration of it. It's still unresolved, but I hope my thought process can help those interested in understanding how to propose problems.
(x-posted from SIMO XMen blog.)
Just penning down a note about how establishing that the gap is $O(\beta^2)$ is inherently related to subgaussianity.
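For context, here is the standard definition of subgaussianity (a textbook definition, included as a reminder rather than anything specific to this note): a centered random variable $X$ is $\sigma$-subgaussian if its moment generating function satisfies

```latex
\[
  \mathbb{E}\!\left[e^{\lambda X}\right] \le e^{\lambda^2 \sigma^2 / 2}
  \quad \text{for all } \lambda \in \mathbb{R}.
\]
```

The quadratic exponent in $\lambda$ is the hallmark of subgaussian behavior, and it is this quadratic dependence that one would hope mirrors the $O(\beta^2)$ scaling of the gap.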
In Spring 2021, I took Yanjun Han's excellent course "Information-theoretic Lower Bounds in Data Science (EE 378C)". For the entirety of the course, we looked at methods of proving statistical lower bounds, but being a pragmatist, I was always looking for ways this could inform how we construct statistical estimators. "Dualizing Le Cam's method" (Polyanskiy and Wu, 2021) promised exactly this, so I chose it for my reading project.
I'll try to illustrate the paper's main takeaway without the hefty proofs that accompany it.
In this article, we present a novel result on the mean-field gap bound in the case of PSD matrices. In particular, it is the only known bound with the "correct" dependence on the inverse temperature $\beta$: all existing results in the literature give an $O(\beta)$ gap, while the optimal construction (using the GOE matrix) is actually $O(\beta^2)$.