A short note about how energy and entropy appear together.
$\providecommand{\E}{\mathbb E} \providecommand{\Unif}{\mathrm {Unif}} \providecommand{\GAP}{\mathsf{GAP}} \providecommand{\norm}[1]{\left| #1 \right|}$
We prove Gibbs’ Inequality, which gives a lower bound on the log-partition function in terms of an optimization problem over distributions. This naturally leads us to mean-field approximations.
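As a quick preview (a sketch in the notation $p(\sigma) \propto e^{\beta f(\sigma)}$ used below; the exact normalization in the note may differ), the inequality in question can be stated as the Gibbs variational principle:

$$\log Z = \log \sum_{\sigma} e^{\beta f(\sigma)} \;\ge\; \beta\, \E_{\sigma \sim q}[f(\sigma)] + H(q) \qquad \text{for every distribution } q,$$

where $H(q) = -\sum_{\sigma} q(\sigma) \log q(\sigma)$ is the Shannon entropy. Equality holds when $q$ is the Gibbs distribution itself, and restricting $q$ to product distributions yields the mean-field lower bound.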
Why is it that $p(\sigma) \propto e^{\beta f(\sigma)}$ for the Gibbs distribution? Let’s look at two possible reasons.
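One classical answer (it may or may not be one of the two reasons the note has in mind) is the maximum-entropy characterization: among all distributions with a prescribed expected value of $f$, the entropy maximizer has exactly this exponential form,

$$\max_{q} \; H(q) \quad \text{subject to} \quad \E_{\sigma \sim q}[f(\sigma)] = \mu \;\;\Longrightarrow\;\; q(\sigma) \propto e^{\beta f(\sigma)},$$

where $\beta$ appears as the Lagrange multiplier enforcing the constraint.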
We will give the basic setup for statistical mechanics, and define the log-partition function, the Gibbs distribution, and the Ising model.
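As a concrete illustration (not code from the note; the chain length, coupling $J$, and inverse temperature $\beta$ below are arbitrary example choices), here is a brute-force computation of the partition function, the log-partition function, and the Gibbs distribution for a tiny 1D Ising chain, written in the physics convention $p(\sigma) \propto e^{-\beta H(\sigma)}$, i.e. $f = -H$:

```python
import itertools
import math

# Tiny 1D Ising chain: n spins in {-1, +1}, nearest-neighbor coupling J,
# inverse temperature beta, energy H(sigma) = -J * sum_i sigma_i * sigma_{i+1}.
# (Example parameters only; not taken from the note.)
n, J, beta = 8, 1.0, 0.5

def energy(sigma):
    return -J * sum(sigma[i] * sigma[i + 1] for i in range(len(sigma) - 1))

configs = list(itertools.product([-1, 1], repeat=n))       # all 2^n configurations
weights = [math.exp(-beta * energy(s)) for s in configs]   # Boltzmann weights e^{-beta H}
Z = sum(weights)                                           # partition function
gibbs = [w / Z for w in weights]                           # Gibbs distribution

print(f"log Z = {math.log(Z):.4f}")
i_star = max(range(len(gibbs)), key=lambda i: gibbs[i])
print(f"most probable configuration: {configs[i_star]} with probability {gibbs[i_star]:.4f}")
```

Brute-force enumeration is only feasible for a handful of spins, which is exactly where mean-field approximations of the log-partition function become attractive.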
I wrote this up because Stephen Boyd challenged me to do it during Convex Optimization. His point was that this is a key fact in high-dimensional probability, yet somehow there isn’t a simple proof of it.
A nonnegative function $f$ is log-concave if $f(\lambda x + (1-\lambda)y)\ge f(x)^{\lambda} f(y)^{1-\lambda}$ for all $x,y$ and $\lambda\in [0,1]$.
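For example (a quick sanity check on the definition, not part of the original note): taking logarithms with the convention $\log 0 = -\infty$, $f$ is log-concave exactly when $\log f$ is concave, so the Gaussian kernel is log-concave since

$$\log e^{-x^2/2} = -\tfrac{1}{2} x^2$$

is concave.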
We would like to show:
Problem. If $f(x,y)$ is continuous, measurable, and log-concave (where $f:\mathbb R^m \times \mathbb R^k \to \mathbb R$), then $g(x) = \int f(x,y)\,dy$ is log-concave.
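A quick numerical sanity check (illustrative only, not a proof; the particular $f$ below is an example choice, not one from the note): marginalize the log-concave function $f(x,y) = e^{-(x^2 + xy + y^2)}$ over $y$ and verify that $\log g$ has non-positive second differences on a grid, i.e. looks concave.

```python
import numpy as np

# f(x, y) = exp(-(x^2 + x*y + y^2)) is log-concave: the quadratic form has
# matrix [[1, 1/2], [1/2, 1]], which is positive definite. (Example only.)
def f(x, y):
    return np.exp(-(x**2 + x * y + y**2))

xs = np.linspace(-3.0, 3.0, 121)
ys = np.linspace(-20.0, 20.0, 4001)
dy = ys[1] - ys[0]

# g(x) = \int f(x, y) dy, approximated by a Riemann sum for each grid point x
g = np.array([f(x, ys).sum() * dy for x in xs])
log_g = np.log(g)

# Discrete second differences of log g; concavity means they should be <= 0
second_diff = log_g[:-2] - 2 * log_g[1:-1] + log_g[2:]
print("max second difference of log g:", second_diff.max())
```

Here the check succeeds because $g$ happens to have the closed form $g(x) = \sqrt{\pi}\, e^{-3x^2/4}$, which is visibly log-concave; the point of the problem is that the conclusion holds for every log-concave $f$.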