Minimum Rates of Approximate Sufficient Statistics

Vincent Y.F. Tan
Assistant Professor, National University of Singapore
Date: Apr. 17th, 2017
Time: 4:15 -- 5:30 pm
Venue: Packard 202


Given a sufficient statistic for a parametric family of distributions, one can estimate the parameter without access to the data itself. However, the memory size for storing the sufficient statistic may be prohibitive. Indeed, for $n$ independent samples drawn from a $k$-nomial distribution with $d=k-1$ degrees of freedom, the length of the code for the sufficient statistic scales as $d\log n+O(1)$. In many applications, however, we may not have a useful notion of a sufficient statistic, and we may not need to reconstruct the generating distribution exactly. By adopting an information-theoretic approach in which we allow a small error in estimating the generating distribution, we construct various notions of {\em approximate sufficient statistics} and show that the code length can be reduced to $\frac{d}{2}\log n + O(1)$. We consider errors measured according to the relative entropy and variational distance criteria. For the code constructions, we leverage Rissanen's minimum description length principle, which yields a non-vanishing error measured using the relative entropy. For the converse parts, we use Clarke and Barron's asymptotic expansion for the relative entropy between a parametrized distribution and the corresponding mixture distribution. The limitation of this method is that it yields only a weak converse for the variational distance. We develop new techniques to achieve vanishing errors, and we also prove strong converses for all our statements. The latter means that even if the code is allowed to have a non-vanishing error, its length must still be at least $\frac{d}{2}\log n$.
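For intuition, here is a back-of-the-envelope sketch of where the two rates come from (an illustrative calculation, not the talk's formal argument): for $n$ i.i.d. samples from a $k$-nomial, the empirical counts $(n_1,\dots,n_k)$ with $\sum_i n_i = n$ form a sufficient statistic with $d=k-1$ free coordinates, each taking one of $n+1$ values, so storing it exactly costs about $d\log n$ bits. If one only needs the parameter to within the statistical estimation accuracy $O(1/\sqrt{n})$ per coordinate, a quantization grid with roughly $\sqrt{n}$ cells per dimension suffices, in the spirit of Rissanen's MDL:
\[
\underbrace{d\log n}_{\text{exact counts}}
\qquad\text{vs.}\qquad
\underbrace{d\log\sqrt{n}=\tfrac{d}{2}\log n}_{\text{grid of width } O(1/\sqrt{n}) \text{ per coordinate}}.
\]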

This is joint work with Prof. Masahito Hayashi (Graduate School of Mathematics, Nagoya University, and Center for Quantum Technologies, NUS).


Vincent Tan is an assistant professor in the Department of Electrical and Computer Engineering and the Department of Mathematics at NUS.