(Mathematics Education, 7.11) All complex roots of the equation $A_0 X^n + A_1 X^{n-1} + \dots + A_n = 0$ are strictly less than 1 in absolute value. The sequence $\{v_k\}$, where $v_k = A_0 u_{k+n} + A_1 u_{k+n-1} + \dots + A_n u_k$, converges. Prove that the sequence $\{u_k\}$ also converges.
Proof.
- Consider the generating functions $U(x) = \sum_{k=0}^\infty u_k x^k$ and $V(x) = \sum_{k=0}^\infty v_k x^k$.
- Then, matching coefficients, $U(x)\,(A_0 x^{-n} + A_1 x^{-n+1} + \dots + A_n) = V(x)$ up to a correction coming from the initial terms $u_0, \dots, u_{n-1}$; clearing the negative powers, $U(x)\,(A_0 + A_1 x + \dots + A_n x^n) = x^n V(x) + P(x)$ for some polynomial $P$ of degree less than $n$.
- So $U(x) = \frac{x^n V(x) + P(x)}{A_0 + A_1 x + \dots + A_n x^n} = \frac{x^n V(x) + P(x)}{A_0 \prod_i (1-\alpha_i x)}$, where the $\alpha_i$ are exactly the roots of $A_0 X^n + \dots + A_n$, so $|\alpha_i| < 1$. The factor $\frac{1}{A_0 \prod_i (1-\alpha_i x)}$ is therefore analytic on a disk of radius greater than $1$, and its Taylor coefficients are absolutely summable; convolving them with the coefficients of $x^n V(x) + P(x)$, which converge to $\lim_k v_k$, shows that $\{u_k\}$ converges (the limit is computed below).
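To spell out the last step (the names $c_j$, $w_k$, and $v$ below are my own notation): write $\frac{1}{A_0 \prod_i (1-\alpha_i x)} = \sum_{j \ge 0} c_j x^j$ with $\sum_j |c_j| < \infty$, and let $w_k$ denote the $k$-th coefficient of $x^n V(x) + P(x)$, so that $w_k \to v := \lim_k v_k$. Then

$$u_k = \sum_{j=0}^{k} c_j w_{k-j} \xrightarrow[k \to \infty]{} v \sum_{j=0}^{\infty} c_j = \frac{v}{A_0 + A_1 + \dots + A_n},$$

which is finite since $A_0 + \dots + A_n = A_0 \prod_i (1 - \alpha_i) \neq 0$ when every $|\alpha_i| < 1$.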
Commentary. This approach suggests itself once you work out the special case of a linear polynomial: for $n = 1$ the hypothesis says $|A_1/A_0| < 1$, and $u_{k+1} = (v_k - A_1 u_k)/A_0$ is a contracting affine recursion whose convergence is elementary; from there one can advance to the general case step by step.
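For a quick numerical sanity check, here is a sketch (the quadratic $A$, the initial values, and the convergent input $v_k$ are all arbitrary choices of mine):

```python
import numpy as np

# Take A_0 X^2 + A_1 X + A_2 with both roots inside the unit disk,
# feed in a convergent sequence v_k, solve the recursion for u_k,
# and watch u_k converge to (lim v_k) / (A_0 + A_1 + A_2).
A = np.array([1.0, -0.5, 0.25])       # X^2 - 0.5 X + 0.25 has |roots| = 0.5
assert np.all(np.abs(np.roots(A)) < 1)

K = 500
v = 3.0 + 1.0 / (np.arange(K) + 1.0)  # v_k -> 3
u = np.zeros(K + 2)
u[0], u[1] = 10.0, -7.0               # arbitrary initial terms
for k in range(K):
    # v_k = A_0 u_{k+2} + A_1 u_{k+1} + A_2 u_k, solved for u_{k+2}
    u[k + 2] = (v[k] - A[1] * u[k + 1] - A[2] * u[k]) / A[0]

print(u[-1])          # ~ 4.0
print(3.0 / A.sum())  # predicted limit: 3 / 0.75 = 4.0
```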
In the finance world, a common saying is that “past performance does not guarantee future returns”. The statistical counterpart is the generalization gap: the typical degradation of predictive power when one moves from training (in-sample) data to test (out-of-sample) data. This is partly because parameters are optimized directly on the training data, so in-sample performance measures are necessarily optimistic.
In this note we use classical asymptotic theory from statistics to derive some of these generalization gaps, and show how the Akaike Information Criterion emerges as a special case when the model is correctly specified.
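As a toy illustration of that optimism, here is a sketch (the design, sample sizes, and helper names are all mine, not the note's): fit ordinary least squares on a training sample and compare in-sample and out-of-sample mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=50, p=10, sigma=1.0, n_test=10_000):
    # Correctly specified model: y = X @ beta + Gaussian noise.
    beta = rng.normal(size=p)
    X = rng.normal(size=(n, p))
    y = X @ beta + sigma * rng.normal(size=n)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS fit on training data
    train_mse = np.mean((y - X @ beta_hat) ** 2)
    # Fresh draws from the same distribution.
    X_new = rng.normal(size=(n_test, p))
    y_new = X_new @ beta + sigma * rng.normal(size=n_test)
    test_mse = np.mean((y_new - X_new @ beta_hat) ** 2)
    return train_mse, test_mse

results = np.array([one_trial() for _ in range(500)])
print("avg in-sample MSE:    ", results[:, 0].mean())  # about sigma^2 (1 - p/n) = 0.8
print("avg out-of-sample MSE:", results[:, 1].mean())  # about sigma^2 (1 + p/(n-p-1)) ~ 1.26
```

The in-sample average sits below the noise level $\sigma^2$ and the out-of-sample average above it; the difference between the two is exactly the kind of gap the note quantifies.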
Singular learning theory is a topic I’ve always wanted to learn, since it sits at the intersection of learning theory and algebraic geometry, two fields I’ve already spent some time studying. Its core interface with algebraic geometry runs through a single asymptotic integral, so I thought I would spend some time trying to understand it.
$\newcommand \E {\mathbb E}$
$\newcommand \eps {\epsilon}$
$\newcommand \tr {\mathrm{tr}}$
$\newcommand \Cov {\mathrm{Cov}}$
In this article, I outline a computation for linear models that I feel is really essential, but which was never asked of me during my education in machine learning.
Second post in the cubic curves series. We will show how cubic curves arise as the loci of triple cross ratios.
Third post: TBD.