David Kewei Lin

Welcome to my personal site!

Some problems I've written up:

(Mathematics Education, 7.11) All complex roots of the equation $A_0 X^n + A_1 X^{n-1} + \dots + A_n = 0$ are strictly less than 1 in absolute value. The sequence $\{v_k = A_0 u_{k+n} + A_1 u_{k+n-1} + \dots + A_n u_k\}$ converges. Prove that the sequence $\{u_k\}$ also converges.

Proof.

Commentary. This approach becomes straightforward if you first consider the special case where the polynomial is linear; one can then advance step by step.
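Here is a quick numerical illustration (not a proof — the polynomial, input sequence, and starting values below are just sample choices of mine): with both roots inside the unit disk and a convergent input $v_k$, the sequence $u_k$ recovered from the recurrence converges to $\lim v_k / (A_0 + \dots + A_n)$.

```python
# Numerical illustration only: A_0 x^2 + A_1 x + A_2 with roots 0.2, 0.4.
A = [1.0, -0.6, 0.08]        # x^2 - 0.6x + 0.08 = (x - 0.2)(x - 0.4)
n = len(A) - 1
v = [5.0 + (-0.9) ** k for k in range(200)]    # a convergent input, v_k -> 5

u = [17.0, -3.0]             # arbitrary starting terms u_0, u_1
for k in range(len(v) - n):
    # unfold v_k = A_0 u_{k+n} + A_1 u_{k+n-1} + ... + A_n u_k for u_{k+n}
    s = sum(A[j] * u[k + n - j] for j in range(1, n + 1))
    u.append((v[k] - s) / A[0])

limit = 5.0 / sum(A)         # expected limit: lim v_k divided by A_0 + ... + A_n
assert abs(u[-1] - limit) < 1e-6
```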


(NSMath Problem Column 54/3) For a positive integer $n$, let $f(n)$ denote the largest positive integer $m$ that satisfies the following condition: the set $\{1,2,...,n\}$ can be (disjointly) partitioned as $X \sqcup Y$ such that

$$\sum_{x\in X} x^k = \sum_{y\in Y} y^k\qquad \text{for any } k\in \{1,2,...,m\}.$$

Prove that $f(n)= O((\ln n)^2)$.

Proof.

Comment. This argument is from Boyd (1997). Some related facts:

Themes. There are two ideas:

Background. The famous Prouhet–Tarry–Escott problem asks for two disjoint multisets $X,Y$ of $n$ integers each such that $$ \sum_{x\in X} x^k = \sum_{y\in Y} y^k\qquad \text{for any } k\in \{1,2,...,m\}. $$
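For the lower-bound direction, Prouhet's classical construction splits $\{1,\dots,2^k\}$ by the Thue–Morse sequence (parity of the binary digit sum of $i-1$) and equalizes power sums up to exponent $k-1$, so $f(2^k) \ge k-1$. A quick check:

```python
# Prouhet's construction: split {1, ..., 2^k} by the Thue-Morse parity
# of i-1; the two halves have equal e-th power sums for e = 1, ..., k-1.
k = 5
n = 2 ** k
X = [i for i in range(1, n + 1) if bin(i - 1).count("1") % 2 == 0]
Y = [i for i in range(1, n + 1) if bin(i - 1).count("1") % 2 == 1]

for e in range(1, k):  # exponents 1, ..., k-1
    assert sum(x ** e for x in X) == sum(y ** e for y in Y)
```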

(Mathematics Education, 9.4) Given a sequence $\{a_k\}_{k\ge 1}$ such that $a_1 = 1$ and $a_k = a_{k-1} + a_{\lfloor k/2 \rfloor}$ for $k > 1$, prove that no term is a multiple of 4.

Proof. The sequence satisfies a handful of useful recursive relations. First note that by definition, $a_{2k-1}, a_{2k}, a_{2k+1}$ form an A.P. with common difference $a_k$. Now, $$ \begin{align*} a_{4k+3} &= a_{4k+2} + a_{2k+1}\\ &= a_{4k+1} + a_{2k+1} + a_{2k+1}\\ &= a_{4k} + a_{2k} + a_{2k+1} + a_{2k+1} = a_{4k} + 2a_{2k+1} + a_{2k}\\ &= a_{4k-1} + 2a_{2k+1} + 2a_{2k} = a_{4k-1} + 3a_{2k+1} + a_{2k-1}\\ \end{align*} $$

In other words, $a_{4k+3} + a_{2k+1} \equiv a_{4k-1} + a_{2k-1} \pmod{4}$, so we can induct upwards to get $a_{4k+3} \equiv -a_{2k+1} \pmod 4$. Using similar ideas, we can get the full recurrence: $$ \begin{cases} a_{4k+3} \equiv -a_{2k+1} \pmod 4\\ a_{4k+2} \equiv 2 \pmod 4\\ a_{4k+1} \equiv a_{2k+1} \pmod 4\\ a_{4k} \equiv a_k \pmod 4 \end{cases} $$
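A brute-force sanity check of the four congruences (and of the conclusion) is easy to run:

```python
# Compute a_1 = 1, a_k = a_{k-1} + a_{floor(k/2)}, then verify that no term
# is divisible by 4 and that the four congruences above hold.
N = 10000
a = [0, 1]  # 1-indexed; a[0] is unused
for k in range(2, N + 1):
    a.append(a[k - 1] + a[k // 2])

assert all(a[k] % 4 != 0 for k in range(1, N + 1))
for k in range(1, N // 4):
    assert (a[4 * k + 3] + a[2 * k + 1]) % 4 == 0   # a_{4k+3} = -a_{2k+1}
    assert a[4 * k + 2] % 4 == 2                     # a_{4k+2} = 2
    assert (a[4 * k + 1] - a[2 * k + 1]) % 4 == 0   # a_{4k+1} = a_{2k+1}
    assert (a[4 * k] - a[k]) % 4 == 0               # a_{4k}   = a_k
```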

Theme. I solved this by observing that $\{a_{2k+1}\}$ reduces mod 4 to the Thue-Morse sequence with 1 and 3 in place of 0 and 1; one then guesses that the sequence can be defined by such recurrences. I call these types of problems “automata problems”. I imagine there is a fast and slick solution out there, but I couldn’t figure it out.

Remark. This sequence is in the OEIS. The neatest combinatorial interpretation I found there is that $a_n$ is half the number of ways to partition $2n$ into powers of 2. It’s conjectured that for every $m$ not divisible by 4, infinitely many terms are divisible by $m$.


(Mathematics Education, 1.6)

  • (a) A polynomial $P(X)$ is given. For any $X > 0$: $P(X) > 0$. Prove that $P = Q/T$, where $Q$ and $T$ are polynomials with non-negative coefficients.
  • (b) Let $P$ be a quadratic polynomial, and $\alpha$ the argument of its complex root. Then the degree of $Q$ is not less than $\pi/\alpha$.

Proof. (a) First, we can reduce to the case where $P$ is quadratic, by factoring $P$ into real linear and quadratic factors. Since $P$ has no positive roots, neither does any factor, so after normalizing signs each factor is also positive on the positive reals.

Next, by rescaling $X$, we may assume that the quadratic is of the form $P(x) = x^2 - ax + 1$. If $a \le 0$, we’re done, and because $P(1) > 0$ we must have $a < 2$. Note that $$x^2 - ax + 1 = \frac{(x^2+1)^2 - (ax)^2}{x^2 + ax+1} = \frac{x^4 - (a^2 - 2)x^2 + 1}{x^2 + ax+1},$$ so effectively we’ve replaced $x\mapsto x^2$ and $a\mapsto a^2 - 2$.

We are hoping to get to a positive $a$, so at this point we could study the dynamics. However, one could note that by substituting $a=2\cos \alpha$, we have $a^2 -2 = 2\cos (2\alpha)$, so this is just doubling the angle. Starting with $\alpha\in (0,\pi/2)$, by repeated doubling we eventually end up in the interval $[\pi/2, \pi)$, which gives us $2\cos \alpha \le 0$ as desired.
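This doubling argument can be turned into an explicit construction of $T$ (a sketch of my own, following the step above — the helper names are mine): starting from $P(x) = x^2 - ax + 1$ with $a = 2\cos\alpha$, keep multiplying by $x^{2w} + a_j x^w + 1$ while $a_j > 0$; each such cofactor has positive coefficients, and each step doubles the angle.

```python
import math

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def build_T(a):
    """Cofactor T for P(x) = x^2 - a x + 1, 0 < a < 2, via angle doubling."""
    T, w = [1.0], 1
    while a > 0:
        cof = [0.0] * (2 * w + 1)
        cof[0], cof[w], cof[2 * w] = 1.0, a, 1.0   # x^(2w) + a x^w + 1
        T = poly_mul(T, cof)
        a, w = a * a - 2, 2 * w                     # a -> a^2 - 2 doubles alpha
    return T

alpha = math.pi / 7
a = 2 * math.cos(alpha)
T = build_T(a)
Q = poly_mul([1.0, -a, 1.0], T)   # Q = P * T
assert min(Q) > -1e-9             # all coefficients of Q are non-negative
```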

(b) We sketch an easier but looser bound: start with $P(x) = x^2 - (2\cos \alpha) x + 1$ with $\alpha \in(0, \pi/2)$. The two roots are $e^{\pm i\alpha}$, so $$Q(e^{i\alpha}) + Q(e^{-i\alpha}) = 2\sum_{k=0}^n q_k \cos(k\alpha) = 0$$ with all $q_k \ge 0$. Thus we need $\cos(k\alpha) < 0$ for some $k \le n$, which forces $n > \frac{\pi}{2\alpha}$.

The trick is to instead note that $\mathrm{Im}(e^{ik\alpha}) = \sin(k\alpha) > 0$ for $0 < k < \pi/\alpha$, so $\mathrm{Im}\, Q(e^{i\alpha}) > 0$ whenever the coefficients of $Q$ are non-negative (not all zero) and the degree is less than $\pi/\alpha$, contradicting $Q(e^{i\alpha}) = 0$.

Themes. Complex numbers, trigonometric relations. It’s pretty common that the trick you end up using is some 1-dimensional projection, and there are at least two obvious choices (so remember to try them both!). A problem I remember painfully is:

(China 2015/1) Let $z_1,z_2,...,z_n$ be complex numbers satisfying $|z_i - 1| \leq r$ for some $r$ in $(0,1)$. Show that

$$ \left | \sum_{i=1}^n z_i \right | \cdot \left | \sum_{i=1}^n \frac{1}{z_i} \right | \geq n^2(1-r^2).$$
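A quick randomized sanity check of the statement (illustration only, with sample values of $n$ and $r$ chosen by me):

```python
import random

# Sample random z_i with |z_i - 1| <= r and test the claimed inequality.
random.seed(0)
n, r = 10, 0.7
for _ in range(100):
    zs = []
    while len(zs) < n:
        w = complex(random.uniform(-r, r), random.uniform(-r, r))
        if abs(w) <= r:
            zs.append(1 + w)          # guarantees |z_i - 1| <= r
    lhs = abs(sum(zs)) * abs(sum(1 / z for z in zs))
    assert lhs >= n * n * (1 - r * r) - 1e-9
```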

Remark. It turns out that USA TSTST 2017/3 essentially asks for the minimum degree of $Q$ in (b). There is also a 1997 IMO Longlist problem stating that $(x+1)^n P(x)$ has non-negative coefficients for large enough $n$ (which gives an alternative approach to part (a)).

Alternate Solution. One considers a smoothing approach for part (b) as follows: split the polynomial $T$ into two parts: $$T(x) = \underbrace{1 + t_1x + ... + t_{k-1}x^{k-1}}_{=T_1(x)} + \underbrace{t_kx^k + ... + t_nx^n}_{=T_2(x)}$$

We now consider $T'(x) = T_1(x) + \lambda T_2(x)$, starting from $\lambda = 1$ and decreasing. Note that $q_\ell' = q_\ell$ for $\ell \le k-1$ and $q_\ell' = \lambda q_\ell$ for $\ell\ge k+2$, so all retain their “sign”. For the exceptions, we see that $$ \begin{cases} q_k' = q_k - (1-\lambda)t_k \\ \frac{q_{k+1}'}\lambda = q_{k+1} + (1/\lambda -1)t_{k-1} \end{cases} $$ Note that as we decrease $\lambda$, $q_k'$ will decrease and $q_{k+1}'$ will increase. So one of two things must happen: either $q_k'$ hits zero at some $\lambda \in (0,1]$ (a new coefficient of $Q$ vanishes), or $\lambda$ reaches $0$ (the tail $T_2$ is removed entirely).

By choosing $k$ at each stage to be the smallest index of a non-constant term, this ensures that we can zero out all but the coefficients of $\{1, x^{n-1}, x^n\}$ in $Q$.

At this point, this suggests that the resulting $T$ must match the Taylor expansion of $$ \frac{1}{1 - 2x\cos\varphi + x^2} = \frac{1}{(\alpha - \bar{\alpha})x} \cdot \left(\frac{1}{1 - \alpha x} - \frac{1}{1 - \bar{\alpha} x}\right), $$ where $\alpha = e^{i\varphi}$, up to its maximal degree. But note that each coefficient is simply $$ t_k = \frac{1}{2i \sin \varphi} (\alpha^{k+1} - \bar{\alpha}^{k+1}) = \frac{\sin((k+1)\varphi)}{\sin \varphi}, $$ so the second-to-last coefficient of $Q$ is $$ q_{n-1} = - \frac{\sin (n\varphi)}{\sin \varphi}, $$ and since this is non-negative, we must have $n\varphi \ge \pi$, so $n$ is at least $\pi/\varphi$.
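These coefficient formulas are easy to verify numerically (a sketch with sample values $\varphi = \pi/5$, $n = 5$ of my choosing; the $t_k$ satisfy the Chebyshev-type recurrence $t_k = 2\cos\varphi\, t_{k-1} - t_{k-2}$):

```python
import math

phi = math.pi / 5
a = 2 * math.cos(phi)
n = 5                                    # intended degree of Q

t = [1.0, a]                             # Taylor coefficients t_0, t_1 of 1/P
for k in range(2, n - 1):
    t.append(a * t[-1] - t[-2])          # t_k = 2cos(phi) t_{k-1} - t_{k-2}
for k, tk in enumerate(t):               # check t_k = sin((k+1)phi)/sin(phi)
    assert abs(tk - math.sin((k + 1) * phi) / math.sin(phi)) < 1e-9

# Q = (1 - a x + x^2) * T is supported on {1, x^{n-1}, x^n}, and its
# second-to-last coefficient is -sin(n phi)/sin(phi) (here 0, as n phi = pi).
Q = [0.0] * (n + 1)
for i, pi_ in enumerate([1.0, -a, 1.0]):
    for j, tj in enumerate(t):
        Q[i + j] += pi_ * tj
assert all(abs(q) < 1e-9 for q in Q[1:n - 1])
assert abs(Q[n - 1] + math.sin(n * phi) / math.sin(phi)) < 1e-9
```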

Themes. Smoothing - a looser idea is that we can always “jiggle” the coefficients of $T$ until a coefficient of $Q$ hits zero, then add it in as a new constraint. This gets us down to three non-zero coefficients. It also gives a very “exact” feel, in the sense that by some reduction-type operations we are always able to arrive at exactly the same polynomial.

Related. The so-called Ky Fan’s inequality:

If $m_1, m_2, \dots, m_n$ sum to 0, then $$ \sum_{i=1}^n m_im_{i+1} \le \cos \frac{2\pi}{n} \sum_{i=1}^n m_i^2, $$ where indices are cyclic ($m_{n+1} = m_1$).
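A quick randomized check of the inequality (illustration only; the cyclic indexing assumption is as stated above):

```python
import math, random

random.seed(1)
n = 7
c = math.cos(2 * math.pi / n)
for _ in range(1000):
    m = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(m) / n
    m = [x - mean for x in m]                    # force sum(m) = 0
    lhs = sum(m[i] * m[(i + 1) % n] for i in range(n))
    assert lhs <= c * sum(x * x for x in m) + 1e-9
```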

See also: Korea 2023/6


(Lemma for a Miklos-Schweitzer problem) If $f,g$ are additive functions on $\mathbb F_{2^m}$, show that the equation $xy+f(x) + g(y) = c$ has at most $2^{m+1} -1$ solutions.

$\newcommand{\Tr}{\operatorname{Tr}}$ Proof. We set up the Jacobi sum to count the number of solutions. Recall that $(-1)^{\Tr(z)}$ is an additive character, and $$ \sum_{t\in \mathbb F_{2^m}} (-1)^{\Tr(tx)} = \begin{cases} 2^m & \text{if } x = 0\\ 0 & \text{if } x \neq 0 \end{cases} $$ or more succinctly, $\frac{1}{2^m}\sum_{t\in \mathbb F_{2^m}} (-1)^{\Tr(tx)} = \mathbb I_{x=0}$.
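This orthogonality relation can be brute-forced in a small field (a sketch in $\mathbb F_{2^3}$, represented as bitmasks modulo the irreducible polynomial $x^3+x+1$; the helper names are mine):

```python
M, POLY = 3, 0b1011  # F_{2^3} as polynomials over F_2 mod x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication with reduction mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(z):
    """Tr(z) = z + z^2 + z^4, an element of F_2 = {0, 1}."""
    t, s = 0, z
    for _ in range(M):
        t ^= s
        s = gf_mul(s, s)
    return t

for x in range(1 << M):
    s = sum((-1) ** tr(gf_mul(t, x)) for t in range(1 << M))
    assert s == ((1 << M) if x == 0 else 0)
```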

Thus, $$ \mathbb I_{xy + f(x) + g(y) = c} = \frac{1}{2^m}\sum_{t\in \mathbb F_{2^m}} (-1)^{\Tr(t(xy+f(x)+g(y)-c))} $$ Hence, summing this over all $x,y$ we get $$ \#\{(x,y) : xy + f(x) + g(y) = c\} = \frac 1 {2^m} \sum_{t\in \mathbb F_{2^m}} (-1)^{\Tr(tc)} \sum_{x,y} (-1)^{\Tr(t(xy+f(x)+g(y)))} $$ We claim that

Lemma. $$ \left|\sum_{x,y} (-1)^{\Tr(t(xy+f(x)+g(y)))}\right| = \begin{cases} 2^{2m} & \text{if } t = 0\\ 2^m & \text{if } t \neq 0. \end{cases} $$

Thus, we can do an upper bound: $$ \frac 1 {2^m}\sum_{t\in \mathbb F_{2^m}} (-1)^{\Tr(tc)} \sum_{x,y} (-1)^{\Tr(t(xy+f(x)+g(y)))} \le \frac 1 {2^m} \left(2^{2m} + (2^m-1)\, 2^m\right) = 2^{m+1} - 1 $$ as desired.

To prove the lemma (for $t \neq 0$), first we realize the existence of $f_t$ such that $\Tr(tf(x)) = \Tr(t\cdot f_t\cdot x)$. This can be done by arguing that $\{\Tr(a\cdot x)\}_{a\in \mathbb F_{2^m}}$ is the full space of all linear maps $\mathbb F_{2^m} \to \mathbb F_2.$ Therefore, $$ \begin{align*} \sum_{x,y} (-1)^{\Tr(t(xy+f(x)+g(y)))} &= \sum_{x,y} (-1)^{\Tr(t(xy+f_t x + g_t y))}\\ &= (-1)^{\Tr(tf_tg_t)}\sum_{x,y} (-1)^{\Tr(t(x+g_t)(y+f_t))} \\ &= (-1)^{\Tr(tf_tg_t)}\sum_{x,y} (-1)^{\Tr(txy)} \end{align*} $$ At this point, we realize that $\sum_{y}(-1)^{\Tr(txy)}$ is 0 if $x\neq 0$ and $2^m$ otherwise, so we are done.
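The final bound itself can also be brute-forced in a small field (a check in $\mathbb F_{2^3}$, where $2^{m+1}-1 = 15$; the bitmask representation and helper names are mine, and the additive maps are random $\mathbb F_2$-linear maps):

```python
import random

M, POLY = 3, 0b1011  # F_{2^3} as polynomials over F_2 mod x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication with reduction mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def random_additive():
    """A random F_2-linear map, determined by images of the basis bits."""
    img = [random.randrange(1 << M) for _ in range(M)]
    def h(z):
        r = 0
        for i in range(M):
            if z >> i & 1:
                r ^= img[i]
        return r
    return h

random.seed(2)
for _ in range(20):
    f, g = random_additive(), random_additive()
    for c in range(1 << M):
        count = sum(1 for x in range(1 << M) for y in range(1 << M)
                    if gf_mul(x, y) ^ f(x) ^ g(y) == c)   # addition is XOR
        assert count <= 2 ** (M + 1) - 1
```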

Remark. The more “classical” way to show the size argument is as follows:

(Balanced lemma.) If a function $f: \mathbb F_2^k \to \mathbb F_2$ satisfies $$ \sum_{x\in \mathbb F_2^k} (-1)^{f(x)+f(x+h)} = 0 $$ for all $h\neq 0$ (i.e. $f$ is “balanced”), then $$ \left|\sum_{x\in \mathbb F_2^k} (-1)^{f(x) + L(x)}\right| = 2^{k/2} $$ for any linear function $L: \mathbb F_2^k \to \mathbb F_2$.

We show it by squaring the quantity and then moving from $(x,y)$ to $(x,h) = (x,x+y)$.

To apply it to the original problem, we have to use $k=2m$. Then it remains to check that $$ \Tr((x+u)(y+v)) + \Tr(xy) = \Tr(xv) + \Tr(yu) + \Tr(uv) $$ is “balanced”, but this is clear because for $(u,v) \neq 0$ the result is a non-constant linear function of $x$ and $y$.

Themes. Jacobi sums, discrete Fourier analysis.

Alternate Solution. (Shown to me by Thomas Swayze)

The key idea is to move “slices” of the sum to get a simpler equation with the same number of solutions. Write $$(m_y+f)(x) = g(y) + c$$ where $m_y$ is the multiplication map (by $y$). Notice that $m_y+f$ is an $\mathbb F_2$-linear map, so for each fixed $y$, either the equation has no solution in $x$, or its solution set is a coset of $\ker(m_y+f)$.

In either case, there are at most as many solutions as for the equation $xy + f(x) = 0$. Now, repeat the argument with $x$ and $y$ switched around to reduce the equation to $xy = 0$, which has exactly $2^{m+1}-1$ solutions.

Themes. This is part linear algebra (noting the subspace structure of the solutions) and part combinatorics (noting that we can move each section in parallel). The closest I got along these lines was trying to show that no two sections can share a pair of solutions.

Remark. The original Miklos-Schweitzer problem asks to show that the number of solutions to $f(x) = x^{q+1}$ in $\mathbb F_{q^2}$ for affine $f$ is at most $2q-1$, where $q$ is a power of 2. The reduction follows from a standard trick: write each element $x\in \mathbb F_{q^2}$ as $x = au+b$, where $a,b\in\mathbb F_q$ and $u$ is a fixed element of $\mathbb F_{q^2}\setminus\mathbb F_q$.