The last part of a 3-part series on Analytic NT. Part III is about Fourier stuff.
In this part we take a quick tour through Fourier things (and why they appear here) as well as a bunch of neat looking facts (no proofs!).
Show that any odd prime $p$ has a quadratic non-residue of size less than $\sqrt{p}+1$. (That is, there exists $0 < a < \sqrt p + 1$ such that $a$ is not a square mod $p$.)
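This is easy to sanity-check by brute force. Here is a quick script (names and ranges are my own) that finds the least non-residue via Euler's criterion and verifies the bound for all odd primes up to $10^4$:

```python
# Brute-force sanity check: the least quadratic non-residue mod an odd
# prime p is < sqrt(p) + 1 (the bound the classical pigeonhole argument gives).
# Euler's criterion: a is a QR mod p  iff  a^((p-1)/2) ≡ 1 (mod p).
import math

def least_nonresidue(p):
    """Smallest a >= 2 that is not a square mod the odd prime p."""
    for a in range(2, p):
        if pow(a, (p - 1) // 2, p) != 1:
            return a

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

for p in primes_up_to(10_000):
    if p == 2:
        continue
    assert least_nonresidue(p) < math.sqrt(p) + 1, p
print("bound holds for all odd primes up to 10000")
```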
We will see an unbelievably strong version of this in the last section.
One of the cool things we learnt in this class was some useful heuristics: for example, suppose we want to know how fast $$\int_2^x \frac{1}{\sqrt{t\log t}}\,dt = O(?)$$ grows. The integral tends to infinity and has no elementary closed form, so how do we estimate its growth?
A way to “guess” the answer is to pretend $\log t$ is a constant (at $t=x$): $$\int_2^x \frac{1}{\sqrt{t\log t}}\,dt \overset ? = \frac{1}{\sqrt{\log x}}\int_2^x \frac{dt}{\sqrt t}= \frac{2\sqrt{x}}{\sqrt{\log x}}$$ and this is indeed right (by using something like L’Hopital’s rule).
Why does this work? Roughly speaking, $\log x$ is essentially a constant on $[cx,x]$, so if most of the mass of the integral is on that interval then $\log t\approx \log x$ is a good estimate overall.
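One can also just check this numerically; here is a throwaway Simpson's-rule computation (the step count and evaluation point are arbitrary choices of mine) comparing the integral against the guess at $x = 10^6$:

```python
# Compare I(x) = ∫_2^x dt / sqrt(t log t) against the heuristic guess
# 2*sqrt(x)/sqrt(log x), using composite Simpson's rule on a fine grid.
import math

def f(t):
    return 1.0 / math.sqrt(t * math.log(t))

def simpson(a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

x = 10 ** 6
integral = simpson(2.0, x, 200_000)
guess = 2 * math.sqrt(x) / math.sqrt(math.log(x))
ratio = integral / guess
print(f"integral = {integral:.1f}, guess = {guess:.1f}, ratio = {ratio:.3f}")
# ratio -> 1 as x -> infinity, with a lower-order O(1/log x) correction
assert 1.0 < ratio < 1.2
```

The ratio is a bit above $1$ because freezing $\log t$ at $t=x$ underestimates the integrand; the discrepancy is exactly the $O(1/\log x)$ correction one gets from integrating by parts.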
Another heuristic is that the primes have density $\frac{1}{\log n}$ around $n$, which means that we can (often) replace the sum $\sum_{n\le x}f(n)$ with the weighted sum $\sum_{p^k\le x}f(p)\log p$. For instance, we are able to rigorously show the equivalence $$ \pi(x) \sim x/\log x \Leftrightarrow \psi(x) := \sum_{p^k\le x} \log p \sim x$$ which will be critical when proving the PNT.
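For concreteness, here is a small sieve computation (all names mine) comparing $\pi(x)$ with $x/\log x$ and $\psi(x)$ with $x$; note how much cleaner the $\psi$ normalization is:

```python
# Compare pi(x) vs x/log x, and psi(x) = sum over p^k <= x of log p vs x.
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def psi(x, primes):
    # each prime p contributes log p once for every power p^k <= x,
    # i.e. floor(log x / log p) times in total
    return sum(math.floor(math.log(x) / math.log(p)) * math.log(p)
               for p in primes if p <= x)

x = 10 ** 5
ps = primes_up_to(x)
pi_x = len(ps)
print(f"pi(x)  = {pi_x}, x/log x = {x / math.log(x):.1f}, "
      f"ratio = {pi_x / (x / math.log(x)):.3f}")
print(f"psi(x) = {psi(x, ps):.1f}, x = {x}, ratio = {psi(x, ps) / x:.4f}")
```

Already at $x = 10^5$ the ratio $\psi(x)/x$ is within a fraction of a percent of $1$, while $\pi(x)\big/\frac{x}{\log x}$ is still around $1.1$ (the convergence there is only at speed $1/\log x$).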
Another useful theorem crossing between holomorphy and convergence, this time for integrals:
Theorem. (Newman) Define the integral $$I(s) := \int_1^\infty \frac{F(x)}{x^s}\,dx$$ for a bounded integrable $F$.
If $I$ (which is holomorphic for $\sigma > 1$) extends holomorphically to an open set containing $\{\sigma \ge 1\}$, then $I(1)$ converges (as an improper integral) and coincides with the value of the holomorphic extension.
This sounds like the integral version of Landau, except this time we don’t require positivity! In exchange, we demand both the boundedness of $F$ and holomorphy along the entire boundary line $\{\sigma = 1\}$, not just at a single point.
(For those with analytic backgrounds: do you see a parallel between monotone convergence and bounded convergence theorems?)
This is the reason why we often like to write Dirichlet series as integrals of that form (aptly named Dirichlet integrals): $$\sum_{n\ge 1} \frac{a_n}{n^s} = s\int_1^\infty \frac{A(x)}{x^{s+1}}\,dx, \qquad A(x) := \sum_{n\le x}a_n.$$ Let’s understand how we get this improvement using the larger holomorphic boundary.
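The identity is just Abel summation ($A$ is a step function, so the integral can be computed exactly interval by interval). A quick numerical check with $a_n = 1$, i.e. the series for $\zeta(2)$ (the truncation point is arbitrary):

```python
# Check the Abel-summation identity with a_n = 1, s = 2:
#   sum_{n>=1} 1/n^s   vs   s * ∫_1^∞ floor(x) / x^{s+1} dx,
# computing the integral exactly on each interval [n, n+1).
import math

s = 2.0
N = 100_000
lhs = sum(1.0 / n ** s for n in range(1, N + 1))
# on [n, n+1): A(x) = n, and ∫_n^{n+1} n / x^{s+1} dx = (n/s) * (n^-s - (n+1)^-s)
rhs = s * sum((n / s) * (n ** -s - (n + 1) ** -s) for n in range(1, N + 1))
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}, zeta(2) = {math.pi ** 2 / 6:.6f}")
assert abs(lhs - rhs) < 1e-4  # both truncations agree up to tail terms
```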
Note: I gave up fleshing out the proof of PNT around this point. So the rest of this will be incredibly sketchy.
This is essentially the very cool fact that contour integrals of a holomorphic function take the same value even when you deform the contour (provided you do not pass through a pole along the way).
How this helps: we take this function over some contour around $z = 1$, do some estimates, and for Fourier-type reasons most of it cancels out. (I gave up figuring out the details.)
This is another piece of magic that I won’t remember (in fact I don’t remember it already), but applying the above with $F = \psi$ we can get $\psi(x) \sim x$. And of course this implies the long awaited $$\pi(x) \sim \frac{x}{\log x}$$
I think what’s left is just talking about the cool things.
In the previous part we managed to extend $\zeta$ holomorphically back to $\{\sigma > 0\}$ (except $s=1$ as usual), but the more general fact is that it can be extended to all of $\mathbb C$. By setting $Z(s) = \pi^{-s/2} \Gamma(s/2)\zeta(s)$, we have the symmetry $$Z(s) = Z(1-s)$$ where the only poles (of $Z$) are at $s=0$ and $s=1$.
This is done using very spooky Fourier analysis facts. I’ll give an example of one of them:
Theorem. (Jacobi) For $y>0$: $$\sum_{n\in \mathbb Z} e^{-\pi n^2/y} = \sqrt{y} \sum_{n\in \mathbb Z} e^{-\pi n^2 y} $$ (The standard proof is Poisson summation applied to the Gaussian, which is its own Fourier transform, though that feels more like mechanics than intuition.) If anyone has any intuition why something like this might be true, I’ll be super interested to hear it.
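I can't offer intuition, but the identity is at least easy to test numerically, since both sides converge extremely fast (the 50-term cutoff below is overkill):

```python
# Numerically verify  sum_{n in Z} e^{-pi n^2 / y} = sqrt(y) * sum_{n in Z} e^{-pi n^2 y}.
import math

def theta_sum(t, terms=50):
    # sum over all integers n of exp(-pi * n^2 * t), truncated
    # (the tail decays like exp(-pi * terms^2 * t), so 50 terms is plenty)
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

for y in (0.5, 1.0, 2.0, 5.0):
    lhs = theta_sum(1.0 / y)
    rhs = math.sqrt(y) * theta_sum(y)
    assert abs(lhs - rhs) < 1e-12, y
print("Jacobi transformation checked for several y")
```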
Also the functional equation tells us that there are a bunch of “trivial zeroes” at $s=-2,-4,\dots$ (coming from the poles of $\Gamma(s/2)$), and that all the other zeroes are in the strip $\{0<\sigma<1\}$. The Riemann Hypothesis says that all the zeroes in this strip actually satisfy $\sigma = 1/2$.
A spectacular result is the following explicit formula, which follows (informally!) by computing the same contour integral along two different paths:
$$ \begin{align*} {\sum_{p^k\le x}}^* \log p &= \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} - \frac{\zeta'(s)}{\zeta(s)} \frac{x^s}s \, ds\qquad \text{for any }c>1\\ &= \text{residues at }(1) + (\text{nontrivial $\zeta$-zeroes}) + (0) + (-2,-4,\cdots)\\ &= x-\sum_{\rho\text{ nontrivial $\zeta$-zero}} \frac{x^\rho}{\rho} - \log (2\pi) - \frac 1 2 \log \left(1 - \frac{1}{x^2}\right) \end{align*} $$ (Here the $*$ on the sum means the final term is counted with weight $\frac12$ when $x$ is itself a prime power.) It should be absolutely mindblowing that this is true (no error term!). A cool thing to note is that this suggests that we can figure out the growth rate of $\psi$ to a lot finer detail if we knew where all the zeroes of $\zeta$ were.
For instance, we know that in the band $\{T\le t\le T+1\}$ there are $O(\log T)$ zeroes, so the assumption that all the $\zeta$-zeroes lie in the band $\{1-\theta \le \sigma \le \theta\}$ tells us that $$\psi(x) = x + O_{\theta}(x^\theta(\log x)^2)$$ and now we see why the Riemann Hypothesis might be a really good thing to know.
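To watch the formula in action, one can truncate the sum over zeroes at the first few. The ordinates below are the well-known first five zeroes $\rho = \frac12 + i\gamma$ (hard-coded approximations); pairing each with its conjugate turns $\frac{x^\rho}{\rho} + \frac{x^{\bar\rho}}{\bar\rho}$ into $2\,\mathrm{Re}\!\left(\frac{x^\rho}{\rho}\right)$:

```python
# psi(x) vs the explicit formula truncated at the first five zeta zeroes.
import math

# ordinates of the first five nontrivial zeroes (approximate)
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi(x):
    # direct sum of log p over prime powers p^k <= x (slow trial division)
    total = 0.0
    for n in range(2, int(x) + 1):
        p = next(d for d in range(2, n + 1) if n % d == 0)  # least prime factor
        m = n
        while m % p == 0:
            m //= p
        if m == 1:  # n is a pure prime power p^k
            total += math.log(p)
    return total

def explicit_formula(x, gammas):
    # x - sum over zero pairs of 2 Re(x^rho / rho) - log(2 pi) - (1/2) log(1 - x^-2)
    zero_sum = sum(2 * (x ** complex(0.5, g) / complex(0.5, g)).real
                   for g in gammas)
    return x - zero_sum - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)

for x in (100.5, 1000.5):  # half-integers, so x is never a prime power
    print(f"x = {x}: psi = {psi(x):.2f}, "
          f"truncated formula = {explicit_formula(x, GAMMAS):.2f}")
```

With only five zero pairs the truncation is still crude, but it already wobbles around $\psi(x)$ the way the formula predicts; adding more zeroes sharpens the fit.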
Actually, it is also hypothesized that any Dirichlet $L$-function $L(s,\chi)$ has all of its nontrivial zeroes on the line $\sigma = 1/2$. This is called the Generalized Riemann Hypothesis (GRH).
This gives a very cool result (with some very real applications):
Theorem. Assuming GRH, no proper subgroup of $(\mathbb Z/ n\mathbb Z)^\times$ contains the set $$S_x = \{a: 1\le a \le x,\ (a,n)=1\}$$ where $x=2(\log n)^2$.
Here’s an attempt to show why this would be true. Pick a nontrivial character $\chi$ that is $1$ on this subgroup (and we may furthermore assume it is primitive, i.e. it doesn’t come from a smaller modulus than $n$). Then the plain sum up to $x$ matches the $\chi$-twisted sum up to $x$, so comparing the explicit formula with its $\chi$-twisted version gives
$$ \begin{align*} x - \sum_{\rho}\frac{x^\rho}{\rho} + O(\log x) &= {\sum_{m\le x}}^* \Lambda(m)\\ &={\sum_{m\le x}}^* \chi(m)\Lambda(m)\\ &= (\text{residue at 0}) - \sum_{\rho}\frac{x^{\rho_{\chi}}}{{\rho_\chi}} + O(\log x) \end{align*} $$
which doesn’t quite work but gets close.
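The theorem is a statement about all $n$, but for small $n$ you can watch it happen: the set $S_x$ with $x = 2(\log n)^2$ already generates the full unit group, so it cannot sit inside a proper subgroup (the script and the range $n < 500$ are my own choices):

```python
# For small n, check that S_x = {a <= x : gcd(a, n) = 1}, x = 2 (log n)^2,
# generates all of (Z/nZ)^*, hence fits in no proper subgroup.
import math

def generated_subgroup(gens, n):
    # closure of gens (plus the identity) under multiplication mod n
    group = set(gens) | {1}
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in list(group):
            gh = g * h % n
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

for n in range(3, 500):
    units = {a for a in range(1, n) if math.gcd(a, n) == 1}
    x = 2 * math.log(n) ** 2
    S = {a for a in range(1, int(x) + 1) if a < n and math.gcd(a, n) == 1}
    assert generated_subgroup(S, n) == units, n
print("S_x generates the full unit group for all 3 <= n < 500")
```

Of course this is unconditional brute force, not GRH; the point of the theorem is that the same tiny threshold $2(\log n)^2$ keeps working as $n$ grows.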