Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation

What's new 2018-03-10

This is the second “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant {\Lambda}, continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We now have the following proposition (see this page for a proof sketch) that looks like it can give a numerically feasible approach to bound {\Lambda}:

Proposition 1 Suppose that one has parameters {t_0, T, \varepsilon > 0} obeying the following properties:

  • All the zeroes of {H_0(x+iy)=0} with {0 \leq x \leq T} are real.
  • There are no zeroes {H_t(x+iy)=0} with {0 \leq t \leq t_0} in the region {\{ x+iy: x \geq T; 1-2t \geq y^2 \geq \varepsilon^2 + (T-x)^2 \}}.
  • There are no zeroes {H_{t_0}(x+iy)=0} with {x > T} and {y \geq \varepsilon}.

Then one has {\Lambda \leq t_0 + \frac{1}{2} \varepsilon^2}.
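For instance (a purely hypothetical choice of parameters, intended only to illustrate the shape of the conclusion), taking {t_0 = \varepsilon = 0.05} would give

\displaystyle  \Lambda \leq 0.05 + \frac{1}{2} (0.05)^2 = 0.05125.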

The first hypothesis is already known for {T} up to about {10^{12}} (we should find out exactly what we can reach here). Preliminary calculations suggest that we can obtain the third item provided that {t_0, \varepsilon \gg \frac{1}{\log T}}. The second hypothesis requires good numerical calculation for {H_t}, to which we now turn.

The initial definition of {H_t} is given by the formula

\displaystyle  H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du

where

\displaystyle  \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u} ) \exp(-\pi n^2 e^{4u}).
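As a very rough illustration of how this definition can be evaluated directly (a minimal sketch, not the project's actual code: the truncation points {N} and {U} below are ad hoc and would need rigorous tail bounds, and the mpmath library is assumed for arbitrary-precision arithmetic):

    # Minimal sketch: direct evaluation of H_t(z) from the integral definition.
    # The truncations (N terms of Phi, u restricted to [0, U]) are ad hoc here;
    # a rigorous computation would need explicit bounds on the discarded tails.
    from mpmath import mp, exp, cos, pi, quad

    mp.dps = 30  # working precision in decimal digits

    def Phi(u, N=30):
        # Truncation of Phi(u) = sum_n (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u})
        return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
                   for n in range(1, N + 1))

    def H(t, z, U=6):
        # The integrand decays doubly exponentially in u, so a modest cutoff U
        # already captures essentially all of the integral.
        return quad(lambda u: exp(t*u**2) * Phi(u) * cos(z*u), [0, U])

    # The first zero of H_0 sits near x = 28.27 (twice the first zero ordinate
    # of the zeta function), so these two values should have opposite signs:
    print(H(0, 28.0), H(0, 28.5))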

This integral formula has proven to be numerically computable with acceptable accuracy up to about the first hundred zeroes of {H_t}, but the accuracy degrades after that, and so other exact or approximate formulae for {H_t} are needed. One possible exact formula that could be useful is

\displaystyle  H_t(z) = \frac{1}{2} (K_{t,\theta}(z) + \overline{K_{t,\theta}(\overline{z})})

where

\displaystyle  K_{t,\theta}(z) := \sum_{n=1}^\infty (2\pi^2 n^4 I_{t,\theta}(z-9i, \pi n^2) - 3\pi n^2 I_{t,\theta}(z-5i, \pi n^2))

and

\displaystyle  I_{t,\theta}(b,\beta) := \int_{i\theta}^{i\theta+\infty} \exp(tu^2 - \beta e^{4u} + ibu)\ du

and {-\pi/8 < \theta < \pi/8} can be chosen arbitrarily. We are still trying to see if this can be implemented numerically to give better accuracy than the previous formula.
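To see how this might be organised in code, here is a bare-bones sketch (again not the project's implementation: the contour is parametrised as {u = i\theta + v} and truncated at an ad hoc cutoff {V}, the sum over {n} is truncated at an ad hoc {N}, {\theta = 0.3} is just one admissible choice, and mpmath is assumed):

    # Minimal sketch of the exact formula above.  I_{t,theta} is evaluated by
    # parametrising the contour as u = i*theta + v, v in [0, V], with ad hoc
    # truncations V (contour) and N (sum over n).
    from mpmath import mp, mpc, exp, pi, quad, conj

    mp.dps = 30

    def I(t, theta, b, beta, V=8):
        # Needs -pi/8 < theta < pi/8, so that Re(e^{4u}) = e^{4v} cos(4 theta) > 0
        # and the integrand decays doubly exponentially along the contour.
        j = mpc(0, 1)
        return quad(lambda v: exp(t*(j*theta + v)**2
                                  - beta*exp(4*(j*theta + v))
                                  + j*b*(j*theta + v)), [0, V])

    def K(t, theta, z, N=30):
        return sum(2*pi**2*n**4 * I(t, theta, z - mpc(0, 9), pi*n**2)
                   - 3*pi*n**2 * I(t, theta, z - mpc(0, 5), pi*n**2)
                   for n in range(1, N + 1))

    def H_exact(t, z, theta=0.3):
        # H_t(z) = (K_{t,theta}(z) + conj(K_{t,theta}(conj(z)))) / 2
        return (K(t, theta, z) + conj(K(t, theta, conj(z)))) / 2

    # Example call (hypothetical parameters), to be compared against the
    # direct integral evaluation sketched earlier:
    # print(H_exact(mp.mpf('0.1'), mpc(100, 0.2)))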

It seems particularly promising to develop a generalisation of the Riemann-Siegel approximate functional equation for {H_0}. Preliminary computations suggest in particular that we have the approximation

\displaystyle  H_t(x+iy) \approx \frac{1}{4} (F_t(\frac{1+ix-y}{2}) + \overline{F_t(\frac{1+ix+y}{2})})

where

\displaystyle  F_t(s) := \pi^{-s/2} \Gamma(\frac{s+4}{2}) \sum_{n \leq \sqrt{\mathrm{Im}(s)/2\pi}} \frac{\exp( \frac{t}{16} \log^2 \frac{s+4}{2\pi n^2})}{n^s}.

Some very preliminary numerics suggest that this formula is reasonably accurate even for moderate values of {x}, though further numerical verification is needed. As a proof of concept, one could take this approximation as exact for the purposes of seeing what ranges of {T} one can feasibly compute with (and for extremely large values of {T}, we will presumably have to introduce some version of the Odlyzko-Schönhage algorithm). Of course, to obtain a rigorous result, we will eventually need a rigorous version of this formula with explicit error bounds. It may also be necessary to add more terms to the approximation to reduce the size of the error.
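To make the shape of this approximation concrete, here is a bare-bones sketch (illustration only, with no error control: mpmath is assumed, and the complex logarithm is taken to be the principal branch, which is an assumption on the intended reading of {\log^2}):

    # Minimal sketch of the conjectured Riemann-Siegel-type approximation above.
    from mpmath import mp, mpc, exp, log, gamma, pi, power, sqrt, floor, conj

    mp.dps = 30

    def F(t, s):
        # F_t(s) = pi^{-s/2} Gamma((s+4)/2)
        #          * sum_{n <= sqrt(Im(s)/(2 pi))} exp((t/16) log^2((s+4)/(2 pi n^2))) / n^s
        N = int(floor(sqrt(s.imag / (2*pi))))
        total = sum(exp((t/16) * log((s + 4)/(2*pi*n**2))**2) / power(n, s)
                    for n in range(1, N + 1))
        return power(pi, -s/2) * gamma((s + 4)/2) * total

    def H_approx(t, x, y=0):
        j = mpc(0, 1)
        return (F(t, (1 + j*x - y)/2) + conj(F(t, (1 + j*x + y)/2))) / 4

    # e.g. compare against the exact formulae above for moderate x:
    # print(H_approx(mp.mpf('0.1'), 1000, 0.2))

Note that for {y = 0} the two terms are complex conjugates of each other, so the approximation is real on the real axis, as {H_t} itself is.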

Sujit Nair has kindly summarised for me the current state of affairs with the numerics as follows:

  • We need a real milestone, and to work backward from it to set up intermediate goals. This will definitely help bring things into focus!
  • So far, we have some utilities to compute zeroes of {H_t} with a nonlinear solver which uses roots of {H_0} as an initial condition; a toy version of this workflow is sketched after this list. The solver is a wrapper around MINPACK’s implementation of Powell’s method. There is some room for optimization. For example, we aren’t yet providing the solver with the analytical Jacobian, which would speed up the computation and increase accuracy.
  • We have some results in the output folder, which contains the first 1000 roots of {H_t} for some small values of {t} ({t \in \{0.01, 0.1, 0.22\}}, etc.). They need some more organization and visualization.
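To make the workflow in the second bullet concrete, here is the promised toy version. It is only a sketch: the project's solver wraps MINPACK's Powell hybrid method, whereas this snippet substitutes mpmath's findroot (a secant-type solver) purely to keep the example self-contained, and the definitions of {\Phi} and {H_t} repeat the ad hoc truncations sketched earlier.

    # Toy version of the workflow: refine a zero of H_0 into a zero of H_t,
    # using the H_0 zero as the initial condition.  mpmath.findroot stands in
    # for the MINPACK/Powell-based solver used in the actual code.
    from mpmath import mp, exp, cos, pi, quad, findroot

    mp.dps = 25

    def Phi(u, N=30):
        return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
                   for n in range(1, N + 1))

    def H(t, x, U=6):
        # H_t is real on the real axis, so a one-dimensional solver suffices there.
        return quad(lambda u: exp(t*u**2) * Phi(u) * cos(x*u), [0, U])

    # Locate the first zero of H_0 (near x = 28.27, twice the first zero
    # ordinate of zeta), then use it as the initial guess for a zero of H_t.
    x0 = findroot(lambda x: H(0, x), 28.0)
    t = mp.mpf('0.1')
    xt = findroot(lambda x: H(t, x), x0)
    print(x0, xt)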

We have a decent initial start, but we have some way to go. Moving forward, here is my proposal for some areas of focus. We should expand and prioritize after some open discussion.

  1. Short term: Optimize the existing framework and aim to compute the first million zeros of {H_t} (for a reasonable range of {t}), together with the corresponding plots. With better engineering practice and discipline, I am confident we can get to the range of a few tens of millions. Some things which will help include parallelization, iterative approaches (using zeroes of {H_t} to compute zeroes of {H_{t + \delta t}}), etc.
  2. Medium term: We need to explore better ways to represent the zeros and to compute them. An analogy is the computation of the zeroes of the Riemann zeta function up to height {T}: these are located by counting sign changes of {Z(t)} (page 119 of Edwards), exploiting the {\sqrt{T}} speed-up that the Riemann-Siegel formula gives over Euler-Maclaurin; a toy version of the sign-change idea is sketched after this list. For larger values of the zero index {j}, I am not sure the root-solver-based approach is going to work for understanding the gaps between zeroes.
  3. Long term: We also need a better understanding of the errors involved in the computation (truncation, hardware/software, etc.).
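As a toy illustration of the sign-change idea in the medium-term item (a sketch only: it applies the scan to the conjectural Riemann-Siegel-type approximation from earlier rather than to {Z(t)}, uses an arbitrary grid, and attempts neither the {\sqrt{T}}-type speed-ups nor any rigour; the definition of {F_t} repeats the earlier sketch):

    # Count sign changes of the (real-valued on the real axis) approximation to
    # H_t on a grid, as a crude way of locating real zeros.
    from mpmath import mp, mpc, exp, log, gamma, pi, power, sqrt, floor

    mp.dps = 20

    def F(t, s):
        N = int(floor(sqrt(s.imag / (2*pi))))
        total = sum(exp((t/16) * log((s + 4)/(2*pi*n**2))**2) / power(n, s)
                    for n in range(1, N + 1))
        return power(pi, -s/2) * gamma((s + 4)/2) * total

    def H_approx_real(t, x):
        # For real x the two terms of the approximation are conjugates,
        # so H_t(x) is approximated by Re F_t((1+ix)/2) / 2.
        return F(t, mpc(1, x)/2).real / 2

    def count_sign_changes(t, x_min, x_max, steps):
        xs = [x_min + (x_max - x_min)*k/steps for k in range(steps + 1)]
        vals = [H_approx_real(t, x) for x in xs]
        return sum(1 for a, b in zip(vals, vals[1:]) if a*b < 0)

    # e.g. count_sign_changes(0.1, 1000, 1100, 500) gives a lower bound on the
    # number of real zeros of the approximation in [1000, 1100] (a coarse grid
    # will miss close pairs of zeros).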