Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation
What's new 2018-03-10
This is the second “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
We now have the following proposition (see this page for a proof sketch) that looks like it can give a numerically feasible approach to bound $\Lambda$:
Proposition 1 Suppose that one has parameters obeying the following properties:
- All the zeroes of with are real.
- There are no zeroes with in the region .
- There are no zeroes with and .
Then one has .
The first hypothesis is already known for up to about (we should find out exactly what we can reach here). Preliminary calculations suggest that we can obtain the third item provided that . The second hypothesis requires good numerical calculation for , to which we now turn.
The initial definition of is given by the formula
where
This formula has proven numerically computable to acceptable accuracy up to about the first hundred zeroes of $H_t$, but the accuracy degrades after that, and so other exact or approximate formulae for $H_t$ are needed. One possible exact formula that could be useful is
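As a proof of concept for direct numerical evaluation, here is a minimal sketch in Python, assuming the standard heat-flow representation $H_t(x) = \int_0^\infty e^{tu^2} \Phi(u) \cos(xu)\, du$ with $\Phi(u) = \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) e^{-\pi n^2 e^{4u}}$; the truncation parameters `n_terms` and `cutoff` are illustrative choices, not tuned error-controlled values:

```python
import numpy as np
from scipy.integrate import quad

def Phi(u, n_terms=30):
    """Truncation of the super-exponentially decaying sum
    Phi(u) = sum_n (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u})."""
    n = np.arange(1, n_terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, x, cutoff=6.0):
    """Approximate H_t(x) = int_0^inf e^{t u^2} Phi(u) cos(x u) du
    by quadrature on the truncated range [0, cutoff]; the rapid decay of
    Phi makes the tail negligible."""
    val, _ = quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(x * u),
                  0.0, cutoff, limit=200)
    return val
```

Since $H_0(x) = \frac{1}{8}\xi(\frac{1}{2} + \frac{ix}{2})$, one sanity check is that $H_0$ changes sign near twice the ordinate of the first zeta zero, $x \approx 2 \times 14.1347$. This naive quadrature is exactly the kind of approach that degrades at larger $x$, as the cosine factor oscillates ever faster against a fixed-precision integrand.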
where
and
and can be chosen arbitrarily. We are still trying to see if this can be implemented numerically to give better accuracy than the previous formula.
It seems particularly promising to develop a generalisation of the Riemann-Siegel approximate functional equation for $H_t$. Preliminary computations suggest in particular that we have the approximation
where
Some very preliminary numerics suggest that this formula is reasonably accurate even for moderate values of , though further numerical verification is needed. As a proof of concept, one could take this approximation as exact for the purposes of seeing what ranges of one can feasibly compute with (for extremely large values of , we will presumably have to introduce some version of the Odlyzko-Schönhage algorithm). Of course, to obtain a rigorous result, we will eventually need a rigorous version of this formula with explicit error bounds. It may also be necessary to add more terms to the approximation to reduce the size of the error.
Sujit Nair has kindly summarised for me the current state of affairs with the numerics as follows:
—
- We need a real milestone and to work backward from it to set up intermediate goals. This will definitely help bring things into focus!
- So far, we have some utilities to compute zeroes of with a nonlinear solver, which uses roots of as an initial condition. The solver is a wrapper around MINPACK’s implementation of Powell’s method. There is some room for optimization; for example, we aren’t providing the solver with the analytical Jacobian, which would speed up the computation and increase accuracy.
- We have some results in the output folder, which contains the first 1000 roots of for some small values of , etc. These need some more organization and visualization.
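The solver setup described above (seed each nonlinear solve with a previously known root) might be sketched as follows. This is a hypothetical illustration, not the project's code: the cubic family `f` stands in for $H_t$, and `method='hybr'` is SciPy's interface to MINPACK's hybrid Powell routine. It also supplies the analytic Jacobian, the optimization noted as missing above:

```python
import numpy as np
from scipy.optimize import root

# Hypothetical stand-in for H_t: a one-parameter family f_t(x) = x^3 - x - t
# with a simple root at x = 1 when t = 0.
def f(x, t):
    return x**3 - x - t

def track_root(x0, t_values):
    """Follow one root as t varies, seeding each solve with the previous
    root (continuation).  method='hybr' wraps MINPACK's implementation of
    Powell's hybrid method; jac= supplies the analytic Jacobian."""
    xs, x = [], x0
    for t in t_values:
        sol = root(lambda v: f(v, t), x, method='hybr',
                   jac=lambda v: np.array([[3.0 * v[0]**2 - 1.0]]))
        x = sol.x[0]
        xs.append(x)
    return xs

roots = track_root(1.0, np.linspace(0.0, 0.5, 6))
```

The same loop structure applies when stepping the deformation parameter in small increments and reusing the zeroes at the previous step as initial conditions.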
We have a decent initial start, but we have some way to go. Moving forward, here is my proposal for some areas of focus; we should expand and prioritize after some open discussion.
- Short term Optimize the existing framework and aim to compute the first million zeros of (for a reasonable range of ), together with the corresponding plots. With better engineering practice and discipline, I am confident we can get into the tens-of-millions range. Some things that will help include parallelization, iterative approaches (using zeroes of to compute zeroes of ), etc.
- Medium term We need to explore better ways to represent the zeros and to compute them. An analogy is the computation of the Riemann zeroes up to height , which is done by counting the sign changes of (page 119 of Edwards) and by exploiting the speedup of the Riemann-Siegel formula over Euler-Maclaurin. For larger values of , I am not sure the root-solver-based approach will be enough to understand the gaps between zeroes.
- Long term We also need a better understanding of the errors involved in the computation — truncation, hardware/software, etc.
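As a sketch of the sign-change idea from the medium-term item: counting sign changes of a real function on a grid gives a certified lower bound on the number of zeros without running a root solver per zero. The helper below is hypothetical, uses cosine as a stand-in, and the grid size is an illustrative choice (in practice one must also argue that the grid is fine enough not to miss pairs of close zeroes):

```python
import numpy as np

def count_sign_changes(f, a, b, n):
    """Lower bound on the number of real zeros of f in [a, b]: each strict
    sign change between consecutive grid points certifies at least one
    zero in between, by the intermediate value theorem."""
    x = np.linspace(a, b, n)
    y = f(x)
    return int(np.count_nonzero(np.sign(y[:-1]) * np.sign(y[1:]) < 0))

# Stand-in with known answer: cos has exactly 10 zeros in [0, 10*pi].
count_sign_changes(np.cos, 0.0, 10 * np.pi, 2000)  # == 10 for this grid
```

If the counted lower bound matches an independently computed zero count (e.g. from an argument-principle count), all zeros in the range are accounted for and located between consecutive grid points.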