Polymath15, third thread: computing and approximating H_t
What's new 2018-03-10
This is the third “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant , continuing this previous thread. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
We are making progress on the following test problem: can one show that whenever , , and ? This would imply that
which would be the first quantitative improvement over the de Bruijn bound of (or the Ki-Kim-Lee refinement of ). Of course we can try to lower the two parameters of later on in the project, but this seems as good a place to start as any. One could also potentially try to use finer analysis of dynamics of zeroes to improve the bound further, but this seems to be a less urgent task.
Probably the hardest case is , as there is a good chance that one can then recover the case by a suitable use of the argument principle. Here we appear to have a workable Riemann-Siegel type formula that gives a tractable approximation for . To describe this formula, first note that in the case we have
and the Riemann-Siegel formula gives
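For orientation, the classical (t = 0) Riemann-Siegel formula for the Hardy Z-function can be sketched numerically. The cutoff N = floor(sqrt(t/2π)) and the leading terms of the theta-function expansion below are the standard textbook choices, not the specific contour or cutoff used in this project, and the O(t^{-1/4}) remainder term is dropped entirely, so this is only a crude illustrative sketch:

```python
import math

def rs_cutoff(t):
    # Standard cutoff N = floor(sqrt(t / (2*pi))) of the Riemann-Siegel main sum.
    return int(math.sqrt(t / (2 * math.pi)))

def rs_theta(t):
    # Leading terms of the asymptotic expansion of the Riemann-Siegel theta
    # function: theta(t) ~ (t/2) log(t/(2 pi)) - t/2 - pi/8 + 1/(48 t) + ...
    return 0.5 * t * math.log(t / (2 * math.pi)) - 0.5 * t - math.pi / 8 + 1 / (48 * t)

def z_main_sum(t):
    # Main sum of the classical Riemann-Siegel formula for Z(t):
    #   Z(t) ~ 2 * sum_{n=1}^{N} cos(theta(t) - t log n) / sqrt(n),
    # with the remainder term omitted entirely.
    return 2.0 * sum(math.cos(rs_theta(t) - t * math.log(n)) / math.sqrt(n)
                     for n in range((1), rs_cutoff(t) + 1))
```

The point of the formula is that only about sqrt(t/2π) terms are needed, rather than the ~t terms of a naive Dirichlet series truncation.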
for any natural numbers , where is a contour from to that winds once anticlockwise around the zeroes of but does not wind around any other zeroes. A good choice of to use here is
In this case, a classical steepest descent computation (see wiki) yields the approximation
where
Thus we have
where
with and given by (1).
Heuristically, we have derived (see wiki) the more general approximation
for (and in particular for ), where
In practice it seems that the term is negligible once the real part of is moderately large, so one also has the approximation
For large , and for fixed , e.g. , the sums converge fairly quickly (in fact the situation seems to be significantly better here than the much more intensively studied case), and we expect the first term
of the series to dominate. Indeed, analytically we know that (or ) as (holding fixed), and it should also be provable that as well. Numerically with , it seems in fact that (or ) stay within a distance of about of once is moderately large (e.g. ). This raises the hope that one can solve the toy problem of showing for by numerically controlling for small (e.g. ), numerically controlling and analytically bounding the error for medium (e.g. ), and analytically bounding both and for large (e.g. ). (These numbers and are arbitrarily chosen here and may end up being optimised to something else as the computations become clearer.)
Thus, we now have four largely independent tasks (for suitable ranges of “small”, “medium”, and “large” ):
- Numerically computing for small (with enough accuracy to verify that there are no zeroes)
- Numerically computing for medium (with enough accuracy to keep it away from zero)
- Analytically bounding for large (with enough accuracy to keep it away from zero); and
- Analytically bounding for medium and large (with a bound that is better than the bound away from zero in the previous two tasks).
Note that tasks 2 and 3 do not directly require any further understanding of the function .
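Tasks 1 and 2 amount to evaluating a function at many grid points and checking that its modulus stays away from zero. A minimal sketch of that workflow, with a hypothetical stand-in function f in place of the actual expansion (which is not reproduced here):

```python
import cmath

def min_abs_on_grid(f, lo, hi, step):
    # Evaluate |f| at the grid points lo, lo+step, ... and return the
    # smallest value seen.  A rigorous zero-free verification would also
    # need a derivative (Lipschitz) bound on f to control its behaviour
    # between consecutive grid points.
    n = int((hi - lo) / step) + 1
    return min(abs(f(lo + k * step)) for k in range(n))

def f(x):
    # Hypothetical stand-in for the approximation being verified; its
    # modulus satisfies |2 + e^{ix}| >= 1 for all real x by the triangle
    # inequality.
    return 2.0 + cmath.exp(1j * x)

m = min_abs_on_grid(f, 0.0, 100.0, 0.01)
```

The grid spacing, the Lipschitz bound, and the target lower bound would all need to be chosen consistently with the analytic error estimates from tasks 3 and 4.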
Below we will give a progress report on the numeric and analytic sides of these tasks.
— 1. Numerics report (contributed by Sujit Nair) —
There has been some progress on the code side, but not at the pace I was hoping for. Here are a few things that happened (or rather, mistakes that were caught and fixed).
- We got rid of code which wasn’t being used. For example, @dhjpolymath computed based on an old version but only realized it after the fact.
- We implemented tests to catch human/numerical bugs before a computation starts. Again, we lost some numerical cycles but moving forward these can be avoided.
- David got set up on GitHub and he is able to compare his output (in C) with the Python code. That is helping a lot.
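The C-versus-Python comparison can be sketched generically: before any long run, evaluate both implementations on the same inputs and require agreement to within a tolerance. The two "implementations" below (the library exponential versus an independent Taylor series) are hypothetical stand-ins for the two code bases:

```python
import math

def exp_series(x, terms=30):
    # Independent implementation of e^x by Taylor series, standing in for
    # a second code base (e.g. a C port of the Python routines).
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

def cross_check(xs, tol=1e-9):
    # Compare the two implementations pointwise; any mismatch beyond tol
    # should halt a long computation before it starts.
    return all(abs(exp_series(x) - math.exp(x)) <= tol * max(1.0, math.exp(x))
               for x in xs)
```

Running such a pointwise comparison on a modest sample of inputs is cheap relative to the main computations, and catches versioning mistakes of the kind described above.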
Two areas that were worked on were:
- Computing and zeroes for around
- Computing quantities like , , , etc. with the goal of understanding the zero free regions.
Some observations for , , include:
- does seem to avoid the negative real axis
- (based on the oscillations and trends in the plots)
- seems to be settling around range.
See the figure below. The top plot is on the complex plane and the bottom plot is the absolute value. The code to play with this is here.
— 2. Analysis report —
The Riemann-Siegel formula and some manipulations (see wiki) give , where
where is a contour that goes from to staying a bounded distance away from the upper imaginary and right real axes, and is the complex conjugate of . (In each of these sums, it is the first term that should dominate, with the second one being about as large.) One can then evolve by the heat flow to obtain , where
Steepest descent heuristics then predict that , , and . For the purposes of this project, we will need effective error estimates here, with explicit error terms.
A start has been made towards this goal at this wiki page. First, there is an “effective Laplace method” lemma that gives effective bounds on integrals of the form if the real part of is either monotone with large derivative, or has a critical point and is decreasing on both sides of that critical point. In principle, all one has to do is manipulate expressions such as , , by changes of variables, contour shifting and integration by parts until they are of the form to which the above lemma can be profitably applied. As one may imagine, though, the computations are messy, particularly for the term. As a warm-up, I have begun by trying to estimate integrals of the form
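The shape of the Laplace-method step can be illustrated on a standard warm-up computation (not one of the project's actual integrals): at an interior critical point u0 of phi, Laplace's approximation gives the integral of e^{phi(u)} as roughly e^{phi(u0)} sqrt(2 pi / |phi''(u0)|). Applied to Gamma(n+1) this yields Stirling's formula, whose leading-order relative error is about 1/(12n):

```python
import math

def laplace_factorial(n):
    # Laplace / steepest-descent approximation to
    #   Gamma(n+1) = integral_0^infty e^{phi(t)} dt,  phi(t) = n*log(t) - t.
    # Critical point t0 = n, phi''(t0) = -1/n, so the method gives
    #   n! ~ e^{phi(n)} * sqrt(2*pi / |phi''(n)|) = sqrt(2*pi*n) * (n/e)**n.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# Leading-order relative error of the estimate at n = 20 is ~ 1/(12*20),
# i.e. a little under half a percent.
ratio = math.factorial(20) / laplace_factorial(20)
```

An "effective" version of this step replaces the asymptotic ~ with explicit upper and lower bounds valid in a stated range, which is what the wiki lemma is designed to provide.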
for smallish complex numbers , as these sorts of integrals appear in the form of . At the time of writing, there are effective bounds for the case, and I am currently working on extending them to the case, which should give enough control to approximate and . The most complicated task will be that of upper bounding , but it also looks doable eventually.