Polymath15, sixth thread: the test problem and beyond
This is the sixth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
The last two threads have been focused primarily on the test problem of showing that $H_t(x+iy) \neq 0$ whenever $t = y = 0.4$. We have been able to prove this for most regimes of $x$, or equivalently for most regimes of the natural number parameter $N := \lfloor \sqrt{\frac{x}{4\pi} + \frac{t}{16}} \rfloor$. In many of these regimes, a certain explicit approximation $A+B$ to $H_t$ was used, together with a non-zero normalising factor $B_0$; see the wiki for definitions. The explicit upper bound

$\displaystyle |H_t - A - B| \leq E_1 + E_2 + E_3$

has been proven for certain explicit expressions $E_1, E_2, E_3$ depending on $x$ (see http://michaelnielsen.org/polymath1/index.php?title=Effective_bounds_on_H_t_-_second_approach). In particular, if $x$ satisfies the inequality

$\displaystyle \left|\frac{A+B}{B_0}\right| > \frac{E_1}{|B_0|} + \frac{E_2}{|B_0|} + \frac{E_3}{|B_0|}$

then $H_t(x+iy)$ is non-vanishing thanks to the triangle inequality. (In principle we have an even more accurate approximation $A+B-C$ available, but it is looking like we will not need it for this test problem at least.)
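As a concrete illustration of how this criterion is applied pointwise, here is a minimal Python sketch (not the project's actual GitHub code; the function names and sample inputs are hypothetical) that computes the parameter $N$ from $x$ and $t$ and then runs the triangle-inequality test, given numerical values for $\left|\frac{A+B}{B_0}\right|$ and the three normalised error bounds.

    import math

    def N_param(x, t):
        """Natural number parameter N = floor(sqrt(x/(4*pi) + t/16))."""
        return math.floor(math.sqrt(x / (4 * math.pi) + t / 16))

    def nonvanishing_criterion(abs_ApB_over_B0, E1_over_B0, E2_over_B0, E3_over_B0):
        """Triangle-inequality test: if |A+B|/|B_0| strictly exceeds the sum of the
        normalised error bounds E_i/|B_0|, then H_t cannot vanish at this point."""
        return abs_ApB_over_B0 > E1_over_B0 + E2_over_B0 + E3_over_B0

    # Illustrative values only (not tabulated project data):
    x, t = 1.0e5, 0.4
    print("N =", N_param(x, t))
    print("non-vanishing certified:", nonvanishing_criterion(0.3, 1e-3, 1e-3, 1e-4))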
We have explicit upper bounds on the normalised errors $\frac{E_1}{|B_0|}$, $\frac{E_2}{|B_0|}$, and $\frac{E_3}{|B_0|}$; see this wiki page for details. They are tabulated over an initial range of $N$ here. For $N$ beyond that range, the upper bound for $\frac{E_3}{|B_0|}$ is monotone decreasing, and is in particular bounded by its value at the start of that range, while $\frac{E_1}{|B_0|}$ and $\frac{E_2}{|B_0|}$ are likewise known to be bounded by small explicit constants there (see here).
Meanwhile, the quantity $\left|\frac{A+B}{B_0}\right|$ can be lower bounded by the difference of two finite Dirichlet sums,

$\displaystyle \left|\sum_{n=1}^N \frac{b_n}{n^s}\right| - \left|\sum_{n=1}^N \frac{a_n}{n^s}\right|,$

for certain explicit coefficients $a_n, b_n$ and an explicit complex number $s$. Using the triangle inequality to lower bound this in turn by

$\displaystyle |b_1| - \sum_{n=2}^N \frac{|b_n|}{n^{\Re s}} - \sum_{n=1}^N \frac{|a_n|}{n^{\Re s}},$

we can obtain a lower bound that settles the test problem in the regime of sufficiently large $N$. One can get more efficient lower bounds by multiplying both Dirichlet series by a suitable Euler product mollifier; we have found mollifiers built out of the first few primes to be good choices to get a variety of further lower bounds depending only on $N$; see this table and this wiki page. Comparing these lower bounds against our tabulated upper bounds for the error terms, we can handle a further range of medium-sized $N$.
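To illustrate the shape of these bounds (with toy coefficients $b_n = 1$, i.e. a partial sum of the zeta function, rather than the project's actual coefficients), here is a short Python sketch: it computes the crude triangle-inequality lower bound for a finite Dirichlet sum, and then the improved bound obtained after multiplying through by a single Euler factor $1 - \frac{b_p}{p^s}$ and dividing out an upper bound for that factor. The helper names are hypothetical.

    def crude_lower_bound(coeffs, sigma):
        """Triangle-inequality bound |sum_n c_n/n^s| >= |c_1| - sum_{n>=2} |c_n|/n^sigma,
        where sigma = Re(s) and coeffs maps n to c_n."""
        return abs(coeffs[1]) - sum(abs(c) / n ** sigma for n, c in coeffs.items() if n >= 2)

    def mollify(coeffs, p, bp):
        """Dirichlet coefficients of (1 - bp/p^s) * sum_n coeffs[n]/n^s."""
        out = dict(coeffs)
        for n, c in coeffs.items():
            out[n * p] = out.get(n * p, 0) - bp * c
        return out

    def mollified_lower_bound(coeffs, sigma, p, bp):
        """Lower bound the mollified series, then divide by the upper bound 1 + |bp|/p^sigma
        for the Euler factor to recover a lower bound for the original series."""
        return crude_lower_bound(mollify(coeffs, p, bp), sigma) / (1 + abs(bp) / p ** sigma)

    # Toy example: partial sum of zeta at Re(s) = 2, truncated at N = 50.
    b = {n: 1.0 for n in range(1, 51)}
    print(crude_lower_bound(b, 2.0))              # about 0.375
    print(mollified_lower_bound(b, 2.0, 2, 1.0))  # noticeably better, about 0.62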
In a range of smaller $N$ below this, we have been able to obtain a suitable lower bound for $\left|\frac{A+B}{B_0}\right|$ (one that exceeds the upper bound for the normalised error terms) by numerically evaluating $\frac{A+B}{B_0}$ at a mesh of points for each choice of $N$, with the mesh spacing being adaptive and determined by the margin in this inequality and an upper bound for the derivative of $\frac{A+B}{B_0}$; the data is available here.
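Schematically, the adaptive mesh works as in the following Python sketch (the function being evaluated, the error bound, and the derivative bound are placeholders rather than the project's actual quantities):

    import cmath

    def verify_on_interval(f, x_lo, x_hi, error_bound, deriv_bound):
        """Certify that |f| > error_bound on [x_lo, x_hi] using an adaptive mesh:
        if |f(x)| - error_bound = margin > 0 and |f'| <= deriv_bound, then
        |f| > error_bound on [x, x + margin/deriv_bound), so we advance by that step."""
        x = x_lo
        while x <= x_hi:
            margin = abs(f(x)) - error_bound
            if margin <= 0:
                return False, x   # criterion fails at this mesh point
            x += margin / deriv_bound
        return True, x

    # Toy usage with a placeholder function (not (A+B)/B_0): |2 + e^{ix}| >= 1, |d/dx| <= 1.
    print(verify_on_interval(lambda x: 2 + cmath.exp(1j * x), 0.0, 100.0, 0.5, 1.0))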
This leaves the final range of the smallest values of $N$ (equivalently, of small $x$). Here we can numerically evaluate $H_t$ to high accuracy at a fine mesh (see the data here), but to fill in the gaps between mesh points we need good upper bounds on the derivative of $H_t$. It seems that we can get reasonable estimates by some contour shifting applied to the original definition of $H_t$ (see here). We are close to finishing off this remaining region and thus solving the test problem.
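The reason a derivative bound lets one fill in the mesh is the following elementary estimate: if the mesh points are spaced at most $\delta$ apart, $|\partial_x H_t(x+iy)| \leq D$ on the interval in question, and $|H_t| > D\delta/2$ at every mesh point, then any $x$ in the interval lies within $\delta/2$ of some mesh point $x_j$, and

$\displaystyle |H_t(x+iy)| \geq |H_t(x_j+iy)| - D|x - x_j| > D\delta/2 - D\delta/2 = 0,$

so $H_t$ is non-vanishing on the whole interval.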
Beyond this, we need to figure out how to show that $H_t(x+iy) \neq 0$ for $y > 0.4$ as well. General theory (de Bruijn's result confining the zeros of $H_t$ to the strip $|\mathrm{Im}(z)| \leq \sqrt{1-2t}$) lets one do this for $y \geq \sqrt{0.2} \approx 0.45$, leaving a bounded range of $y$ just above $0.4$. The analytic theory that handles the large and medium ranges of $x$ should also handle this region; for small $x$ presumably the argument principle will become relevant.
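As in previous threads, the point of these non-vanishing claims is the implication that if $H_t$ has no zeros with imaginary part at least $y$, then $\Lambda \leq t + \frac{1}{2} y^2$; with $t = y = 0.4$, this gives

$\displaystyle \Lambda \leq 0.4 + \tfrac{1}{2}(0.4)^2 = 0.4 + 0.08 = 0.48.$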
The full argument also needs to be streamlined and organised; right now it sprawls over many wiki pages and GitHub code files. (A very preliminary writeup attempt has begun here.) We should also see if there is much hope of extending the methods to push much beyond the bound of $\Lambda \leq 0.48$ that we would get from the above calculations. This would also be a good time to start discussing whether to move to the writing phase of the project, or whether there are still fruitful research directions for the project to explore.
Participants are also welcome to add any further summaries of the situation in the comments below.