Polymath proposal: upper bounding the de Bruijn-Newman constant
What's new 2018-03-10
Building on the interest expressed in the comments to this previous post, I am now formally proposing to initiate a “Polymath project” on the topic of obtaining new upper bounds on the de Bruijn-Newman constant $\Lambda$. The purpose of this post is to describe the proposal and discuss the scope and parameters of the project.
De Bruijn introduced a family $H_t$ of entire functions for each real number $t$, defined by the formula

$\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du$

where $\Phi$ is the super-exponentially decaying function

$\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).$
As discussed in this previous post, the Riemann hypothesis is equivalent to the assertion that all the zeroes of $H_0$ are real.
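For readers who want to get a hands-on feel for these functions, here is a minimal (and completely non-rigorous) Python sketch of how one might evaluate $H_t(z)$ directly from the above formulas. The truncation parameters `n_terms` and `u_max` are ad hoc choices of mine with no error analysis behind them, and the sanity check at the end assumes the standard normalisation in which $H_0(z)$ is a multiple of $\xi(\tfrac{1}{2} + \tfrac{iz}{2})$, so that the first real zero of $H_0$ should sit near $z \approx 28.27$ (twice the height of the first zero of the zeta function):

```python
import numpy as np
from scipy.integrate import quad

def Phi(u, n_terms=20):
    """Truncation of the super-exponentially decaying series Phi(u)."""
    n = np.arange(1, n_terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, u_max=6.0):
    """Crude evaluation of H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(z u) du,
    with the integral truncated at u_max (Phi decays like exp(-pi e^{4u}),
    so the truncated tail is negligible).  Complex z is handled by
    integrating the real and imaginary parts of the integrand separately."""
    def integrand(u, part):
        val = np.exp(t * u**2) * Phi(u) * np.cos(z * u)
        return val.real if part == "re" else val.imag
    re, _ = quad(integrand, 0, u_max, args=("re",), limit=200)
    im, _ = quad(integrand, 0, u_max, args=("im",), limit=200)
    return complex(re, im)

# Sanity check: H_0 is real on the real axis and (in this normalisation)
# should change sign near z ~ 28.27, i.e. twice the height 14.13... of the
# first non-trivial zero of the zeta function.
print(H(0.0, 28.0).real, H(0.0, 28.5).real)
```

This is of course only a toy: any serious numerics for the project would need controlled error bounds (e.g. interval arithmetic) rather than black-box quadrature.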
De Bruijn and Newman showed that there existed a real constant $\Lambda$ – the de Bruijn-Newman constant – such that $H_t$ has all zeroes real whenever $t \geq \Lambda$, and at least one non-real zero when $t < \Lambda$. In particular, the Riemann hypothesis is equivalent to the upper bound $\Lambda \leq 0$. In the opposite direction, several lower bounds on $\Lambda$ have been obtained over the years, most recently in my paper with Brad Rodgers where we showed that $\Lambda \geq 0$, a conjecture of Newman.
As for upper bounds, de Bruijn showed back in 1950 that $\Lambda \leq 1/2$. The only progress since then has been the work of Ki, Kim and Lee in 2009, who improved this slightly to $\Lambda < 1/2$. The primary proposed aim of this Polymath project is to obtain further explicit improvements to the upper bound of $\Lambda$. Of course, if we could lower the upper bound all the way to zero, this would solve the Riemann hypothesis, but I do not view this as a realistic outcome of this project; rather, the upper bounds that one could plausibly obtain by known methods and numerics would be comparable in achievement to the various numerical verifications of the Riemann hypothesis that exist in the literature (e.g., that the first $N$ non-trivial zeroes of the zeta function lie on the critical line, for various large explicit values of $N$).
In addition to the primary goal, one could envisage some related secondary goals of the project, such as a better understanding (both analytic and numerical) of the functions $H_t$ (or of similar functions), and of the dynamics of the zeroes of these functions. Perhaps further potential goals could emerge in the discussion to this post.
I think there is a plausible plan of attack on this project that proceeds as follows. Firstly, there are results going back to the original work of de Bruijn that demonstrate that the zeroes of $H_t$ become attracted to the real line as $t$ increases; in particular, if one defines $\sigma_{max}(t)$ to be the supremum of the imaginary parts of all the zeroes of $H_t$, then it is known that this quantity obeys the differential inequality

$\displaystyle \frac{d}{dt} \sigma_{max}(t) \leq -\frac{1}{\sigma_{max}(t)} \ \ \ \ \ (1)$

whenever $\sigma_{max}(t)$ is positive; furthermore, once $\sigma_{max}(t) = 0$ for some $t$, then $\sigma_{max}(t') = 0$ for all $t' \geq t$. I hope to explain this in a future post (it is basically due to the attraction that a zero off the real axis has to its complex conjugate). As a corollary of this inequality, we have the upper bound

$\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2 \ \ \ \ \ (2)$
for any real number $t$. For instance, because all the non-trivial zeroes of the Riemann zeta function lie in the critical strip $\{ s: 0 \leq \mathrm{Re}(s) \leq 1 \}$, one has $\sigma_{max}(0) \leq 1$, which when inserted into (2) recovers the de Bruijn bound $\Lambda \leq 1/2$. The inequality (1) also gives $\sigma_{max}(t) \leq \sqrt{1-2t}$ for all $0 \leq t \leq 1/2$. If we could find some explicit $t$ between $0$ and $1/2$ where we can improve this upper bound on $\sigma_{max}(t)$ by an explicit constant, this would lead to a new upper bound on $\Lambda$.
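For the record, here is how (2) follows from (1) (a short comparison argument, spelled out): as long as $\sigma_{max}(s)$ is positive, (1) gives

$\displaystyle \frac{d}{ds} \frac{1}{2}\sigma_{max}(s)^2 = \sigma_{max}(s) \frac{d}{ds}\sigma_{max}(s) \leq -1,$

so integrating from $s = t$ onwards yields $\frac{1}{2}\sigma_{max}(s)^2 \leq \frac{1}{2}\sigma_{max}(t)^2 - (s-t)$. Hence $\sigma_{max}$ must vanish by time $s = t + \frac{1}{2}\sigma_{max}(t)^2$, and it stays zero afterwards, so all zeroes of $H_s$ are real from that time on; this is exactly the bound (2).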
Secondly, the work of Ki, Kim and Lee (based on an analysis of the various terms appearing in the expression for $H_t$) shows that for any positive $t$, all but finitely many of the zeroes of $H_t$ are real (in contrast with the $t=0$ situation, where it is still an open question as to whether the proportion of non-trivial zeroes of the zeta function on the critical line is asymptotically equal to $1$). As a key step in this analysis, Ki, Kim, and Lee show that for any $t > 0$ and $\varepsilon > 0$, there exists a $T > 0$ such that all the zeroes of $H_t$ with real part at least $T$ have imaginary part at most $\varepsilon$. Ki, Kim and Lee do not explicitly compute how $T$ depends on $t$ and $\varepsilon$, but it looks like this bound could be made effective.
If so, this suggests a possible strategy to get a new upper bound on $\Lambda$:
- Select a good choice of parameters $t, \varepsilon > 0$.
- By refining the Ki-Kim-Lee analysis, find an explicit $T$ such that all zeroes of $H_t$ with real part at least $T$ have imaginary part at most $\varepsilon$.
- By a numerical computation (e.g. using the argument principle), also verify that the zeroes of $H_t$ with real part between $0$ and $T$ have imaginary part at most $\varepsilon$ (a crude, non-rigorous sketch of such a computation is given after this list).
- Combining these facts, we obtain that $\sigma_{max}(t) \leq \varepsilon$; hopefully, one can insert this into (2) and get a new upper bound for $\Lambda$.
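To give a concrete (but again entirely non-rigorous) picture of the numerical step, here is a sketch of how one might count zeroes of $H_t$ in a rectangle via the argument principle, reusing the toy evaluator `H(t, z)` from the earlier snippet; the function `winding_number` and all the parameter values in the commented-out usage are purely illustrative choices of mine, and a genuine verification would need rigorous error control rather than naive sampling:

```python
import numpy as np

def winding_number(f, x0, x1, y0, y1, samples_per_side=400):
    """Estimate the number of zeroes of f inside the rectangle
    [x0, x1] x [y0, y1] by the argument principle: the total change of
    arg f(z) around the boundary, divided by 2*pi.  No error control;
    the sampling must be fine enough that consecutive phase jumps stay
    well below pi."""
    s = np.linspace(0.0, 1.0, samples_per_side)
    # traverse the boundary counterclockwise
    bottom = (x0 + s * (x1 - x0)) + 1j * y0
    right  = x1 + 1j * (y0 + s * (y1 - y0))
    top    = (x1 + s * (x0 - x1)) + 1j * y1
    left   = x0 + 1j * (y1 + s * (y0 - y1))
    boundary = np.concatenate([bottom, right, top, left])
    values = np.array([f(z) for z in boundary])
    phases = np.unwrap(np.angle(values))
    return round((phases[-1] - phases[0]) / (2 * np.pi))

# Hypothetical usage: count zeroes of H_t with real part in [0, T] and
# imaginary part in [eps, 1] (recall sigma_max(t) <= 1 for t >= 0, so all
# zeroes lie at height at most 1).  A count of zero would be non-rigorous
# numerical evidence for the third bullet point above.
# t, eps, T = 0.4, 0.2, 1000.0
# assert winding_number(lambda z: H(t, z), 0.0, T, eps, 1.0) == 0
```

If both the analytic and numerical steps went through for some pair $(t, \varepsilon)$, then inserting $\sigma_{max}(t) \leq \varepsilon$ into (2) would give $\Lambda \leq t + \frac{1}{2}\varepsilon^2$; for instance (with numbers invented purely for illustration), $t = 0.4$ and $\varepsilon = 0.2$ would yield $\Lambda \leq 0.42$, already an improvement on $1/2$.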
Of course, there may also be alternate strategies to upper bound $\Lambda$, and I would imagine this would also be a legitimate topic of discussion for this project.
One appealing thing about the above strategy for the purposes of a polymath project is that it naturally splits the project into several interacting but reasonably independent parts: an analytic part in which one tries to refine the Ki-Kim-Lee analysis (based on explicitly upper and lower bounding various terms in a certain series expansion for $H_t$ – I may detail this later in a subsequent post); a numerical part in which one controls the zeroes of $H_t$ in a certain finite range; and perhaps also a dynamical part where one sees if there is any way to improve the inequality (2). For instance, the numerical “team” might, over time, be able to produce zero-free regions for $H_t$ with an increasingly large value of $T$, while in parallel the analytic “team” might produce increasingly smaller values of $T$ beyond which they can control zeroes, and eventually the two bounds would meet up and we obtain a new bound on $\Lambda$. This factoring of the problem into smaller parts was also a feature of the successful Polymath8 project on bounded gaps between primes.
The project also resembles Polymath8 in another aspect: there is an obvious way to numerically measure progress, by seeing how the upper bound for $\Lambda$ decreases over time (and presumably there will also be another metric of progress regarding how well we can control $T$ in terms of $t$ and $\varepsilon$). However, in Polymath8 the final measure of progress (the upper bound on gaps between primes) was a natural number, and thus could not decrease indefinitely. Here, the bound will be a real number, and there is a possibility that one may end up having an infinite descent in which progress slows down over time, with refinements to increasingly less significant digits of the bound as the project progresses. Because of this, I think it makes sense to follow recent Polymath projects and place an expiration date for the project, for instance one year after the launch date, at which point we will agree to end the project and (if the project was successful enough) write up the results, unless there is consensus at that time to extend the project. (In retrospect, we should probably have imposed similar sunset dates on older Polymath projects, some of which have now been inactive for years, but that is perhaps a discussion for another time.)
Some Polymath projects have been known for a breakneck pace, making it hard for some participants to keep up. It’s hard to control these things, but I am envisaging a relatively leisurely project here, perhaps taking the full year mentioned above. It may well be that as the project matures we will largely be waiting for the results of lengthy numerical calculations to come in, for instance. Of course, as with previous projects, we would maintain some wiki pages (and possibly some other resources, such as a code repository) to keep track of progress and also to summarise what we have learned so far. For instance, as was done with some previous Polymath projects, we could begin with some “online reading seminars” where we go through some relevant piece of literature (most obviously the Ki-Kim-Lee paper, but there may be other resources that become relevant, e.g. one could imagine the literature on numerical verification of RH to be of value).
One could also imagine some incidental outcomes of this project, such as a more efficient way to numerically establish zero-free regions for various analytic functions of interest; indeed, the project may well end up focusing on some aspect of mathematics other than the specific questions posed here.
Anyway, I would be interested to hear in the comments below from others who might be interested in participating, or at least observing, this project, particularly if they have suggestions regarding the scope and direction of the project, and on organisational structure (e.g. whether one should start with reading seminars, or with some initial numerical exploration of the functions $H_t$, etc.). One could also begin some preliminary discussion of the actual mathematics of the project itself, though (in line with the leisurely pace I was hoping for) I expect that the main burst of mathematical activity would happen later, once the project is formally launched (with wiki page resources, blog posts dedicated to specific aspects of the project, etc.).