The De Bruijn-Newman constant is non-negative
Brad Rodgers and I have uploaded to the arXiv our paper “The De Bruijn-Newman constant is non-negative”. This paper affirms a conjecture of Newman regarding the extent to which the Riemann hypothesis, if true, is only “barely so”. To describe the conjecture, let us begin with the Riemann xi function

$$\xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\Big(\frac{s}{2}\Big) \zeta(s)$$
where $\Gamma(s) := \int_0^\infty e^{-x} x^{s-1}\ dx$ is the Gamma function and $\zeta(s) := \sum_{n=1}^\infty \frac{1}{n^s}$ is the Riemann zeta function. Initially, this function is only defined for $\mathrm{Re}(s) > 1$, but, as was already known to Riemann, we can manipulate it into a form that extends to the entire complex plane as follows. Firstly, in view of the standard identity $s \Gamma(s) = \Gamma(s+1)$, we can write

$$\frac{s(s-1)}{2} \Gamma\Big(\frac{s}{2}\Big) = 2 \Gamma\Big(\frac{s+4}{2}\Big) - 3 \Gamma\Big(\frac{s+2}{2}\Big)$$

and hence

$$\xi(s) = \sum_{n=1}^\infty \Big( 2 \pi^{-s/2} n^{-s} \Gamma\Big(\frac{s+4}{2}\Big) - 3 \pi^{-s/2} n^{-s} \Gamma\Big(\frac{s+2}{2}\Big) \Big).$$
By a rescaling, one may write

$$\pi^{-s/2} n^{-s} \Gamma\Big(\frac{s+4}{2}\Big) = \pi^2 n^4 \int_0^\infty e^{-\pi n^2 t} t^{\frac{s+4}{2}}\ \frac{dt}{t}$$

and similarly

$$\pi^{-s/2} n^{-s} \Gamma\Big(\frac{s+2}{2}\Big) = \pi n^2 \int_0^\infty e^{-\pi n^2 t} t^{\frac{s+2}{2}}\ \frac{dt}{t}$$

and thus (after applying Fubini’s theorem)

$$\xi(s) = \int_0^\infty \sum_{n=1}^\infty \big( 2 \pi^2 n^4 t^{\frac{s+4}{2}} - 3 \pi n^2 t^{\frac{s+2}{2}} \big) e^{-\pi n^2 t}\ \frac{dt}{t}.$$

We’ll make the change of variables $t = e^{4u}$ to obtain

$$\xi(s) = 4 \int_{-\infty}^\infty \sum_{n=1}^\infty \big( 2 \pi^2 n^4 e^{(2s+8)u} - 3 \pi n^2 e^{(2s+4)u} \big) \exp( -\pi n^2 e^{4u} )\ du.$$
If we introduce the mild renormalisation

$$H_0(z) := \frac{1}{8} \xi\Big( \frac{1}{2} + \frac{iz}{2} \Big)$$

of $\xi$, we then conclude (at least for $\mathrm{Im}(z) < -1$) that

$$H_0(z) = \frac{1}{2} \int_{-\infty}^\infty \Phi(u) e^{izu}\ du \qquad (1)$$

where $\Phi$ is the function

$$\Phi(u) := \sum_{n=1}^\infty \big( 2 \pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u} \big) \exp( - \pi n^2 e^{4u} ), \qquad (2)$$

which one can verify to be rapidly decreasing both as $u \to +\infty$ and as $u \to -\infty$, with the decrease as $u \to -\infty$ faster than any exponential. In particular $H_0$ extends holomorphically to the upper half plane.
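As a quick numerical sanity check on (2) and on this decay claim (purely illustrative, and not taken from the paper), one can evaluate the truncated series with mpmath; the truncation point $N$ and the working precision below are ad hoc choices, with high precision needed because the series has severe cancellation at negative $u$:

```python
# Illustration only: evaluate a truncated version of the series (2) for Phi(u)
# and observe its extremely rapid decay in both directions.
from mpmath import mp, mpf, exp, pi

mp.dps = 120   # high working precision: at negative u the series has massive cancellation
N = 250        # ad hoc truncation point for the sum over n

def Phi(u):
    u = mpf(u)
    return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
               for n in range(1, N + 1))

for u in [0, 0.25, 0.5, 1, -0.25, -0.5, -1]:
    print(u, mp.nstr(Phi(u), 8))
# Phi(0) is about 0.45, while Phi(1) and Phi(-1) are already smaller than 1e-69.
```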
If we normalize the Fourier transform $\hat f(\xi)$ of a (Schwartz) function $f(x)$ as $\hat f(\xi) := \int_{-\infty}^\infty f(x) e^{-2\pi i x \xi}\ dx$, it is well known that the Gaussian $x \mapsto e^{-\pi x^2}$ is its own Fourier transform. The creation operator $2\pi x - \frac{d}{dx}$ interacts with the Fourier transform by the identity

$$\widehat{\Big(2\pi x - \frac{d}{dx}\Big) f}\,(\xi) = -i \Big(2\pi \xi - \frac{d}{d\xi}\Big) \hat f(\xi).$$

Since $(-i)^4 = 1$, this implies that the function

$$x \mapsto \Big(2\pi x - \frac{d}{dx}\Big)^4 e^{-\pi x^2} = \big( 128 \pi^2 ( 2 \pi^2 x^4 - 3 \pi x^2 ) + 48 \pi^2 \big) e^{-\pi x^2}$$

is its own Fourier transform. (One can view the polynomial $128 \pi^2 ( 2 \pi^2 x^4 - 3 \pi x^2 ) + 48 \pi^2$ as a renormalised version of the fourth Hermite polynomial.) Taking a suitable linear combination of this with $x \mapsto e^{-\pi x^2}$, we conclude that

$$x \mapsto ( 2 \pi^2 x^4 - 3 \pi x^2 ) e^{-\pi x^2}$$

is also its own Fourier transform. Rescaling $x$ by $e^{2u}$ and then multiplying by $e^u$, we conclude that the Fourier transform of

$$x \mapsto ( 2 \pi^2 x^4 e^{9u} - 3 \pi x^2 e^{5u} ) \exp( -\pi x^2 e^{4u} )$$

is

$$x \mapsto ( 2 \pi^2 x^4 e^{-9u} - 3 \pi x^2 e^{-5u} ) \exp( -\pi x^2 e^{-4u} ),$$
and hence by the Poisson summation formula (using symmetry and vanishing at $n = 0$ to unfold the summation in (2) to the integers rather than the natural numbers) we obtain the functional equation

$$\Phi(-u) = \Phi(u),$$

which implies that $\Phi$ and $H_0$ are even functions (in particular, $H_0$ now extends to an entire function). From this symmetry we can also rewrite (1) as

$$H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du,$$

which now gives a convergent expression for the entire function $H_0(z)$ for all complex $z$. As $\Phi$ is even and real-valued on ${\bf R}$, $H_0$ is even and also obeys the functional equation $H_0(\overline{z}) = \overline{H_0(z)}$; the evenness is equivalent to the usual functional equation for the Riemann zeta function. The Riemann hypothesis is equivalent to the claim that all the zeroes of $H_0$ are real.
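To illustrate this identity numerically, here is a short sketch (again not from the paper) comparing the cosine-integral formula for $H_0(z)$ with the direct definition $\frac{1}{8}\xi(\frac{1}{2}+\frac{iz}{2})$ computed from the Gamma and zeta functions; the truncation of the series, the integration cutoff at $u=4$, and the precision are ad hoc choices.

```python
# Illustration only: compare H_0(z) computed from the integral of Phi(u) cos(zu)
# with (1/8) xi(1/2 + iz/2) computed directly from Gamma and zeta.
from mpmath import mp, mpf, mpc, exp, pi, cos, gamma, zeta, quad

mp.dps = 40
N = 60   # truncation of the series for Phi; ample for 0 <= u <= 4

def Phi(u):
    return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
               for n in range(1, N + 1))

def H0_integral(z):
    # H_0(z) = int_0^infty Phi(u) cos(zu) du; Phi is negligible beyond u ~ 4
    return quad(lambda u: Phi(u)*cos(z*u), [0, 4])

def xi(s):
    return s*(s-1)/2 * pi**(-s/2) * gamma(s/2) * zeta(s)

for z in [mpf(2), mpf(10), mpc(5, 1)]:
    lhs = H0_integral(z)
    rhs = xi(mpf(1)/2 + 1j*z/2) / 8
    print(z, mp.nstr(lhs, 10), mp.nstr(rhs, 10))
# The two columns agree to many digits.
```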
De Bruijn introduced the family $H_t: {\bf C} \to {\bf C}$ of deformations of $H_0$, defined for all $t \in {\bf R}$ and $z \in {\bf C}$ by the formula

$$H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du.$$

From a PDE perspective, one can view $H_t$ as the evolution of $H_0$ under the backwards heat equation $\partial_t H_t(z) = -\partial_{zz} H_t(z)$. As with $H_0$, the $H_t$ are all even entire functions that obey the functional equation $H_t(\overline{z}) = \overline{H_t(z)}$, and one can ask an analogue of the Riemann hypothesis for each such $H_t$, namely whether all the zeroes of $H_t$ are real. De Bruijn showed that these hypotheses were monotone in $t$: if $H_t$ had all real zeroes for some $t$, then $H_{t'}$ would also have all zeroes real for any $t' \geq t$. Newman later sharpened this claim by showing the existence of a finite number $\Lambda$, now known as the de Bruijn-Newman constant, with the property that $H_t$ had all zeroes real if and only if $t \geq \Lambda$. Thus, the Riemann hypothesis is equivalent to the inequality $\Lambda \leq 0$. Newman then conjectured the complementary bound $\Lambda \geq 0$; in his words, this conjecture asserted that if the Riemann hypothesis is true, then it is only “barely so”, in that the reality of all the zeroes is destroyed by applying heat flow for even an arbitrarily small amount of time. Over time, a significant amount of evidence was established in favour of this conjecture; most recently, in 2011, Saouter, Gourdon, and Demichel showed that $\Lambda > -1.15 \times 10^{-11}$.
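To see the backwards heat equation claim concretely, here is a small finite-difference check (illustrative only; the truncation of $\Phi$, the integration cutoff, and the step size $h$ are arbitrary choices):

```python
# Illustration only: check  d/dt H_t(z) = - d^2/dz^2 H_t(z)  by central finite differences,
# computing H_t from its integral definition.
from mpmath import mp, mpf, exp, pi, cos, quad

mp.dps = 30
N = 60   # truncation of the series for Phi

def Phi(u):
    return sum((2*pi**2*n**4*exp(9*u) - 3*pi*n**2*exp(5*u)) * exp(-pi*n**2*exp(4*u))
               for n in range(1, N + 1))

def H(t, z):
    # H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(zu) du;  Phi is negligible beyond u ~ 4
    return quad(lambda u: exp(t*u*u) * Phi(u) * cos(z*u), [0, 4])

t, z, h = mpf('0.1'), mpf('3'), mpf('1e-4')
dH_dt   = (H(t + h, z) - H(t - h, z)) / (2*h)
d2H_dz2 = (H(t, z + h) - 2*H(t, z) + H(t, z - h)) / h**2
print(mp.nstr(dH_dt, 12), mp.nstr(-d2H_dz2, 12))   # agree up to finite-difference error
```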
In this paper we finish off the proof of Newman’s conjecture, that is we show that $\Lambda \geq 0$. The proof is by contradiction, assuming that $\Lambda < 0$ (which, among other things, implies the truth of the Riemann hypothesis), and using the properties of backwards heat evolution to reach a contradiction.
Very roughly, the argument proceeds as follows. As observed by Csordas, Smith, and Varga (and also discussed in this previous blog post), the backwards heat evolution of the $H_t$ introduces a nice ODE dynamics on the zeroes $x_j(t)$ of $H_t$, namely that they solve the ODE

$$\partial_t x_j(t) = 2 \sum_{k \neq j} \frac{1}{x_j(t) - x_k(t)} \qquad (3)$$

for all $j$ (one has to interpret the sum in a principal value sense as it is not absolutely convergent, but let us ignore this technicality for the current discussion). Intuitively, this ODE is asserting that the zeroes repel each other, somewhat like positively charged particles (but note that the dynamics is first-order, as opposed to the second-order laws of Newtonian mechanics). Formally, a steady state (or equilibrium) of this dynamics is reached when the $x_j(t)$ are arranged in an arithmetic progression. (Note for instance that for any positive $y$, the functions $z \mapsto e^{ty^2} \cos(yz)$ obey the same backwards heat equation as the $H_t$, and their zeroes are on a fixed arithmetic progression $\{ \frac{\pi (j + \frac{1}{2})}{y}: j \in {\bf Z} \}$.) The strategy is to then show that the dynamics from time $\Lambda$ to time $0$ creates a convergence to local equilibrium, in which the zeroes locally resemble an arithmetic progression at time $0$. This will be in contradiction with known results on pair correlation of zeroes (or on related statistics, such as the fluctuations on gaps between zeroes), such as the results of Montgomery (actually for technical reasons it is slightly more convenient for us to use related results of Conrey, Ghosh, Goldston, Gonek, and Heath-Brown). Another way of thinking about this is that even very slight deviations from local equilibrium (such as a small number of gaps that are slightly smaller than the average spacing) will almost immediately lead to zeroes colliding with each other and leaving the real line as one evolves backwards in time (i.e., under the forward heat flow). This is a refinement of the strategy used in previous lower bounds on $\Lambda$, in which “Lehmer pairs” (pairs of zeroes of the zeta function that were unusually close to each other) were used to limit the extent to which the evolution continued backwards in time while keeping all zeroes real.
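The repulsion in (3) can be seen in a toy finite system. The sketch below (purely illustrative, with arbitrary initial data and step size, and not taken from the paper) integrates (3) for six particles, one pair of which starts with an unusually narrow gap, and tracks the smallest gap as $t$ increases:

```python
# Toy simulation of the zero dynamics  dx_j/dt = 2 * sum_{k != j} 1/(x_j - x_k)
# for a few particles, one pair of which starts with an unusually small gap.
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 0.05, 1.0, 2.0])   # arbitrary initial data; gap 0.05 in the middle
dt, steps = 1e-5, 5000

def velocity(x):
    diff = x[:, None] - x[None, :]                 # matrix of x_j - x_k
    np.fill_diagonal(diff, np.inf)                 # exclude the k = j terms
    return 2.0 * np.sum(1.0 / diff, axis=1)

for step in range(steps + 1):
    if step % 1000 == 0:
        print(f"t = {step*dt:.3f}   min gap = {np.min(np.diff(np.sort(x))):.4f}")
    x = x + dt * velocity(x)                       # forward Euler step in the "backwards heat" time t
# The smallest gap opens up quickly, while the overall configuration spreads out
# much more slowly: the ODE describes mutual repulsion of the zeroes.
```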
How does one obtain this convergence to local equilibrium? We proceed by broad analogy with the “local relaxation flow” method of Erdos, Schlein, and Yau in random matrix theory, in which one combines some initial control on zeroes (which, in the case of the Erdos-Schlein-Yau method, is referred to with terms such as “local semicircular law”) with convexity properties of a relevant Hamiltonian that can be used to force the zeroes towards equilibrium.
We first discuss the initial control on zeroes. For $t = 0$, we have the classical Riemann-von Mangoldt formula, which asserts that the number of zeroes in the interval $[0,T]$ is $\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + O(\log T)$ as $T \to \infty$. (We have a factor of $4\pi$ here instead of the more familiar $2\pi$ due to the way $H_0$ is normalised.) This implies for instance that for a fixed $\alpha$, the number of zeroes in the interval $[T, T+\alpha]$ is $\frac{\alpha}{4\pi} \log T + O(\log T)$. Actually, because we get to assume the Riemann hypothesis, we can sharpen this to $\frac{\alpha}{4\pi} \log T + O(\frac{\log T}{\log\log T})$, a result of Littlewood (see this previous blog post for a proof). Ideally, we would like to obtain similar control for the other $H_t$, $\Lambda \leq t < 0$, as well. Unfortunately we were only able to obtain the weaker claims that the number of zeroes of $H_t$ in $[0,T]$ is $\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + O(\log^2 T)$, and that the number of zeroes in $[T, T+\alpha]$ is $\frac{\alpha}{4\pi} \log T + O(\log^2 T)$, that is to say we only get good control on the distribution of zeroes at scales $O(\log T)$ rather than at scales $O(1)$. Ultimately this is because we were only able to get control (and in particular, lower bounds) on $H_t(x+iy)$ with high precision when $y \gg \log x$ (whereas $H_0$ has good estimates as soon as $y$ is larger than (say) $1$). This control is obtained by expressing $H_t$ in terms of some contour integrals and using the method of steepest descent (actually it is slightly simpler to rely instead on the Stirling approximation for the Gamma function, which can be proven in turn by steepest descent methods). Fortunately, it turns out that this weaker control is still (barely) enough for the rest of our argument to go through.
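As a concrete illustration of the $t=0$ count (not needed for the argument): in this normalisation the zeroes of $H_0$ sit at $z = 2\gamma$ as $\frac{1}{2}+i\gamma$ runs over the zeta zeroes, so one can compare the leading term above against an exact count obtained from mpmath; the height $T = 100$ below is an arbitrary choice.

```python
# Compare the Riemann-von Mangoldt leading term with an actual count of zeroes of H_0 in [0, T].
# Zeroes of H_0 sit at z = 2*gamma, where 1/2 + i*gamma runs over the zeta zeroes.
from mpmath import mp, zetazero, log, pi

mp.dps = 20
T = 100                       # arbitrary height

count, n = 0, 1
while True:
    gamma_n = zetazero(n).imag
    if 2*gamma_n > T:
        break
    count, n = count + 1, n + 1

prediction = T/(4*pi)*log(T/(4*pi)) - T/(4*pi)
print("actual:", count, "  leading term:", mp.nstr(prediction, 6))
# The discrepancy is of the size allowed by the O(log T) error term.
```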
Once one has the initial control on zeroes, we now need to force convergence to local equilibrium by exploiting convexity of a Hamiltonian. Here, the relevant Hamiltonian is

$$\mathcal{H}(t) := \sum_{j \neq k} \log \frac{1}{|x_j(t) - x_k(t)|},$$

ignoring for now the rather important technical issue that this sum is not actually absolutely convergent. (Because of this, we will need to truncate and renormalise the Hamiltonian in a number of ways which we will not detail here.) The ODE (3) is formally the gradient flow for this Hamiltonian. Furthermore, this Hamiltonian is a convex function of the $x_j$ (because $x \mapsto \log \frac{1}{|x|}$ is a convex function on $(0,+\infty)$). We therefore expect the Hamiltonian $\mathcal{H}(t)$ to be a decreasing function of time, and that the derivative $\partial_t \mathcal{H}(t)$ should be an increasing function of time. As time passes, the derivative of the Hamiltonian would then be expected to converge to zero, which should imply convergence to local equilibrium.
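One can watch this monotonicity in the toy finite system from before (repeated here so that the snippet is self-contained; again purely illustrative, with arbitrary numerical choices): the truncated Hamiltonian decreases along the flow, while its derivative $-\sum_j (\partial_t x_j)^2$ increases towards zero.

```python
# Toy check that  sum_{j != k} log(1/|x_j - x_k|)  decreases along the flow (3),
# and that its derivative  -sum_j (dx_j/dt)^2  increases towards zero.
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 0.05, 1.0, 2.0])
dt, steps = 1e-5, 20000

def velocity(x):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)
    return 2.0 * np.sum(1.0 / diff, axis=1)

def hamiltonian(x):
    diff = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(diff, 1.0)                # dummy value; the diagonal terms are excluded anyway
    return -np.sum(np.log(diff))               # sum over j != k of log(1/|x_j - x_k|)

for step in range(steps + 1):
    if step % 5000 == 0:
        # dH/dt along the flow equals -sum_j (dx_j/dt)^2
        print(f"t = {step*dt:.2f}   H = {hamiltonian(x):8.4f}   dH/dt = {-np.sum(velocity(x)**2):10.2f}")
    x = x + dt * velocity(x)
```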
Formally, the derivative of the above Hamiltonian is

$$\partial_t \mathcal{H}(t) = -4 E(t), \qquad \hbox{where } E(t) := \sum_{j \neq k} \frac{1}{|x_j(t) - x_k(t)|^2}. \qquad (4)$$

Again, there is the important technical issue that this quantity is infinite; but it turns out that if we renormalise the Hamiltonian appropriately, then the energy $E(t)$ will also become suitably renormalised, and in particular will vanish when the $x_j(t)$ are arranged in an arithmetic progression, and be positive otherwise. One can also formally calculate $-\partial_t E(t)$ to be a somewhat complicated but manifestly non-negative quantity (a sum of squares); see this previous blog post for analogous computations in the case of heat flow on polynomials. After flowing from time $\Lambda$ to time $0$, and using some crude initial bounds on $\mathcal{H}(t)$ and $E(t)$ in this region (coming from the Riemann-von Mangoldt type formulae mentioned above and some further manipulations), we can eventually show that the (renormalisation of the) energy at time zero is small, which forces the zeroes $x_j(0)$ to locally resemble an arithmetic progression, which gives the required convergence to local equilibrium.
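(For the record, here is the formal computation behind (4), with all sums treated as finite and all convergence issues ignored; this is only a sketch of the standard manipulation, not the renormalised version used in the paper.) By symmetry in $j$ and $k$ and the ODE (3),

$$\partial_t \mathcal{H}(t) = - \sum_{j \neq k} \frac{\partial_t x_j - \partial_t x_k}{x_j - x_k} = -2 \sum_j \partial_t x_j \sum_{k \neq j} \frac{1}{x_j - x_k} = - \sum_j (\partial_t x_j)^2,$$

and expanding the square using (3) once more,

$$\sum_j (\partial_t x_j)^2 = 4 \sum_{j \neq k} \frac{1}{(x_j - x_k)^2} + 4 \sum_{j,k,l\ \mathrm{distinct}} \frac{1}{(x_j - x_k)(x_j - x_l)} = 4 E(t),$$

since the triple sum vanishes thanks to the identity $\frac{1}{(a-b)(a-c)} + \frac{1}{(b-a)(b-c)} + \frac{1}{(c-a)(c-b)} = 0$.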
There are a number of technicalities involved in making the above sketch of argument rigorous (for instance, justifying interchanges of derivatives and infinite sums turns out to be a little bit delicate). I will highlight here one particular technical point. One of the ways in which we make expressions such as the energy $E(t)$ finite is to truncate the indices $j, k$ to an interval $I$ to create a truncated energy $E_I(t)$. In typical situations, we would then expect $E_I(t)$ to be decreasing in time, which will greatly help in bounding $E_I(0)$ (in particular it would allow one to control $E_I(0)$ by time-averaged quantities such as $\frac{1}{|\Lambda|} \int_\Lambda^0 E_I(t)\ dt$, which can in turn be controlled using variants of (4)). However, there are boundary effects at both ends of $I$ that could in principle add a large amount of energy into $I$, which is bad news as it could conceivably make $E_I(0)$ undesirably large even if integrated energies such as $\int_\Lambda^0 E_I(t)\ dt$ remain adequately controlled. As it turns out, such boundary effects are negligible as long as there is a large gap between adjacent zeroes at the boundary of $I$ – it is only narrow gaps that can rapidly transmit energy across the boundary of $I$. Now, narrow gaps can certainly exist (indeed, the GUE hypothesis predicts these happen a positive fraction of the time); but the pigeonhole principle (together with the Riemann-von Mangoldt formula) can allow us to pick the endpoints of the interval $I$ so that no narrow gaps appear at the boundary of $I$ for any given time $t$. However, there was a technical problem: this argument did not allow one to find a single interval $I$ that avoided gaps for all times $\Lambda \leq t \leq 0$ simultaneously – the pigeonhole principle could produce a different interval $I$ for each time $t$! Since the number of times $t$ was uncountable, this was a serious issue. (In physical terms, the problem was that there might be very fast “longitudinal waves” in the dynamics that, at each time, cause some gaps between zeroes to be highly compressed, but the specific gap that was narrow changed very rapidly with time. Such waves could, in principle, import a huge amount of energy into $I$ by time $0$.) To resolve this, we borrowed a PDE trick of Bourgain’s, in which the pigeonhole principle was coupled with local conservation laws. More specifically, we use the phenomenon that very narrow gaps take a nontrivial amount of time to expand back to a reasonable size (this can be seen by comparing the evolution of this gap with solutions of the scalar ODE $\partial_t g = \frac{4}{g}$, which represents the fastest rate at which a gap such as $x_{j+1} - x_j$ can expand). Thus, if a gap is reasonably large at some time $t_0$, it will also stay reasonably large at slightly earlier times $[t_0 - \delta, t_0]$ for some moderately small $\delta > 0$. This lets one locate an interval $I$ that has manageable boundary effects during the times in $[t_0 - \delta, t_0]$, so in particular $E_I$ is basically non-increasing in this time interval. Unfortunately, this interval of times is a little bit too short to cover all of $[\Lambda, 0]$; however it turns out that one can iterate the above construction and find a nested sequence of intervals $I_1 \supset I_2 \supset \dots \supset I_k$, with each $E_{I_m}$ non-increasing in a different time interval $[t_m - \delta_m, t_m]$, and with all of the time intervals covering $[\Lambda, 0]$. This turns out to be enough (together with the obvious fact that $E_I(t)$ is monotone in $I$) to still control $E_I(0)$ for some reasonably sized interval $I$, as required for the rest of the arguments.
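To quantify the local conservation phenomenon used here (a heuristic rendering only, not the precise argument of the paper): for a gap $g(t) = x_{j+1}(t) - x_j(t)$, the zeroes outside the pair only push the pair together, so (3) formally gives $\partial_t g \leq \frac{4}{g}$, and hence

$$g(t)^2 \geq g(t_0)^2 - 8 (t_0 - t) \qquad \hbox{for } t \leq t_0.$$

In particular a gap of size $\delta$ at time $t_0$ is still of size at least $\delta/2$ at all times in $[t_0 - \frac{3\delta^2}{32}, t_0]$, which is the kind of quantitative statement needed to run the pigeonhole argument over a whole time interval rather than at a single time.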
ADDED LATER: the following analogy (involving functions with just two zeroes, rather than an infinite number of zeroes) may help clarify the relation between this result and the Riemann hypothesis (and in particular why this result does not make the Riemann hypothesis any easier to prove; in fact it confirms the delicate nature of that hypothesis). Suppose one had a quadratic polynomial of the form $P(z) = z^2 + \Lambda$, where $\Lambda$ was an unknown real constant. Suppose that one was for some reason interested in the analogue of the “Riemann hypothesis” for $P$, namely that all the zeroes of $P$ are real. A priori, there are three scenarios:
- (Riemann hypothesis false) $\Lambda > 0$, and $P$ has zeroes off the real axis.
- (Riemann hypothesis true, but barely so) $\Lambda = 0$, and both zeroes of $P$ are on the real axis; however, any slight perturbation of $\Lambda$ in the positive direction would move zeroes off the real axis.
- (Riemann hypothesis true, with room to spare) $\Lambda < 0$, and both zeroes of $P$ are on the real axis. Furthermore, any slight perturbation of $P$ will also have both zeroes on the real axis.
The analogue of our result in this case is that $\Lambda \geq 0$, thus ruling out the third of the three scenarios here. In this simple example in which only two zeroes are involved, one can think of the inequality $\Lambda \geq 0$ as asserting that if the zeroes of $P$ are real, then they must be repeated. In our result (in which there are an infinity of zeroes, which become increasingly dense near infinity), and in view of the convergence to local equilibrium properties of (3), the analogous assertion is that if the zeroes of $H_0$ are real, then they do not behave locally as if they were in arithmetic progression.