Large prime gaps and probabilistic models
What's new 2019-08-26
William Banks, Kevin Ford, and I have just uploaded to the arXiv our paper “Large prime gaps and probabilistic models“, submitted to Inventiones. In this paper we introduce a random model to help understand the connection between two well known conjectures regarding the primes, the Cramér conjecture and the Hardy-Littlewood conjecture:
Conjecture 1 (Cramér conjecture) If $x$ is a large number, then the largest prime gap $G(x)$ in $[1,x]$ is of size $\sim \log^2 x$. (Granville refines this conjecture to $\gtrsim \xi \log^2 x$, where $\xi := 2e^{-\gamma} = 1.1229\dots$. Here we use the asymptotic notation $X \sim Y$ for $X = (1+o(1)) Y$, $X \gtrsim Y$ for $X \geq (1-o(1)) Y$, $X \lesssim Y$ for $X \leq (1+o(1)) Y$, and $X \asymp Y$ for $X \lesssim Y \lesssim X$.)
Conjecture 2 (Hardy-Littlewood conjecture) If $h_1,\dots,h_k$ are fixed distinct integers, then the number of numbers $n \leq x$ with $n+h_1,\dots,n+h_k$ all prime is $\left(\mathfrak{S}(h_1,\dots,h_k)+o(1)\right) \frac{x}{\log^k x}$ as $x \to \infty$, where the singular series $\mathfrak{S}(h_1,\dots,h_k)$ is defined by the formula

$$\mathfrak{S}(h_1,\dots,h_k) := \prod_p \left(1 - \frac{|\{h_1,\dots,h_k\} \hbox{ mod } p|}{p}\right) \left(1 - \frac{1}{p}\right)^{-k}.$$
(One can view these conjectures as modern versions of two of the classical Landau problems, namely Legendre’s conjecture and the twin prime conjecture respectively.)
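To make the singular series concrete, here is a short numerical sketch (my own illustrative code, not from the paper; the truncation bound is an arbitrary choice) which approximates $\mathfrak{S}(h_1,\dots,h_k)$ by cutting off the product over primes:

```python
from sympy import primerange

def singular_series(h, prime_bound=10**5):
    """Approximate the Hardy-Littlewood singular series S(h_1,...,h_k)
    by truncating the product over primes at prime_bound."""
    k = len(h)
    S = 1.0
    for p in primerange(2, prime_bound):
        residues = len({x % p for x in h})        # |{h_1,...,h_k} mod p|
        S *= (1 - residues / p) * (1 - 1 / p) ** (-k)
    return S

# For h = (0, 2) this recovers the twin prime constant 2*C_2 = 1.3203...
print(singular_series((0, 2)))
```

A tuple is admissible exactly when every factor in the product is nonzero, that is, when $h_1,\dots,h_k$ do not cover all residue classes modulo any prime.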
A well known connection between the Hardy-Littlewood conjecture and prime gaps was made by Gallagher. Among other things, Gallagher showed that if the Hardy-Littlewood conjecture was true, then the prime gaps $p_{n+1}-p_n$ with $p_n \leq x$ were asymptotically distributed according to an exponential distribution of mean $\log x$, in the sense that

$$|\{ n: p_n \leq x, \ p_{n+1}-p_n \geq \lambda \log x \}| = (e^{-\lambda}+o(1)) \frac{x}{\log x} \qquad (1)$$
as $x \to \infty$ for any fixed $\lambda \geq 0$. Roughly speaking, the way this is established is by using the Hardy-Littlewood conjecture to control the mean values of $\binom{|{\mathcal P} \cap (p_n, p_n+\lambda \log x)|}{k}$ for fixed $k, \lambda$, where ${\mathcal P}$ denotes the primes and $p_n$ ranges over the primes in $[1,x]$. The relevance of these quantities arises from the Bonferroni inequalities (or “Brun pure sieve“), which can be formulated as the assertion that

$$1_{N=0} \leq \sum_{k=0}^{K} (-1)^k \binom{N}{k}$$
when $K$ is even and

$$1_{N=0} \geq \sum_{k=0}^{K} (-1)^k \binom{N}{k}$$
when $K$ is odd, for any natural number $N$; setting $N := |{\mathcal P} \cap (p_n, p_n+\lambda \log x)|$ and taking means, one then gets upper and lower bounds for the probability that the interval $(p_n, p_n+\lambda \log x)$ is free of primes. The most difficult step is to control the mean values of the singular series $\mathfrak{S}(h_1,\dots,h_k)$ as $(h_1,\dots,h_k)$ ranges over $k$-tuples in a fixed interval such as $[0, \lambda \log x]$.
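As a quick sanity check (again my own illustrative code, not part of the argument), the Bonferroni inequalities in the above form can be verified by brute force for small values of $N$ and $K$:

```python
from math import comb

def bonferroni_partial_sum(N, K):
    """Partial sum sum_{k=0}^K (-1)^k * binom(N, k), which over- or under-shoots
    the indicator 1_{N=0} depending on the parity of K."""
    return sum((-1) ** k * comb(N, k) for k in range(K + 1))

for N in range(8):
    indicator = 1 if N == 0 else 0
    for K in range(6):
        s = bonferroni_partial_sum(N, K)
        # even K gives an upper bound for 1_{N=0}, odd K a lower bound
        assert s >= indicator if K % 2 == 0 else s <= indicator
print("Bonferroni inequalities verified for N < 8, K < 6")
```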
Heuristically, if one extrapolates the asymptotic (1) to the regime $\lambda \asymp \log x$, one is then led to Cramér’s conjecture, since the right-hand side of (1) falls below $1$ when $\lambda \log x$ is significantly larger than $\log^2 x$. However, this is not a rigorous derivation of Cramér’s conjecture from the Hardy-Littlewood conjecture, since Gallagher’s computations only establish (1) for fixed choices of $\lambda$, which is only enough to establish the far weaker bound $G(x)/\log x \to \infty$, which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on $G(x)$). An inspection of the argument shows that if one wished to extend (1) to parameter choices $\lambda$ that were allowed to grow with $x$, then one would need as input a stronger version of the Hardy-Littlewood conjecture in which the length $k$ of the tuple, as well as the magnitudes of the shifts $h_1,\dots,h_k$, were also allowed to grow with $x$. Our initial objective in this project was then to quantify exactly what strengthening of the Hardy-Littlewood conjecture would be needed to rigorously imply Cramér’s conjecture. The precise results are technical, but roughly we show results of the following form:
Theorem 3 (Large gaps from Hardy-Littlewood, rough statement)
- If the Hardy-Littlewood conjecture is uniformly true for $k$-tuples of length $k \ll \frac{\log x}{\log\log x}$, and with shifts $h_1,\dots,h_k$ of size $O(\log^2 x)$, with a power savings in the error term, then $G(x) \gg \frac{\log^2 x}{\log\log x}$.
- If the Hardy-Littlewood conjecture is “true on average” for $k$-tuples of length $k \ll \frac{y}{\log x}$ and shifts of size at most $y$, for all scales $y$ up to $\log^{2+o(1)} x$, with a power savings in the error term, then $G(x) \gg \log^2 x$.
In particular, we can recover Cramér’s conjecture given a sufficiently powerful version of the Hardy-Littlewood conjecture “on the average”.
Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with $k$ allowed to grow slowly with $x$. Again, the main difficulty is to accurately estimate average values of the singular series $\mathfrak{S}(h_1,\dots,h_k)$. Here we found it useful to switch to a probabilistic interpretation of this series. For technical reasons it is convenient to work with a truncated, unnormalised version

$$V_{\mathcal H}(z) := \prod_{p \leq z} \left(1 - \frac{|{\mathcal H} \hbox{ mod } p|}{p}\right)$$
of the singular series, where ${\mathcal H} := \{h_1,\dots,h_k\}$ and $z$ is a suitable cutoff; it turns out that when studying prime tuples of size $t$, the most convenient cutoff $z = z(t)$ is the “Pólya magic cutoff“, defined as the largest prime for which

$$\prod_{p \leq z(t)} \left(1 - \frac{1}{p}\right) \geq \frac{1}{\log t} \qquad (2)$$
(this is well defined for $t \geq e^2$); by Mertens’ theorem, we have $z(t) = t^{1/e^\gamma + o(1)}$. One can interpret $V_{\mathcal H}(z)$ probabilistically as

$$V_{\mathcal H}(z) = \mathbf{P}\big( {\mathcal H} \subset {\mathcal S}_z \big)$$
where ${\mathcal S}_z := \{ n: n \not\equiv a_p \ (\hbox{mod } p) \hbox{ for all primes } p \leq z \}$ is the randomly sifted set of integers formed by removing one residue class $a_p \hbox{ mod } p$, chosen uniformly at random, for each prime $p \leq z$. The Hardy-Littlewood conjecture can be viewed as an assertion that the primes behave in some approximate statistical sense like the random sifted set ${\mathcal S}_z$, and one can prove the above theorem by using the Bonferroni inequalities both for the primes and for the random sifted set, and comparing the two (using an even $K$ for the sifted set and an odd $K$ for the primes in order to be able to combine the two together to get a useful bound).
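Here is a small Monte Carlo sketch of this probabilistic interpretation (my own toy code; the tuple ${\mathcal H}$, the cutoff $z$, and the number of trials are arbitrary choices): it repeatedly draws the random residue classes $a_p$ for $p \leq z$ and compares the empirical probability that ${\mathcal H}$ lies in ${\mathcal S}_z$ with the truncated product $V_{\mathcal H}(z)$.

```python
import random
from sympy import primerange

def V(H, z):
    """Truncated, unnormalised singular series prod_{p <= z} (1 - |H mod p| / p)."""
    out = 1.0
    for p in primerange(2, z + 1):
        out *= 1 - len({h % p for h in H}) / p
    return out

def H_survives(H, z, rng):
    """Draw one residue class a_p mod p uniformly at random for each prime p <= z
    and report whether every element of H avoids all of the deleted classes."""
    for p in primerange(2, z + 1):
        a = rng.randrange(p)
        if any(h % p == a for h in H):
            return False
    return True

rng = random.Random(0)
H, z, trials = (0, 2, 6), 50, 200_000
empirical = sum(H_survives(H, z, rng) for _ in range(trials)) / trials
print(empirical, V(H, z))   # the two numbers should agree to within a few percent
```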
The proof of Theorem 3 ended up not using any properties of the set of primes other than that this set obeyed some form of the Hardy-Littlewood conjectures; the theorem remains true (with suitable notational changes) if this set were replaced by any other set. In order to convince ourselves that our theorem was not vacuous due to our version of the Hardy-Littlewood conjecture being too strong to be true, we then started exploring the question of coming up with random models of the primes which obeyed various versions of the Hardy-Littlewood and Cramér conjectures.
This line of inquiry was started by Cramér, who introduced what we now call the Cramér random model of the primes, in which each natural number $n \geq 3$ is selected for membership in the model with an independent probability of $1/\log n$. This model matches the primes well in some respects; for instance, it almost surely obeys the “Riemann hypothesis”

$$\#\{ n \leq x: n \hbox{ is in the model} \} = \int_2^x \frac{dt}{\log t} + O\big(x^{1/2+o(1)}\big)$$
and Cramér also showed that the largest gap in this model was almost surely $\sim \log^2 x$. On the other hand, it does not obey the Hardy-Littlewood conjecture; more precisely, it obeys a simplified variant of that conjecture in which the singular series $\mathfrak{S}(h_1,\dots,h_k)$ is absent (that is, replaced by $1$).
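One can get a feel for Cramér's model with a quick simulation (my own illustrative code; the cutoff $x = 10^6$ and the seed are arbitrary), comparing the largest gap in a sample of the model with $\log^2 x$:

```python
import math, random

def cramer_largest_gap(x, seed=0):
    """Simulate the Cramer random model up to x (each n >= 3 is included
    independently with probability 1/log n) and return the largest gap
    between consecutive selected integers, normalised by log^2 x."""
    rng = random.Random(seed)
    prev, largest = 2, 0          # treat 2 as the first element of the model
    for n in range(3, x + 1):
        if rng.random() < 1 / math.log(n):
            largest = max(largest, n - prev)
            prev = n
    return largest / math.log(x) ** 2

# Cramer's theorem predicts a normalised value near 1 for large x
# (convergence is slow, so moderate x typically gives smaller values).
print(cramer_largest_gap(10**6))
```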
Granville proposed a refinement to Cramér’s random model in which one first sieves out (in each dyadic interval $[x,2x]$) all residue classes $0 \hbox{ mod } p$ for primes $p \leq A$, for a certain slowly growing threshold $A = x^{o(1)}$, and then places each surviving natural number $n$ in the model with an independent probability $\frac{1}{\log n} \prod_{p \leq A} \left(1-\frac{1}{p}\right)^{-1}$. One can verify that this model obeys the Hardy-Littlewood conjectures, and Granville showed that the largest gap in this model was almost surely $\gtrsim \xi \log^2 x$, leading to his conjecture that this bound also was true for the primes. (Interestingly, this conjecture is not yet borne out by numerics; calculations of prime gaps up to $4 \times 10^{18}$, for instance, have shown that $G(x)$ never exceeds $0.9206 \log^2 x$ in this range. This is not necessarily a conflict, however; Granville’s analysis relies on inspecting gaps in an extremely sparse region of natural numbers that are more devoid of primes than average, and this region is not well explored by existing numerics.)
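Here is an analogous toy simulation of Granville's refinement (again my own illustrative code, not from the paper; the parameters $x = 10^6$ and $A = 7$ are arbitrary, and $A$ should stay well below $\log x$ so that the acceptance probability remains below $1$):

```python
import math, random
from sympy import primerange

def granville_largest_gap(x, A, seed=0):
    """Sketch of Granville's model on [x, 2x]: remove every n divisible by a
    prime p <= A, then keep each survivor n with probability
    (1/log n) * prod_{p <= A} (1 - 1/p)^(-1).  Return the largest gap / log^2 x."""
    rng = random.Random(seed)
    small_primes = list(primerange(2, A + 1))
    correction = 1.0
    for p in small_primes:
        correction /= (1 - 1 / p)           # prod_{p <= A} (1 - 1/p)^(-1)
    prev, largest = None, 0
    for n in range(x, 2 * x + 1):
        if any(n % p == 0 for p in small_primes):
            continue                         # sieved out by a small prime
        if rng.random() < correction / math.log(n):
            if prev is not None:
                largest = max(largest, n - prev)
            prev = n
    return largest / math.log(x) ** 2

# Granville's analysis predicts values around xi = 2*exp(-gamma) = 1.1229...
# only for astronomically large x; for modest x the simulated ratio is smaller.
print(granville_largest_gap(10**6, A=7))
```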
However, Granville’s model does not produce a power savings in the error term of the Hardy-Littlewood conjectures, mostly due to the need to truncate the singular series at the logarithmic cutoff $A$. After some experimentation, we were able to produce a tractable random model ${\mathcal R}$ for the primes which obeyed the Hardy-Littlewood conjectures with power savings, and which reproduced Granville’s gap prediction of $\gtrsim \xi \log^2 x$ (we also get an upper bound on the largest gap for both models, though we expect the lower bound to be closer to the truth). The model can be described as follows. We select one residue class $a_p \hbox{ mod } p$ uniformly at random for each prime $p$, and as before we let ${\mathcal S}_z$ be the sifted set of integers formed by deleting the residue classes $a_p \hbox{ mod } p$ with $p \leq z$. We then set

$${\mathcal R} := \{ n \geq e^2: n \in {\mathcal S}_{z(n)} \}$$
with $z(n)$ given by Pólya’s magic cutoff (2) (this is the cutoff that gives ${\mathcal R}$ a density consistent with the prime number theorem or the Riemann hypothesis). As stated above, we are able to show that the largest gap $G_{\mathcal R}(x)$ between consecutive elements of ${\mathcal R}$ up to $x$ almost surely obeys

$$G_{\mathcal R}(x) \gtrsim \xi \log^2 x \qquad (3)$$
and that the Hardy-Littlewood conjectures hold with power savings for $k$ up to $\log^c x$ for any fixed $c < 1$, and for shifts $h_1,\dots,h_k$ of size $\log^{O(1)} x$. This is unfortunately a tiny bit weaker than what Theorem 3 requires (which more or less corresponds to the endpoint $c = 1$), although there is a variant of Theorem 3 that can use this input to produce a lower bound on gaps in the model (but it is weaker than the one in (3)). In fact we prove a more precise almost sure asymptotic formula for $G_{\mathcal R}(x)$ that involves the optimal bounds for the linear sieve (or interval sieve) problem, in which one deletes one residue class modulo $p$ from an interval $[0,y]$ for all primes $p$ up to a given threshold. The lower bound in (3) relates to the case of deleting the residue classes $0 \hbox{ mod } p$ from $[0,y]$; the upper bound comes from the delicate analysis of the linear sieve by Iwaniec. Improving on either of the two bounds looks to be quite a difficult problem.
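For readers who want to experiment with the model ${\mathcal R}$, here is a minimal sketch of a sampler (my own code, with arbitrary choices of range, prime bound, and seed); it implements the magic-cutoff sieve described above and reports the size and normalised largest gap of one sample:

```python
import math, random
from sympy import primerange

def random_model(x, seed=0):
    """Sketch of the random model R: choose a_p mod p uniformly at random for
    each prime p, and keep n (with n >= 8 > e^2) iff n avoids the class a_p mod p
    for every prime p up to the Polya magic cutoff z(n), i.e. for every prime p
    with prod_{q <= p} (1 - 1/q) >= 1/log n."""
    rng = random.Random(seed)
    # z(n) is roughly n^(1/e^gamma) ~ n^0.56, so primes up to x^0.6 suffice here.
    primes = list(primerange(2, int(x ** 0.6) + 10))
    a = {p: rng.randrange(p) for p in primes}
    prods, prod = [], 1.0
    for p in primes:
        prod *= 1 - 1 / p
        prods.append(prod)
    R = []
    for n in range(8, x + 1):
        target = 1 / math.log(n)
        keep = True
        for p, pr in zip(primes, prods):
            if pr < target:          # p lies beyond the magic cutoff z(n)
                break
            if n % p == a[p]:
                keep = False
                break
        if keep:
            R.append(n)
    return R

R = random_model(10**5)
gaps = [q - p for p, q in zip(R, R[1:])]
print(len(R), max(gaps) / math.log(10**5) ** 2)   # density ~ x/log x; normalised largest gap
```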
The probabilistic analysis of ${\mathcal R}$ is somewhat more complicated than that of the Cramér or Granville models, as there is now non-trivial coupling between the events $n \in {\mathcal R}$ as $n$ varies, although moment methods such as the second moment method are still viable and allow one to verify the Hardy-Littlewood conjectures by a lengthy but fairly straightforward calculation. To analyse large gaps, one has to understand the statistical behaviour of a random linear sieve in which one starts with an interval $[0,y]$ and randomly deletes a residue class $a_p \hbox{ mod } p$ for each prime $p$ up to a given threshold. For very small primes $p$ this is handled by the deterministic theory of the linear sieve as discussed above. For medium sized $p$, it turns out that there is good concentration of measure thanks to tools such as Bennett’s inequality or Azuma’s inequality, as one can view the sieving process as a martingale or (approximately) as a sum of independent random variables. For larger primes $p$, in which only a small number of survivors are expected to be sieved out by each residue class, a direct combinatorial calculation of all possible outcomes (involving the random graph that connects interval elements $m \in [0,y]$ to primes $p$ if $m$ falls in the random residue class $a_p \hbox{ mod } p$) turns out to give the best results.
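Finally, here is a toy version of the random linear sieve just described (my own illustrative code with arbitrary parameters): it sieves the interval $[0,y]$ once with the deterministic choice $a_p = 0$, the case relevant to the lower bound in (3), and once with uniformly random classes, and compares the survivor counts.

```python
import random
from sympy import primerange

def sieve_interval(y, z, choose_class):
    """Delete one residue class a_p mod p from {0,...,y} for each prime p <= z,
    where a_p = choose_class(p); return the set of survivors."""
    survivors = set(range(y + 1))
    for p in primerange(2, z + 1):
        a = choose_class(p) % p
        survivors -= set(range(a, y + 1, p))
    return survivors

rng = random.Random(1)
y, z = 10_000, 100
deterministic = sieve_interval(y, z, lambda p: 0)                # delete 0 mod p
randomised = sieve_interval(y, z, lambda p: rng.randrange(p))    # random classes
print(len(deterministic), len(randomised))   # comparable sizes, by concentration of measure
```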