254A, Notes 9 – second moment and entropy methods
In these notes we presume familiarity with the basic concepts of probability theory, such as random variables (which could take values in the reals, vectors, or other measurable spaces), probability, and expectation. Much of this theory is in turn based on measure theory, which we will also presume familiarity with. See for instance this previous set of lecture notes for a brief review.
The basic objects of study in analytic number theory are deterministic; there is nothing inherently random about the set of prime numbers, for instance. Despite this, one can still interpret many of the averages encountered in analytic number theory in probabilistic terms, by introducing random variables into the subject. Consider for instance the form
of the prime number theorem (where we take the limit ). One can interpret this estimate probabilistically as
where is a random variable drawn uniformly from the natural numbers up to , and denotes the expectation. (In this set of notes we will use boldface symbols to denote random variables, and non-boldface symbols for deterministic objects.) By itself, such an interpretation is little more than a change of notation. However, the power of this interpretation becomes more apparent when one then imports concepts from probability theory (together with all their attendant intuitions and tools), such as independence, conditioning, stationarity, total variation distance, and entropy. For instance, suppose we want to use the prime number theorem (1) to make a prediction for the sum
After dividing by , this is essentially
With probabilistic intuition, one may expect the random variables to be approximately independent (there is no obvious relationship between the number of prime factors of , and of ), and so the above average would be expected to be approximately equal to
which by (2) is equal to . Thus we are led to the prediction
The asymptotic (3) is widely believed (it is a special case of the Chowla conjecture, which we will discuss in later notes); while there has been recent progress towards establishing it rigorously, it remains open for now.
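To get a quantitative feel for this prediction, one can compute the correlation numerically. The following Python sketch is purely illustrative and not part of the notes; it assumes (as the surrounding discussion of prime factor counts and the Chowla conjecture suggests) that the sum in question is the two-point correlation of the Liouville function lambda(n) = (-1)^Omega(n), and the cutoff N is an arbitrary choice.

```python
# Numerical look at the heuristic prediction: the normalised two-point
# correlation of the Liouville function should be small, while the mean of
# the Liouville function is also small by the prime number theorem.
# Illustrative sketch; the cutoff N is an arbitrary choice.

N = 10**5

# Sieve of smallest prime factors, used to compute Omega(n), the number of
# prime factors of n counted with multiplicity.
spf = list(range(N + 2))
for p in range(2, int((N + 1) ** 0.5) + 1):
    if spf[p] == p:                       # p is prime
        for m in range(p * p, N + 2, p):
            if spf[m] == m:
                spf[m] = p

def big_omega(n):
    count = 0
    while n > 1:
        n //= spf[n]
        count += 1
    return count

# liouville[n] = (-1)**Omega(n) for 1 <= n <= N+1
liouville = [0, 1] + [(-1) ** big_omega(n) for n in range(2, N + 2)]

mean = sum(liouville[n] for n in range(1, N + 1)) / N
corr = sum(liouville[n] * liouville[n + 1] for n in range(1, N + 1)) / N
print(f"mean of lambda(n) up to {N}:            {mean:+.4f}")
print(f"mean of lambda(n)lambda(n+1) up to {N}: {corr:+.4f}")
```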
How would one try to make these probabilistic intuitions more rigorous? The first thing one needs to do is find a more quantitative measurement of what it means for two random variables to be “approximately” independent. There are several candidates for such measurements, but we will focus in these notes on two particularly convenient measures of approximate independence: the “” measure of independence known as covariance, and the “” measure of independence known as mutual information (actually we will usually need the more general notion of conditional mutual information that measures conditional independence). The use of type methods in analytic number theory is well established, though it is usually not described in probabilistic terms, being referred to instead by such names as the “second moment method”, the “large sieve” or the “method of bilinear sums”. The use of methods (or “entropy methods”) is much more recent, and has been able to control certain types of averages in analytic number theory that were out of reach of previous methods such as methods. For instance, in later notes we will use entropy methods to establish the logarithmically averaged version
of (3), which is implied by (3) but strictly weaker (much as the prime number theorem (1) implies the bound , but the latter bound is much easier to establish than the former).
As with many other situations in analytic number theory, we can exploit the fact that certain assertions (such as approximate independence) can become significantly easier to prove if one only seeks to establish them on average, rather than uniformly. For instance, given two random variables and of number-theoretic origin (such as the random variables and mentioned previously), it can often be extremely difficult to determine the extent to which behave “independently” (or “conditionally independently”). However, thanks to second moment tools or entropy based tools, it is often possible to assert results of the following flavour: if are a large collection of “independent” random variables, and is a further random variable that is “not too large” in some sense, then must necessarily be nearly independent (or conditionally independent) to many of the , even if one cannot pinpoint precisely which of the the variable is independent with. In the case of the second moment method, this allows us to compute correlations such as for “most” . The entropy method gives bounds that are significantly weaker quantitatively than the second moment method (and in particular, in its current incarnation at least it is only able to establish non-trivial assertions involving interactions with residue classes at small primes), but can control significantly more general quantities for “most” thanks to tools such as the Pinsker inequality.
— 1. Second moment methods —
In this section we discuss probabilistic techniques of an “” nature. We fix a probability space to model all of our random variables; thus for instance we shall model a complex random variable in these notes by a measurable function . (Strictly speaking, there is a subtle distinction one can maintain between a random variable and its various measure-theoretic models, which becomes relevant if one later decides to modify the probability space , but this distinction will not be so important in these notes and so we shall ignore it. See this previous set of notes for more discussion.)
We will focus here on the space of complex random variables (that is to say, measurable maps ) whose second moment
of is finite. In many number-theoretic applications the finiteness of the second moment will be automatic because will only take finitely many values. As is well known, the space has the structure of a complex Hilbert space, with inner product
and norm
for . By slight abuse of notation, the complex numbers can be viewed as a subset of , by viewing any given complex number as a constant (deterministic) random variable. Then is a one-dimensional subspace of , spanned by the unit vector . Given a random variable to , the projection of to is then the mean
and we obtain an orthogonal splitting of any into its mean and its mean zero part . By Pythagoras’ theorem, we then have
The first quantity on the right-hand side is the square of the distance from to , and this non-negative quantity is known as the variance
The square root of the variance is known as the standard deviation. The variance controls the distribution of the random variable through Chebyshev’s inequality
for any , which is immediate from observing the inequality and then taking expectations of both sides. Roughly speaking, this inequality asserts that typically deviates from its mean by no more than a bounded multiple of the standard deviation .
A slight generalisation of Chebyshev’s inequality that can be convenient to use is
for any and any complex number (which typically will be a simplified approximation to the mean ), which is proven similarly to (6) but noting (from (5)) that .
Informally, (6) is an assertion that a square-integrable random variable will concentrate around its mean if its variance is not too large. See these previous notes for more discussion of the concentration of measure phenomenon. One can often obtain stronger concentration of measure than what is provided by Chebyshev’s inequality if one is able to calculate higher moments than the second moment, such as the fourth moment or exponential moments , but we will not pursue this direction in this set of notes.
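As a quick sanity check of Chebyshev's inequality, one can estimate tail probabilities of a simple random variable by simulation and compare them with the bound. The sketch below is not part of the notes; the choice of a sum of twenty random signs, and the sample size, are arbitrary illustrative choices.

```python
import random

# Empirical check of Chebyshev's inequality: the probability that a random
# variable deviates from its mean by more than k standard deviations is at
# most 1/k^2.  Illustrative sketch with an arbitrary toy distribution.

random.seed(1)
samples = [sum(random.choice((-1, 1)) for _ in range(20)) for _ in range(10**5)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
sd = var ** 0.5

for k in (1, 2, 3, 4):
    tail = sum(1 for s in samples if abs(s - mean) > k * sd) / len(samples)
    print(f"k = {k}:  P(|X - E X| > k*sd) = {tail:.4f}   (Chebyshev bound {1 / k**2:.4f})")
```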
Clearly the variance is homogeneous of order two, thus
for any and . In particular, the variance is not always additive: the claim fails for instance when is not almost surely zero. However, there is an important substitute for this formula. Given two random variables , the inner product of the corresponding mean zero parts is a complex number known as the covariance:
As are orthogonal to , it is not difficult to obtain the alternate formula
The covariance is then a positive semi-definite inner product on (it basically arises from the Hilbert space structure of the space of mean zero variables), and . From the Cauchy-Schwarz inequality we have
If have non-zero variance (that is, they are not almost surely constant), then the ratio
is then known as the correlation between and , and is a complex number of magnitude at most ; for real-valued that are not almost surely constant, the correlation is instead a real number between and . At one extreme, a correlation of magnitude occurs if and only if is a scalar multiple of . At the other extreme, a correlation of zero is an indication (though not a guarantee) of independence. Recall that two random variables are independent if one has
for all (Borel) measurable . In particular, setting , for and integrating using Fubini’s theorem, we conclude that
similarly with replaced by , and similarly for . In particular we have
and thus from (8) we see that independent random variables have zero covariance (and zero correlation, when they are not almost surely constant). On the other hand, the converse fails:
Exercise 1 Provide an example of two random variables which are not independent, but which have zero correlation or covariance with each other. (There are many ways to produce such examples. One comes from exploiting various systems of orthogonal functions, such as sines and cosines. Another comes from working with random variables taking only a small number of values, such as .)
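For instance, here is a quick numerical verification of one example of the second kind suggested in the hint; the specific choice (X uniform on {-1, 0, 1} and Y = X^2) is my own and is only one of many possibilities.

```python
# X uniform on {-1, 0, 1} and Y = X**2: zero covariance, but certainly not
# independent.  (Toy illustrative choice; many other examples work.)

values = [-1, 0, 1]
p = 1 / 3   # probability of each value of X

EX  = sum(x * p for x in values)
EY  = sum((x ** 2) * p for x in values)
EXY = sum(x * (x ** 2) * p for x in values)
print("Cov(X, Y) =", EXY - EX * EY)        # 0.0

# ...yet independence fails:
print("P(X = 0, Y = 0)   =", p)            # 1/3
print("P(X = 0) P(Y = 0) =", p * p)        # 1/9
```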
for any finite collection of random variables . These identities combine well with Chebyshev-type inequalities such as (6), (7), and this leads to a very common instance of the second moment method in action. For instance, we can use it to understand the distribution of the number of prime factors of a random number that fall within a given set . Given any set of natural numbers, define the logarithmic size to be the quantity
Thus for instance Euler’s theorem asserts that the primes have infinite logarithmic size.
Lemma 2 (Turan-Kubilius inequality, special case) Let be an interval of length at least , and let be an integer drawn uniformly at random from this interval, thus
for all . Let be a finite collection of primes, all of which have size at most . Then the random variable has mean
and variance
In particular,
and from (7) we have
for any .
Proof: For any natural number , we have
We now write . From (11) we see that each indicator random variable , has mean and variance ; similarly, for any two distinct , we see from (11), (8) that the indicators , have covariance
and the claim now follows from (10).
The exponents of in the error terms here are not optimal; but in practice, we apply this inequality when is much larger than any given power of , so factors such as will be negligible. Informally speaking, the above lemma asserts that a typical number in a large interval will have roughly prime factors in a given finite set of primes, as long as the logarithmic size is large.
If we apply the above lemma to for some large , and equal to the primes up to (say) , we have , and hence
Since , we recover the main result
of Section 5 of Notes 1 (indeed this is essentially the same argument as in that section, dressed up in probabilistic language). In particular, we recover the Hardy-Ramanujan law that a proportion of the natural numbers in have prime factors.
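The Hardy-Ramanujan prediction is easy to probe empirically: sampling random integers up to a moderate cutoff and counting distinct prime factors already shows the mean and variance tracking log log x. The following sketch (the cutoff and sample size are arbitrary choices) is illustrative only.

```python
import math, random

# Empirical check of the Turan-Kubilius / Hardy-Ramanujan heuristic: the
# number of distinct prime factors of a random n <= x has mean and variance
# both roughly log log x.  Illustrative sketch; cutoff and sample size are
# arbitrary choices.

x = 10**6
num_samples = 10000

def omega(n):
    # number of distinct prime factors of n, by trial division
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        count += 1
    return count

random.seed(0)
vals = [omega(random.randint(1, x)) for _ in range(num_samples)]
mean = sum(vals) / num_samples
var = sum((v - mean) ** 2 for v in vals) / num_samples

print("log log x          =", round(math.log(math.log(x)), 3))
print("empirical mean     =", round(mean, 3))
print("empirical variance =", round(var, 3))
```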
Exercise 3 (Turan-Kubilius inequality, general case) Let be an additive function (which means that whenever are coprime). Show that
where
(Hint: one may first want to work with the special case when vanishes whenever so that the second moment method can be profitably applied, and then figure out how to address the contributions of prime powers larger than .)
Exercise 4 (Turan-Kubilius inequality, logarithmic version) Let with , and let be a collection of primes of size less than with . Show that
Exercise 5 (Paley-Zygmund inequality) Let be non-negative with positive mean. Show that
This inequality can sometimes give slightly sharper results than the Chebyshev inequality when using the second moment method.
Now we give a useful lemma that quantifies a heuristic mentioned in the introduction, namely that if several random variables do not correlate with each other, then it is not possible for any further random variable to correlate with many of them simultaneously. We first state an abstract Hilbert space version.
Lemma 6 (Bessel type inequality, Hilbert space version) If are elements of a Hilbert space , and are positive reals, then
Proof: We use the duality method. Namely, we can write the left-hand side of (14) as
for some complex numbers with (just take to be normalised by the left-hand side of (14), or zero if that left-hand side vanishes). By Cauchy-Schwarz, it then suffices to establish the dual inequality
The left-hand side can be written as
Using the arithmetic mean-geometric mean inequality and symmetry, this may be bounded by
Since , the claim follows.
Corollary 7 (Bessel type inequality, probabilistic version) If , and are positive reals, then
Proof: By subtracting the mean from each of we may assume that these random variables have mean zero. The claim now follows from Lemma 6.
To get a feel for this inequality, suppose for sake of discussion that and all have unit variance and , but that the are pairwise uncorrelated. Then the right-hand side is equal to , and the left-hand side is the sum of squares of the correlations between and each of the . Any individual correlation is then still permitted to be as large as , but it is not possible for multiple correlations to be this large simultaneously. This is geometrically intuitive if one views the random variables as vectors in a Hilbert space (and correlation as a rough proxy for the angle between such vectors). This lemma also shares many commonalities with the large sieve inequality, discussed in this set of notes.
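The geometric intuition described here is already visible in the exactly orthogonal case, where the lemma reduces to the classical Bessel inequality: for orthonormal vectors, the sum of the squared inner products of a further vector v with them is at most the squared norm of v. Here is a minimal numerical illustration in Euclidean space (the dimensions and the random choice of v are arbitrary, and the orthonormal vectors are simply taken to be standard basis vectors).

```python
import random

# Classical Bessel inequality, the exactly orthogonal case of the lemma:
# for orthonormal u_1, ..., u_n, the sum of |<v, u_i>|^2 is at most ||v||^2,
# so a single vector cannot correlate strongly with many orthonormal vectors
# at once.  Illustrative sketch with u_i the standard basis vectors of R^d.

random.seed(0)
d, n = 50, 10                     # ambient dimension, number of u_i

v = [random.gauss(0, 1) for _ in range(d)]
norm_sq = sum(x * x for x in v)
bessel_sum = sum(v[i] ** 2 for i in range(n))   # sum_i |<v, e_i>|^2

print(f"sum of squared inner products: {bessel_sum:.4f}")
print(f"||v||^2:                       {norm_sq:.4f}")
```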
One basic number-theoretic application of this inequality is the following sampling inequality of Elliott, that lets one approximate a sum of an arithmetic function by its values on multiples of primes :
Exercise 8 (Elliott’s inequality) Let be an interval of length at least . Show that for any function , one has
(Hint: Apply Corollary 7 with , , and , where is the uniform variable from Lemma 2.) Conclude in particular that for every , one has
for all primes outside of a set of exceptional primes of logarithmic size .
Informally, the point of this inequality is that an arbitrary arithmetic function may exhibit correlation with the indicator function of the multiples of for some primes , but cannot exhibit significant correlation with all of these indicators simultaneously, because these indicators are not very correlated to each other. We note however that this inequality only gains a tiny bit over trivial bounds, because the set of primes up to only has logarithmic size by Mertens’ theorems; thus, any asymptotics that are obtained using this inequality will typically have error terms that only improve upon the trivial bound by factors such as .
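To see the sampling phenomenon of Exercise 8 in action, one can take a simple bounded arithmetic function and compare its full average over an interval with its averages sampled along multiples of various primes p. In the toy example below (my own choice: the indicator of the residue class 1 mod 3, with arbitrary cutoffs), every prime other than p = 3 samples the average accurately, and the lone exceptional prime indeed forms a set of small logarithmic size.

```python
# Illustration of the sampling phenomenon behind Elliott's inequality: for a
# bounded arithmetic function f on [1, x], the full average of f is close to
# the average of f sampled along multiples of p, for "most" primes p.  The
# choice f(n) = 1_{n = 1 mod 3} and the cutoffs are illustrative only.

x = 10**6

def f(n):
    return 1 if n % 3 == 1 else 0

def primes_up_to(m):
    sieve = [True] * (m + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, m + 1, i):
                sieve[j] = False
    return [i for i in range(2, m + 1) if sieve[i]]

full_avg = sum(f(n) for n in range(1, x + 1)) / x
print(f"full average: {full_avg:.4f}")

for p in primes_up_to(30):
    sampled = p * sum(f(p * m) for m in range(1, x // p + 1)) / x
    print(f"p = {p:2d}: sampled average {sampled:.4f}")
```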
Exercise 9 (Elliott’s inequality, logarithmic form) Let with . Show that for any function , one has
and thus for every , one has
for all primes outside of an exceptional set of primes of logarithmic size .
Exercise 10 Use Exercise 9 and a duality argument to provide an alternate proof of Exercise 4. (Hint: express the left-hand side of (12) as a correlation between and some suitably -normalised arithmetic function .)
As a quick application of Elliott’s inequality, let us establish a weak version of the prime number theorem:
Proposition 11 (Weak prime number theorem) For any we have
whenever are sufficiently large depending on .
This estimate is weaker than what one can obtain by existing methods, such as Exercise 56 of Notes 1. However in the next section we will refine this argument to recover the full prime number theorem.
Proof: Fix , and suppose that are sufficiently large. From Exercise 9 one has
for all primes outside of an exceptional set of logarithmic size . If we restrict attention to primes then one sees from the integral test that one can replace the sum by and only incur an additional error of . If we furthermore restrict to primes larger than , then the contribution of those that are divisible by is also . For not divisible by , one has . Putting all this together, we conclude that
for all primes outside of an exceptional set of logarithmic size . In particular, for large enough this statement is true for at least one such . The claim then follows.
As another application of Elliott’s inequality, we present a criterion for orthogonality between multiplicative functions and other sequences, first discovered by Katai (with related results also introduced earlier by Daboussi and Delange), and rediscovered by Bourgain, Sarnak, and Ziegler:
Proposition 12 (Daboussi-Delange-Katai-Bourgain-Sarnak-Ziegler criterion) Let be a multiplicative function with for all , and let be another bounded function. Suppose that one has
as for any two distinct primes . Then one has
as .
Proof: Suppose the claim fails; then there exists (which we can assume to be small) and arbitrarily large such that
By Exercise 8, this implies that
for all primes outside of an exceptional set of logarithmic size . Call such primes “good primes”. In particular, by the pigeonhole principle, and assuming large enough, there exists a dyadic range with which contains good primes.
Fix a good prime in . From (15) we have
We can replace the range by with negligible error. We also have except when is a multiple of , but this latter case only contributes which is also negligible compared to the right-hand side. We conclude that
for every good prime. On the other hand, from Lemma 6 we have
where range over the good primes in . The left-hand side is then , and by hypothesis the right-hand side is for large enough. As and is small, this gives the desired contradiction.
Exercise 13 (Daboussi-Delange theorem) Let be irrational, and let be a multiplicative function with for all . Show that
as . If instead is rational, show that there exists a multiplicative function with for which the statement (16) fails. (Hint: use Dirichlet characters and Plancherel’s theorem for finite abelian groups.)
— 2. An elementary proof of the prime number theorem —
Define the Mertens function
As shown in Theorem 58 of Notes 1, the prime number theorem is equivalent to the bound
as . We now give a recent proof of this theorem, due to Redmond McNamara (personal communication), that relies primarily on Elliott’s inequality and the Selberg symmetry formula; it is a relative of the standard elementary proof of this theorem due to Erdős and Selberg. In order to keep the exposition simple, we will not arrange the argument in a fashion that optimises the decay rate (in any event, there are other proofs of the prime number theorem that give significantly stronger bounds).
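For orientation, the decay M(x) = o(x) that we are aiming for is already visible numerically at modest heights. The sketch below (cutoff chosen arbitrarily) sieves the Möbius function and prints M(x)/x at a few scales; it is purely illustrative and plays no role in the proof.

```python
# The Mertens function M(x) is the sum of the Mobius function mu(n) over
# n <= x; the prime number theorem is equivalent to M(x) = o(x).  A quick
# numerical look at M(x)/x.  Illustrative sketch; the cutoff is arbitrary.

N = 10**6

# Sieve for mu(n), n <= N.
mu = [1] * (N + 1)
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for m in range(p, N + 1, p):
            if m > p:
                is_prime[m] = False
            mu[m] *= -1
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0

M = 0
for n in range(1, N + 1):
    M += mu[n]
    if n in (10**3, 10**4, 10**5, 10**6):
        print(f"x = {n:>7d}:  M(x) = {M:>5d},  M(x)/x = {M / n:+.5f}")
```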
Firstly we see that Elliott’s inequality gives the following weaker version of (17):
Lemma 14 (Oscillation for Mertens’ function) If and , then we have
for all primes outside of an exceptional set of primes of logarithmic size .
Proof: We may assume as the claim is trivial otherwise. From Exercise 8 applied to and , we have
for all outside of an exceptional set of primes of logarithmic size . Since for not divisible by , the right-hand side can be written as
Since outside of an exceptional set of logarithmic size , the claim follows.
Informally, this lemma asserts that for most primes , which morally implies that for most primes . If we can then locate suitable primes with , this should then lead to , which should then yield the prime number theorem . The manipulations below are intended to make this argument rigorous.
It will be convenient to work with a logarithmically averaged version of this claim.
Corollary 15 (Logarithmically averaged oscillation) If and is sufficiently large depending on , then
Proof: For each , we have from the previous lemma that
for all outside of an exceptional set of logarithmic size . We then have
so it suffices by Markov’s inequality to show that
But by Fubini’s theorem, the left-hand side may be bounded by
and the claim follows.
Let be sufficiently small, and let be sufficiently large depending on . Call a prime good if the bound (18) holds and bad otherwise, thus all primes outside of an exceptional set of bad primes of logarithmic size are good. Now we observe that we can make small as long as we can make two good primes multiply to be close to a third:
Proof: By definition of good prime, we have the bounds
\[ \int_{\log x}^x \left|\frac{M(t)}{t} + \frac{M(t/p_1)}{t/p_1}\right| \frac{dt}{t} \ll \varepsilon \log x \]
We rescale (19) by to conclude that
We can replace the integration range here from to with an error of if is large enough. Also, since , we have . Thus we have
Combining this with (2), (20) and the triangle inequality (writing as a linear combination of , , and ) we conclude that
This is an averaged version of the claim we need. To remove the averaging, we use the identity (see equation (63) of Notes 1) to conclude that
From the triangle inequality one has
and hence by Mertens’ theorem
From the Brun-Titchmarsh inequality (Corollary 61 of Notes 1) we have
and so from the previous estimate and Fubini’s theorem one has
and hence, by the preceding estimate, there exist (deterministic) good primes with the required properties.
Of course, using the prime number theorem here to prove the prime number theorem would be circular. However, we can still locate a good triple of primes using the Selberg symmetry formula
as , where is the second von Mangoldt function
see Proposition 60 of Notes 1. We can strip away the contribution of the primes:
Exercise 17 Show that
as .
In particular, on evaluating this at and subtracting, we have
whenever is sufficiently large depending on . In particular, for any such , one either has
(or both). Informally, the Selberg symmetry formula shows that the interval contains either a lot of primes, or a lot of semiprimes. The factor of is slightly annoying, so we now remove it. Consider the contribution of those primes to (24) with . This is bounded by
which we can bound crudely using the Chebyshev bound by
which by Mertens’ theorem is . Thus the contribution of this case can be safely removed from (24). Similarly for those cases when . For the remaining cases we bound . We conclude that for any sufficiently large , either (23) or
In order to find primes with close to , it would be very convenient if we could find a for which (23) and (25) both hold. We can’t quite do this directly, but thanks to the “connected” nature of the set of scales , we can do the next best thing:
Proposition 18 Suppose is sufficiently large depending on . Then there exists with such that
Proof: We know that every in obeys at least one of (26), (27). Our task is to produce an adjacent pair of , one of which obeys (26) and the other obeys (27). Suppose for contradiction that no such pair exists; then whenever fails to obey (26), any adjacent must also fail to do so, and similarly for (27). Thus either (26) will fail to hold for all , or (27) will fail to hold for all such . If (26) fails for all , then on summing we have
which contradicts Mertens’ theorem if is large enough because the left-hand side is . Similarly, if (27) fails for all , then
and again Mertens’ theorem can be used to lower bound the left-hand side by (in fact one can even gain an additional factor of if one works things through carefully) and obtain a contradiction.
The above proposition does indeed provide a triple of primes with . If is sufficiently large depending on and less than (say) , so that , this would give us what we need as long as one of the triples consisted only of good primes. The only way this can fail is if either
for some , or if
for some . In the first case, we can sum to conclude that
and in the second case we have
Since the total set of bad primes up to has logarithmic size , we conclude from the pigeonhole principle (and the divergence of the harmonic series ) that for any depending only on , and any large enough, there exists such that neither (28) nor (29) holds. Indeed the set of obeying (28) has logarithmic size , and similarly for (29). Choosing a that avoids both of these scenarios, we then find a good and good with , so that , and then by Proposition 16 we conclude that for all sufficiently large . Sending to zero, we obtain the prime number theorem.
— 3. Entropy methods —
In the previous section we explored the consequences of the second moment method, which applies to square-integrable random variables taking values in the real or complex numbers. Now we explore entropy methods, which instead apply to random variables taking only a finite number of values (equipped with the discrete sigma-algebra), but whose range need not be numerical in nature. (One could extend entropy methods to slightly larger classes of random variables, such as ones that attain a countable number of values, but for our applications finitely-valued random variables will suffice.)
The fundamental notion here is that of the Shannon entropy of a random variable. If takes values in a finite set , its Shannon entropy (or entropy for short) is defined by the formula
where ranges over all the possible values of , and we adopt the convention , so that values that are almost surely not attained by do not influence the entropy. We choose here to use the natural logarithm to normalise our entropy (in which case a unit of entropy is known as a “nat”); in the information theory literature it is also common to use the base two logarithm to measure entropy (in which case a unit of entropy is known as a “bit”, which is equal to \log 2 nats). However, the precise choice of normalisation will not be important in our discussion.
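As a concrete illustration of the definition and of the nat/bit conventions, here is a minimal computation of the Shannon entropy of an empirical distribution (the sample data is an arbitrary toy choice):

```python
import math
from collections import Counter

# Shannon entropy of a finitely-valued random variable, represented here by
# an empirical distribution.  The natural logarithm gives entropy in nats;
# dividing by log 2 converts to bits.  Minimal illustrative sketch.

def shannon_entropy(probabilities):
    # convention: terms with p = 0 contribute nothing
    return -sum(p * math.log(p) for p in probabilities if p > 0)

samples = ['a', 'a', 'b', 'c', 'a', 'b', 'a', 'a']
counts = Counter(samples)
dist = [c / len(samples) for c in counts.values()]

H = shannon_entropy(dist)
print(f"entropy: {H:.4f} nats = {H / math.log(2):.4f} bits")
print(f"maximum possible on {len(counts)} values (uniform case): "
      f"{math.log(len(counts)):.4f} nats")
```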
It is clear that if two random variables have the same probability distribution, then they have the same entropy. Also, the precise choice of range set is not terribly important: if takes values in , and is an injection, then it is clear that and have the same entropy:
This is in sharp contrast to moment-based statistics such as the mean or variance, which can be radically changed by applying some injective transformation to the range values.
Informally, the entropy measures how “spread out” or “disordered” the distribution of is, behaving like a logarithm of the size of the “essential support” of such a variable; from an information-theoretic viewpoint, it measures the amount of “information” one learns when one is told the value of . Here are some basic properties of Shannon entropy that help support this intuition:
Exercise 19 (Basic properties of Shannon entropy) Let be a random variable taking values in a finite set .
- (i) Show that , with equality if and only if is almost surely deterministic (that is to say, it is almost surely equal to a constant ).
- (ii) Show that
with equality if and only if is uniformly distributed on . (Hint: use Jensen’s inequality and the convexity of the map on .)
- (iii) (Shannon-McMillan-Breiman theorem) Let be a natural number, and let be independent copies of . As , show that there is a subset of cardinality with the properties that
and
uniformly for all . (The proof of this theorem will require Stirling’s formula, which you may assume here as a black box; see also this previous blog post.) Informally, we thus see that a large tuple of independent samples of approximately behaves like a uniform distribution on values.
The concept of Shannon entropy becomes significantly more powerful when combined with that of conditioning. Recall that a random variable taking values in a range set can be modeled by a measurable map from a probability space to the range . If is an event in of positive probability, we can then condition to the event to form a new random variable on the conditioned probability space , where
is the restriction of the -algebra to ,
is the conditional probability measure on , and is the restriction of to . This random variable lives on a different probability space than itself, so it does not make sense to directly combine these variables (thus for instance one cannot form the sum even when both random variables are real or complex valued); however, one can still form the Shannon entropy of the conditioned random variable , which is given by the same formula
Given another random variable taking values in another finite set , we can then define the conditional Shannon entropy to be the expected entropy of the level sets , thus
with the convention that the summand here vanishes when . From the law of total probability we have
for any , and hence by Jensen’s inequality
for any ; summing we obtain the Shannon entropy inequality
Informally, this inequality asserts that the new information content of can be decreased, but not increased, if one is first told some additional information .
This inequality (32) can be rewritten in several ways:
Exercise 20 Let , be random variables taking values in finite sets respectively.
- (i) Establish the chain rule
where is the joint random variable . In particular, (32) can be expressed as a subadditivity formula
- (ii) If is a function of , in the sense that for some (deterministic) function , show that .
- (iii) Define the mutual information by the formula
Establish the inequalities
with the first inequality holding with equality if and only if are independent, and the latter inequalities holding with equality if and only if is a function of (or vice versa).
From the above exercise we see that the mutual information is a measure of dependence between and , much as correlation or covariance was in the previous sections. There is however one key difference: whereas a zero correlation or covariance is a consequence but not a guarantee of independence, zero mutual information is logically equivalent to independence, and is thus a stronger property. To put it another way, zero correlation or covariance allows one to calculate the average in terms of individual averages of , but zero mutual information is stronger because it allows one to calculate the more general averages in terms of individual averages of , for arbitrary functions taking values in the complex numbers. This increased power of the mutual information statistic will allow us to estimate various averages of interest in analytic number theory in ways that do not seem amenable to second moment methods.
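The contrast drawn in this paragraph can be seen in a small computation: the zero-covariance pair from Exercise 1 has strictly positive mutual information, which detects the dependence, while a genuinely independent pair has mutual information exactly zero. The sketch below computes I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint probability table; the toy distributions are my own choices.

```python
import math

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y) computed from a joint
# probability table.  Zero mutual information is equivalent to independence,
# unlike zero covariance.  Illustrative sketch with toy distributions.

def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

def mutual_information(joint):
    # joint: dict mapping (x, y) -> probability
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# X uniform on {-1, 0, 1}, Y = X**2: zero covariance (cf. Exercise 1) but
# strictly positive mutual information.
dependent = {(-1, 1): 1/3, (0, 0): 1/3, (1, 1): 1/3}
print("zero-covariance but dependent pair: I =", round(mutual_information(dependent), 4))

# A genuinely independent pair has mutual information exactly zero.
independent = {(x, y): 1/4 for x in (0, 1) for y in (0, 1)}
print("independent pair:                   I =", round(mutual_information(independent), 4))
```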
The subadditivity formula can be conditioned to any event occurring with positive probability (replacing the random variables by their conditioned counterparts ), yielding the inequality
Applying this inequality to the level events of some auxiliary random variable taking values in another finite set , multiplying by , and summing, we conclude the inequality
In other words, the conditional mutual information
between and conditioning on is always non-negative:
One has conditional analogues of the above exercise:
Exercise 21 Let , , be random variables taking values in finite sets respectively.
- (i) Establish the conditional chain rule
In particular, (35) is equivalent to the inequality
- (ii) Show that equality holds in (35) if and only if are conditionally independent relative to , which means that
for any , , .
- (iii) Show that , with equality if and only if is almost surely a deterministic function of .
- (iv) Show the data processing inequality
for any functions , , and more generally that
- (v) If is an injective function, show that
However, if is not assumed to be injective, show by means of examples that there is no order relation between the left and right-hand side of (39) (in other words, show that either side may be greater than the other). Thus, increasing or decreasing the amount of information that is known may influence the mutual information between two remaining random variables in either direction.
- (vi) If is a function of , and also a function of (thus for some and ), and a further random variable is a function jointly of (thus for some ), establish the submodularity inequality
We now give a key motivating application of the Shannon entropy inequalities. Suppose one has a sequence of random variables, all taking values in a finite set , which are stationary in the sense that the tuples and have the same distribution for every . In particular we will have
and hence by (38)
If we write , we conclude from (33) that we have the concavity property
In particular we have for any , which on summing the telescoping series (noting that ) gives
and hence we have the entropy monotonicity
In particular, the limit exists. This quantity is known as the Kolmogorov-Sinai entropy of the stationary process ; it is an important statistic in the theory of dynamical systems, and roughly speaking measures the amount of entropy produced by this process as a function of a discrete time variable . We will not directly need the Kolmogorov-Sinai entropy in our notes, but a variant of the entropy monotonicity formula (40) will be important shortly.
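To illustrate the entropy monotonicity (40) and the limiting Kolmogorov-Sinai entropy concretely, one can compute the joint entropies exactly for a small stationary process. The sketch below uses a two-state stationary Markov chain (the transition probabilities are an arbitrary choice); the ratio of joint entropy to n decreases in n and approaches the entropy rate of the chain.

```python
import math
from itertools import product

# For a stationary process, H(X_1,...,X_n)/n is non-increasing in n and
# converges to the Kolmogorov-Sinai entropy.  Here the process is a
# two-state stationary Markov chain (arbitrary toy transition matrix).

P = {0: {0: 0.9, 1: 0.1},     # transition probabilities
     1: {0: 0.3, 1: 0.7}}
pi = {0: 0.75, 1: 0.25}       # stationary distribution of P

def joint_entropy(n):
    H = 0.0
    for seq in product((0, 1), repeat=n):
        p = pi[seq[0]]
        for a, b in zip(seq, seq[1:]):
            p *= P[a][b]
        if p > 0:
            H -= p * math.log(p)
    return H

for n in (1, 2, 4, 8, 12):
    print(f"n = {n:2d}:  H(X_1,...,X_n)/n = {joint_entropy(n) / n:.4f}")
```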
In our application we will be dealing with processes that are only asymptotically stationary rather than stationary. To control this we recall the notion of the total variation distance between two random variables taking values in the same finite space , defined by
There is an essentially equivalent notion of this distance which is also often in use:
Exercise 22 If two random variables take values in the same finite space , establish the inequalities
and for any , establish the inequality
Shannon entropy is continuous in total variation distance as long as we keep the range finite. More quantitatively, we have
Lemma 23 If two random variables take values in the same finite space , then
with the convention that the error term vanishes when .
Proof: Set . The claim is trivial when (since then have the same distribution) and when (from (31)), so let us assume , and our task is to show that
If we write , , and , then
By dividing into the cases and we see that
since , it thus suffices to show that
But from Jensen’s inequality (31) one has
since , the claim follows.
In the converse direction, if a random variable has entropy close to the maximum , then one can control the total variation:
Lemma 24 (Special case of Pinsker inequality) If takes values in a finite set , and is a uniformly distributed random variable on , then
Of course, we have , so we may also write the above inequality as
The optimal value of the implied constant here is known to equal , but we will not use this sharp version of the inequality here.
Proof: If we write and , and , then we can rewrite the claimed inequality as
Observe that the function is concave, and in fact for all . From this and Taylor expansion with remainder we have the inequality
for some between and . Since is independent of , and , we thus have on summing in
By Cauchy-Schwarz we then have
Since and , the claim follows.
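Lemma 24 is also easy to test numerically. In the sketch below (illustrative only), the total variation distance to the uniform distribution is computed assuming the sup-over-events normalisation, under which it equals half the l^1 distance between the probability mass functions; for randomly generated distributions on a small set it indeed stays below the square root of the entropy deficit log|A| - H(X).

```python
import math, random

# Numerical sanity check of the special case of Pinsker's inequality
# (Lemma 24): for X taking values in a finite set A and U uniform on A, the
# total variation distance d_TV(X, U) is O(sqrt(log|A| - H(X))).
# Illustrative sketch with randomly generated distributions.

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def tv_from_uniform(p):
    # sup-over-events normalisation: half the l^1 distance for discrete laws
    n = len(p)
    return 0.5 * sum(abs(q - 1 / n) for q in p)

random.seed(0)
n = 8
for _ in range(5):
    w = [random.random() for _ in range(n)]
    total = sum(w)
    p = [x / total for x in w]
    deficit = math.log(n) - entropy(p)
    print(f"d_TV(X, U) = {tv_from_uniform(p):.4f},  "
          f"sqrt(log|A| - H(X)) = {math.sqrt(deficit):.4f}")
```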
The above lemma does not hold when the comparison variable is not assumed to be uniform; in particular, two non-uniform random variables can have precisely the same entropy and yet have different distributions, so that their total variation distance is positive. There is a more general variant, known as the Pinsker inequality, which we will not use in these notes:
Exercise 25 (Pinsker inequality) If take values in a finite set , define the Kullback-Leibler divergence of relative to by the formula
(with the convention that the summand vanishes when vanishes).
- (i) Establish the Gibbs inequality .
- (ii) Establish the Pinsker inequality
In particular, vanishes if and only if have identical distribution. Show that this implies Lemma 24 as a special case.
- (iii) Give an example to show that the Kullback-Leibler divergence need not be symmetric, thus there exist such that .
- (iv) If are random variables taking values in finite sets , and are independent random variables taking values in respectively with each having the same distribution as , show that
In our applications we will need a relative version of Lemma 24:
Corollary 26 (Relative Pinsker inequality) If takes values in a finite set , takes values in a finite set , and is a uniformly distributed random variable on that is independent of , then
Proof: From direct calculation we have the identity
As is independent of , is uniformly distributed on . From Lemma 24 we conclude
Inserting this bound and using the Cauchy-Schwarz inequality, we obtain the claim.
Now we are ready to apply the above machinery to give a key inequality that is analogous to Elliott’s inequality. Inequalities of this type first appeared in one of my papers, introducing what I called the “entropy decrement argument”; the following arrangement of the inequality and proof is due to Redmond McNamara (personal communication).
Theorem 27 (Entropy decrement inequality) Let be a random variable taking values in a finite set of integers, which obeys the approximate stationarity
for some . Let be a collection of distinct primes less than some threshold , and let be natural numbers that are also bounded by . Let be a function taking values in a finite set . For , let denote the -valued random variable
and let denote the -valued random variable
Also, let be a random variable drawn uniformly from , independently of . Then
The factor (arising from an invocation of the Chinese remainder theorem in the proof) unfortunately restricts the usefulness of this theorem to the regime in which all the primes involved are of “sub-logarithmic size”, but once one is in that regime, the second term on the right-hand side of (44) tends to be negligible in practice. Informally, this theorem asserts that for most small primes , the random variables and behave as if they are independent of each other.
Proof: We can assume , as the claim is trivial for (the all have zero entropy). For , we introduce the -valued random variable
The idea is to exploit some monotonicity properties of the quantity , in analogy with (40). By telescoping series we have
where we extend (43) to the case. From (37) we have
Now we lower bound the summand on the right-hand side. From multiple applications of the conditional chain rule (36) we have
We now use the approximate stationarity of to derive an approximate monotonicity property for . If , then from (38) we have
Write and
Note that is a deterministic function of and vice versa. Thus we can replace by in the above formula, and conclude that
The tuple takes values in a set of cardinality thanks to the Chebyshev bounds. Hence by two applications of Lemma 23, (42) we have
The first term on the right-hand side is . Worsening the error term slightly, we conclude that
and hence
for any . In particular
which by (46), (47) rearranges to
From (45) we conclude that
Meanwhile, from Corollary 26, (38), (37) we have
The probability distribution of is a function on , which by the Chinese remainder theorem we can identify with a cyclic group where . From (42) we see that the value of this distribution at adjacent values of this cyclic group varies by , hence the total variation distance between this random variable and the uniform distribution on is by Chebyshev bounds. By Lemma 23 we then have
and thus
The claim follows.
We now compare this result to Elliott’s inequality. If one tries to address precisely the same question that Elliott’s inequality does – namely, to compare a sum with sampled subsums – then the results are quantitatively much weaker:
Corollary 28 (Weak Elliott inequality) Let be an interval of length at least . Let be a function with for all , and let . Then one has
for all primes outside of an exceptional set of primes of logarithmic size .
Comparing this with Exercise 8 we see that we cover a much smaller range of primes ; also the size of the exceptional set is slightly worse. This version of Elliott’s inequality is however still strong enough to recover a proof of the prime number theorem as in the previous section.
Proof: We can assume that is small, as the claim is trivial for comparable to . We can also assume that
since the claim is also trivial otherwise (just make all primes up to exceptional, then use Mertens’ theorem). As a consequence of this, any quantity involving in the denominator will end up being completely negligible in practice. We can also restrict attention to primes less than (say) , since the remaining primes between and have logarithmic size .
By rounding the real and imaginary parts of to the nearest multiple of , we may assume that takes values in some finite set of complex numbers of size with cardinality . Let be drawn uniformly at random from . Then (42) holds with , and from Theorem 27 with and (which makes the second term of the right-hand side of (44) negligible) we have
where are the primes up to , arranged in increasing order. By Markov’s inequality, we thus have
for outside of a set of primes of logarithmic size .
Let be as above. Now let be the function
that is to say picks out the unique component of the tuple in which is divisible by . This function is bounded by , and then by (41) we have
The left-hand side is equal to
which on switching the summations and using the large nature of can be rewritten as
Meanwhile, the right-hand side is equal to
which again by switching the summations becomes
The claim follows.
In the above argument we applied (41) with a very specific choice of function . The power of Theorem 27 lies in the ability to select many other such functions , leading to estimates that do not seem to be obtainable purely from the second moment method. In particular we have the following generalisation of the previous estimate:
Proposition 29 (Weak Elliott inequality for multiple correlations) Let be an interval of length at least . Let be a function with for all , and let . Let be integers. Then one has
for all primes outside of an exceptional set of primes of logarithmic size .
Proof: We allow all implied constants to depend on . As before we can assume that is sufficiently small (depending on ), that takes values in a set of bounded complex numbers of cardinality , and that is large in the sense of (48), and restrict attention to primes up to . By shifting the and using the large nature of we can assume that the are all non-negative, taking values in for some . We now apply Theorem 27 with and conclude as before that
for outside of a set of primes of logarithmic size .
Let be as above. Let be the function
This function is still bounded by , so by (41) as before we have
The left-hand side is equal to
which on switching the summations and using the large nature of can be rewritten as
Meanwhile, the right-hand side is equal to
which again by switching the summations becomes
The claim follows.
There is a logarithmically averaged version of the above proposition:
Exercise 30 (Weak Elliott inequality for logarithmically averaged multiple correlations) Let with , let be a function bounded in magnitude by , let , and let be integers. Show that
for all primes outside of an exceptional set of primes of logarithmic size .
When one specialises to multiplicative functions, this lets us dilate shifts in multiple correlations by primes:
Exercise 31 Let with , let be a multiplicative function bounded in magnitude by , let , and let be nonnegative integers. Show that
for all primes outside of an exceptional set of primes of logarithmic size .
For instance, setting to be the Möbius function, , , and (say), we see that
for all primes outside of an exceptional set of primes of logarithmic size . In particular, for large enough, one can obtain bounds of the form
for various moderately large sets of primes . It turns out that these double sums on the right-hand side can be estimated by methods which we will cover in later series of notes. Among other things, this allows us to establish estimates such as
as , which to date have only been established using these entropy methods (in conjunction with the methods discussed in later notes). This is progress towards an open problem in analytic number theory known as Chowla’s conjecture, which we will also discuss in later notes.