Twisted convolution and the sensitivity conjecture
What's new 2019-08-20
Earlier this month, Hao Huang (who, incidentally, was a graduate student here at UCLA) gave a remarkably short proof of a long-standing problem in theoretical computer science known as the sensitivity conjecture. See for instance this blog post of Gil Kalai for further discussion and links to many other online discussions of this result. One formulation of the theorem proved is as follows. Define the $n$-dimensional hypercube graph $Q_n$ to be the graph with vertex set $({\bf Z}/2{\bf Z})^n$, and with every vertex $v \in ({\bf Z}/2{\bf Z})^n$ joined to the $n$ vertices $v + e_1, \dots, v + e_n$, where $e_1,\dots,e_n$ is the standard basis of $({\bf Z}/2{\bf Z})^n$.
Theorem 1 (Lower bound on maximum degree of induced subgraphs of hypercube) Let $E$ be a set of at least $2^{n-1}+1$ vertices in $Q_n$. Then there is a vertex in $E$ that is adjacent (in $Q_n$) to at least $\sqrt{n}$ other vertices in $E$.
The bound $\sqrt{n}$ (or more precisely, $\lceil \sqrt{n} \rceil$) is completely sharp, as shown by Chung, Furedi, Graham, and Seymour; we describe this example below the fold. When combined with earlier reductions of Gotsman-Linial and Nisan-Szegedy, this theorem implies the sensitivity conjecture; we give these reductions below the fold also.
Let $A = (a_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}$ be the adjacency matrix of $Q_n$ (where we index the rows and columns directly by the vertices in $({\bf Z}/2{\bf Z})^n$, rather than selecting some enumeration of them), thus $a_{vw} = 1$ when $w = v + e_i$ for some $i = 1,\dots,n$, and $a_{vw} = 0$ otherwise. The above theorem then asserts that if $E$ is a set of at least $2^{n-1}+1$ vertices, then the $E \times E$ minor $(a_{vw})_{v,w \in E}$ of $A$ has a row (or column) that contains at least $\sqrt{n}$ non-zero entries.
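As a concrete sanity check of this setup (this sketch is my own illustration, not part of the original post), one can build the adjacency matrix for small $n$, encoding vertices of $({\bf Z}/2{\bf Z})^n$ as $n$-bit integers so that group addition becomes XOR:

```python
# Vertices of Q_n encoded as n-bit integers; v, w are adjacent exactly
# when they differ in a single coordinate, i.e. v ^ w is a power of two.
def adjacency(n):
    N = 1 << n
    return [[1 if bin(v ^ w).count("1") == 1 else 0 for w in range(N)]
            for v in range(N)]

n = 4
A = adjacency(n)
# Every row (and column, by symmetry) contains exactly n non-zero entries.
assert all(sum(row) == n for row in A)
assert all(A[v][w] == A[w][v] for v in range(1 << n) for w in range(1 << n))
```

The bitmask encoding is just one convenient choice of the enumeration-free indexing described above.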
The key step to prove this theorem is the construction of a rather curious variant $H$ of the adjacency matrix $A$:

Proposition 2 There exists a $2^n \times 2^n$ matrix $H = (h_{vw})_{v,w \in ({\bf Z}/2{\bf Z})^n}$ which is entrywise dominated by $A$ in the sense that

$$|h_{vw}| \leq a_{vw} \hbox{ for all } v, w \in ({\bf Z}/2{\bf Z})^n \ \ \ \ \ (1)$$

and which has $\sqrt{n}$ as an eigenvalue with multiplicity $2^{n-1}$.
Assuming this proposition, the proof of Theorem 1 can now be quickly concluded. If we view $H$ as a linear operator on the $2^n$-dimensional space $\ell^2(({\bf Z}/2{\bf Z})^n)$ of functions of $({\bf Z}/2{\bf Z})^n$, then by hypothesis this space has a $2^{n-1}$-dimensional subspace $V$ on which $H$ acts by multiplication by $\sqrt{n}$. If $E$ is a set of at least $2^{n-1}+1$ vertices in $Q_n$, then the space $\ell^2(E)$ of functions supported on $E$ has codimension at most $2^{n-1}-1$ in $\ell^2(({\bf Z}/2{\bf Z})^n)$, and hence intersects $V$ non-trivially. Thus the $E \times E$ minor $(h_{vw})_{v,w \in E}$ of $H$ also has $\sqrt{n}$ as an eigenvalue (this can also be derived from the Cauchy interlacing inequalities), and in particular this minor has operator norm at least $\sqrt{n}$. By Schur's test, this implies that one of the rows or columns of this matrix has absolute values summing to at least $\sqrt{n}$; by (1), the corresponding row or column of the minor of $A$ then contains at least $\sqrt{n}$ non-zero entries, giving the claim.
Remark 3 The argument actually gives a strengthening of Theorem 1: there exists a vertex $v_0$ of $E$ with the property that for every natural number $k$, there are at least $n^{k/2}$ paths of length $k$ in the restriction $Q_n|_E$ of $Q_n$ to $E$ that start from $v_0$. Indeed, if we let $u = (u_v)_{v \in E}$ be an eigenfunction of the minor $H_E := (h_{vw})_{v,w \in E}$ with eigenvalue $\sqrt{n}$, and let $v_0$ be a vertex in $E$ that maximises the value of $|u_{v_0}|$, then for any natural number $k$ we have that the $v_0$ component of $H_E^k u$ is equal to $n^{k/2} u_{v_0}$; on the other hand, by the triangle inequality, this component is at most $|u_{v_0}|$ times the number of length $k$ paths in $Q_n|_E$ starting from $v_0$, giving the claim.
This argument can be viewed as an instance of a more general "interlacing method" to try to control the behaviour of a graph on all large subsets $E$ of vertices, by first generating a matrix $H$ on the full graph with very good spectral properties, which are then partially inherited by the $E \times E$ minor of $H$ by interlacing inequalities. In previous literature using this method (see e.g., this survey of Haemers, or this paper of Wilson), either the original adjacency matrix $A$, or some non-negatively weighted version of that matrix, was used as the controlling matrix $H$; the novelty here is the use of signed controlling matrices. It will be interesting to see what further variants and applications of this method emerge in the near future. (Thanks to Anurag Bishnoi in the comments for these references.)
The "magic" step in the above argument is constructing $H$. In Huang's paper, $H$ is constructed recursively in the dimension $n$ in a rather simple but mysterious fashion. Very recently, Roman Karasev gave an interpretation of this matrix in terms of the exterior algebra on ${\bf R}^n$. In this post I would like to give an alternate interpretation in terms of the operation of twisted convolution, which originated in the theory of the Heisenberg group in quantum mechanics.
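Huang's recursion is short enough to test directly. The sketch below (my own, not from the post) builds the matrices via the recursion $H_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, $H_n = \begin{pmatrix} H_{n-1} & I \\ I & -H_{n-1} \end{pmatrix}$ used in Huang's paper, and checks the two properties needed for Proposition 2: the identity $H_n^2 = n I$ (which forces all eigenvalues to be $\pm\sqrt{n}$) and entrywise domination by the adjacency matrix:

```python
def huang(n):
    # H_1 = [[0,1],[1,0]]; H_n = [[H_{n-1}, I], [I, -H_{n-1}]].
    H = [[0, 1], [1, 0]]
    for _ in range(n - 1):
        m = len(H)
        top = [row[:] + [1 if j == i else 0 for j in range(m)]
               for i, row in enumerate(H)]
        bot = [[1 if j == i else 0 for j in range(m)] + [-x for x in H[i]]
               for i in range(m)]
        H = top + bot
    return H

n = 4
N = 1 << n
H = huang(n)
# H^2 = n I; together with tr H = 0 this makes the eigenvalues +sqrt(n)
# and -sqrt(n), each with multiplicity 2^{n-1}.
sq = [[sum(H[i][k] * H[k][j] for k in range(N)) for j in range(N)]
      for i in range(N)]
assert all(sq[i][j] == (n if i == j else 0) for i in range(N) for j in range(N))
assert sum(H[i][i] for i in range(N)) == 0
# |H| agrees entrywise with the adjacency matrix of Q_n, giving (1).
assert all(abs(H[v][w]) == (1 if bin(v ^ w).count("1") == 1 else 0)
           for v in range(N) for w in range(N))
```

With the bitmask vertex encoding, the block structure of the recursion corresponds to splitting on the top bit of the vertex label.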
Firstly note that the original adjacency matrix $A$, when viewed as a linear operator on $\ell^2(({\bf Z}/2{\bf Z})^n)$, is a convolution operator

$$A f = f * \mu$$

where

$$\mu := \sum_{i=1}^n \delta_{e_i}$$

is the counting measure on the standard basis $e_1,\dots,e_n$, and $*$ denotes the ordinary convolution operation

$$f * g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} f(y) g(x-y) = \sum_{y_1 + y_2 = x} f(y_1) g(y_2).$$

As is well known, this operation is commutative and associative. Thus for instance the square $A^2$ of the adjacency operator is also a convolution operator

$$A^2 f = f * (\mu * \mu)$$

where the convolution kernel $\mu * \mu$ is moderately complicated:

$$\mu * \mu(x) = n \delta_0(x) + \sum_{1 \leq i < j \leq n} 2 \delta_{e_i + e_j}(x).$$

The factor $2$ in this expansion comes from combining the two terms $\delta_{e_i} * \delta_{e_j}$ and $\delta_{e_j} * \delta_{e_i}$, which both evaluate to $\delta_{e_i + e_j}$.
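This expansion is easy to confirm numerically; the following sketch (mine, not from the post) computes $\mu * \mu$ over $({\bf Z}/2{\bf Z})^n$, with the group addition realised as XOR of bitmasks:

```python
def convolve(f, g, n):
    # Ordinary convolution on (Z/2Z)^n: f*g(x) = sum_y f(y) g(x - y);
    # subtraction and addition coincide in characteristic two, so x - y = x ^ y.
    N = 1 << n
    return [sum(f[y] * g[x ^ y] for y in range(N)) for x in range(N)]

n = 5
N = 1 << n
mu = [1 if bin(x).count("1") == 1 else 0 for x in range(N)]  # counting measure on e_1..e_n
k = convolve(mu, mu, n)
# mu * mu = n delta_0 + 2 sum_{i<j} delta_{e_i + e_j}.
for x in range(N):
    w = bin(x).count("1")
    assert k[x] == (n if x == 0 else 2 if w == 2 else 0)
```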
More generally, given any bilinear form $B: ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})^n \rightarrow {\bf Z}/2{\bf Z}$, one can define the twisted convolution

$$f *_B g(x) := \sum_{y \in ({\bf Z}/2{\bf Z})^n} (-1)^{B(y,x-y)} f(y) g(x-y) = \sum_{y_1+y_2 = x} (-1)^{B(y_1,y_2)} f(y_1) g(y_2)$$

of two functions $f, g \in \ell^2(({\bf Z}/2{\bf Z})^n)$. This operation is no longer commutative (unless $B$ is symmetric). However, it remains associative; indeed, one can easily compute that

$$(f *_B g) *_B h(x) = f *_B (g *_B h)(x) = \sum_{y_1+y_2+y_3=x} (-1)^{B(y_1,y_2) + B(y_1,y_3) + B(y_2,y_3)} f(y_1) g(y_2) h(y_3).$$

In particular, if we define the twisted convolution operator

$$A_B f := f *_B \mu$$

then the square $A_B^2$ is also a twisted convolution operator

$$A_B^2 f = f *_B (\mu *_B \mu)$$

and the twisted convolution kernel $\mu *_B \mu$ can be computed as

$$\mu *_B \mu(x) = \sum_{i=1}^n (-1)^{B(e_i,e_i)} \delta_0(x) + \sum_{1 \leq i < j \leq n} \left( (-1)^{B(e_i,e_j)} + (-1)^{B(e_j,e_i)} \right) \delta_{e_i+e_j}(x).$$

For general bilinear forms $B$, this twisted convolution kernel is just as messy as $\mu * \mu$ is. But if we take the specific bilinear form

$$B(x,y) := \sum_{1 \leq i < j \leq n} x_i y_j \ \ \ \ \ (2)$$

then $B(e_i,e_j) = 1$ for $i < j$ and $B(e_i,e_j) = 0$ for $i \geq j$, and the above twisted convolution kernel simplifies to

$$\mu *_B \mu(x) = n \delta_0(x)$$

and now $A_B^2$ is very simple:

$$A_B^2 f = n f.$$

Thus the only eigenvalues of $A_B$ are $+\sqrt{n}$ and $-\sqrt{n}$. The matrix $A_B$ is entrywise dominated by $A$ in the sense of (1), and in particular has trace zero; thus the $+\sqrt{n}$ and $-\sqrt{n}$ eigenvalues must occur with equal multiplicity, so in particular the $+\sqrt{n}$ eigenvalue occurs with multiplicity $2^{n-1}$ since the matrix has dimensions $2^n \times 2^n$. This establishes Proposition 2.
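The simplification can be verified mechanically. The sketch below (my own illustration) implements the twisted convolution operator for the form (2), assembles the $2^n \times 2^n$ matrix of $A_B$, and checks the three facts just used: $A_B^2 = n I$, entrywise domination by $A$, and trace zero:

```python
def Bform(u, v, n):
    # B(x,y) = sum_{i<j} x_i y_j mod 2, bit a of a mask encoding coordinate a+1.
    return sum(((u >> a) & 1) * ((v >> b) & 1)
               for a in range(n) for b in range(a + 1, n)) % 2

def twisted_matrix(n):
    # (A_B f)(x) = sum_y (-1)^{B(y, x-y)} f(y) mu(x-y); the (x,y) entry is the
    # coefficient of f(y), nonzero only when x and y are adjacent in Q_n.
    N = 1 << n
    return [[(-1) ** Bform(y, x ^ y, n) if bin(x ^ y).count("1") == 1 else 0
             for y in range(N)] for x in range(N)]

n = 4
N = 1 << n
M = twisted_matrix(n)
sq = [[sum(M[i][k] * M[k][j] for k in range(N)) for j in range(N)]
      for i in range(N)]
assert all(sq[i][j] == (n if i == j else 0) for i in range(N) for j in range(N))  # A_B^2 = n I
assert sum(M[i][i] for i in range(N)) == 0                                        # trace zero
assert all(abs(M[x][y]) == (1 if bin(x ^ y).count("1") == 1 else 0)
           for x in range(N) for y in range(N))                                   # |A_B| = A
```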
Remark 4 Twisted convolution $*_B$ is actually just a component of ordinary convolution, but not on the original group $({\bf Z}/2{\bf Z})^n$; instead it relates to convolution on a Heisenberg group extension of this group. More specifically, define the Heisenberg group ${\bf H}$ to be the set of pairs $(x,t) \in ({\bf Z}/2{\bf Z})^n \times ({\bf Z}/2{\bf Z})$ with group law

$$(x,t) (y,s) := (x+y, t+s+B(x,y))$$

and inverse operation

$$(x,t)^{-1} = (-x, -t + B(x,x))$$

(one can dispense with the negative signs here if desired, since we are in characteristic two). Convolution on ${\bf H}$ is defined in the usual manner: one has

$$F * G(x,t) := \sum_{(y,s) \in {\bf H}} F(y,s) G( (y,s)^{-1} (x,t) )$$

for any $F, G \in \ell^2({\bf H})$. Now if $f \in \ell^2(({\bf Z}/2{\bf Z})^n)$ is a function on the original group, we can define the lift $\tilde f \in \ell^2({\bf H})$ by the formula

$$\tilde f(x,t) := (-1)^t f(x)$$

and then by chasing all the definitions one soon verifies that

$$\tilde f * \tilde g = 2 \widetilde{f *_B g}$$

for any $f, g \in \ell^2(({\bf Z}/2{\bf Z})^n)$, thus relating twisted convolution $*_B$ on $({\bf Z}/2{\bf Z})^n$ to Heisenberg group convolution $*$ on ${\bf H}$.
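The lifted identity can also be checked by direct computation; here is a small sketch (mine, not from the post) with the Heisenberg group realised as pairs (bitmask, bit) and two arbitrary integer-valued test functions:

```python
def Bform(u, v, n):
    return sum(((u >> a) & 1) * ((v >> b) & 1)
               for a in range(n) for b in range(a + 1, n)) % 2

n, N = 3, 8
# Group law (x,t)(y,s) = (x+y, t+s+B(x,y)); no minus signs needed in char 2.
mult = lambda p, q: (p[0] ^ q[0], (p[1] + q[1] + Bform(p[0], q[0], n)) % 2)
inv = lambda p: (p[0], (p[1] + Bform(p[0], p[0], n)) % 2)

f = {x: (x * x + 1) % 7 for x in range(N)}  # arbitrary test functions
g = {x: (3 * x + 2) % 5 for x in range(N)}

# Twisted convolution f *_B g on (Z/2Z)^n.
tw = {x: sum((-1) ** Bform(y, x ^ y, n) * f[y] * g[x ^ y] for y in range(N))
      for x in range(N)}

lift = lambda h: {(x, t): (-1) ** t * h[x] for x in range(N) for t in (0, 1)}
F, G = lift(f), lift(g)
# Heisenberg convolution F*G(p) = sum_q F(q) G(q^{-1} p).
conv = {p: sum(F[q] * G[mult(inv(q), p)] for q in F) for p in F}
L = lift(tw)
assert all(conv[p] == 2 * L[p] for p in conv)  # lift(f) * lift(g) = 2 lift(f *_B g)
```

The factor $2$ comes from the sum over the central coordinate $s \in \{0,1\}$, whose two terms agree.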
Remark 5 With the twisting by the specific bilinear form $B$ given by (2), convolution by $\delta_{e_i}$ and $\delta_{e_j}$ now anticommute rather than commute for $i \neq j$. This makes the twisted convolution algebra $(\ell^2(({\bf Z}/2{\bf Z})^n), *_B)$ isomorphic to a Clifford algebra $Cl({\bf R}^n)$ (the real or complex algebra generated by formal generators $v_1,\dots,v_n$ subject to the relations

$$v_i v_j + v_j v_i = 2 \delta_{ij}$$

for $i,j = 1,\dots,n$, where $\delta_{ij}$ is the Kronecker delta) rather than the commutative algebra more familiar to abelian Fourier analysis. This connection to Clifford algebras (also observed independently by Tom Mrowka and by Daniel Matthews) may be linked to the exterior algebra interpretation of the argument in the recent preprint of Karasev mentioned above.
Remark 6 One could replace the form (2) in this argument by any other bilinear form $B'$ that obeyed the relations $B'(e_i,e_i) = 0$ and $B'(e_i,e_j) + B'(e_j,e_i) = 1$ for $i \neq j$. However, this additional level of generality does not add much; any such $B'$ will differ from $B$ by an antisymmetric form $C$ (so that $C(x,x) = 0$ for all $x$, which in characteristic two implies that $C(x,y) = C(y,x)$ for all $x, y$), and such forms can always be decomposed as $C(x,y) = C'(x,y) + C'(y,x)$, where

$$C'(x,y) := \sum_{1 \leq i < j \leq n} C(e_i,e_j) x_i y_j.$$

As such, the matrices $A_{B'}$ and $A_B$ are conjugate, with the conjugation operator being the diagonal matrix with entries $(-1)^{C'(x,x)}$ at each vertex $x$.
Remark 7 (Added later) This remark combines the two previous remarks. One can view any of the matrices $A_{B'}$ in Remark 6 as components of a single canonical matrix that is still of dimensions $2^n \times 2^n$, but takes values in the Clifford algebra $Cl({\bf R}^n)$ from Remark 5; with this "universal algebra" perspective, one no longer needs to make any arbitrary choices of form $B'$. More precisely, let $M$ denote the vector space of functions $f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n)$ from the hypercube to the Clifford algebra; as a real vector space, this is a $2^{2n}$ dimensional space, isomorphic to the direct sum of $2^n$ copies of $\ell^2(({\bf Z}/2{\bf Z})^n)$, as the Clifford algebra is itself $2^n$ dimensional. One can then define a canonical Clifford adjacency operator $D$ on this space by

$$Df(x) := \sum_{i=1}^n f(x+e_i) v_i$$

where $v_1,\dots,v_n$ are the generators of $Cl({\bf R}^n)$. This operator can either be identified with a Clifford-valued $2^n \times 2^n$ matrix, or as a real-valued $2^{2n} \times 2^{2n}$ matrix. In either case one still has the key algebraic relations $D^2 = n$ (the cross terms in the expansion of $D^2$ cancel in pairs by the anticommutation relations) and $\mathrm{tr}\, D = 0$, ensuring that when viewed as a real $2^{2n} \times 2^{2n}$ matrix, half of the eigenvalues are equal to $+\sqrt{n}$ and half equal to $-\sqrt{n}$. One can then use this matrix in place of any of the $A_{B'}$ to establish Theorem 1 (noting that Schur's test continues to work for Clifford-valued matrices because of the norm structure on $Cl({\bf R}^n)$).
To relate $D$ to the real $2^n \times 2^n$ matrices $A_{B'}$, first observe that each point $x$ in the hypercube $({\bf Z}/2{\bf Z})^n$ can be associated with a one-dimensional real subspace $\ell_x$ (i.e., a line) in the Clifford algebra $Cl({\bf R}^n)$ by the formula

$$\ell_{e_{i_1} + \dots + e_{i_k}} := \mathrm{span}( v_{i_1} \cdots v_{i_k} )$$

for any $i_1,\dots,i_k$ (note that this definition is well-defined even if the $i_1,\dots,i_k$ are out of order or contain repetitions). This can be viewed as a discrete line bundle over the hypercube. Since $\ell_{x+e_i} = \ell_x v_i$ for any $i$, we see that the $2^n$-dimensional real linear subspace $V$ of $M$ of sections of this bundle, that is to say the space of functions $f: ({\bf Z}/2{\bf Z})^n \rightarrow Cl({\bf R}^n)$ such that $f(x) \in \ell_x$ for all $x \in ({\bf Z}/2{\bf Z})^n$, is an invariant subspace of $D$. (Indeed, using the left-action of the Clifford algebra on $M$, which commutes with $D$, one can naturally identify $M$ with $Cl({\bf R}^n) \otimes V$, with the left action of $Cl({\bf R}^n)$ acting purely on the first factor and $D$ acting purely on the second factor.) Any trivialisation of this line bundle lets us interpret the restriction $D|_V$ of $D$ to $V$ as a real $2^n \times 2^n$ matrix. In particular, given one of the bilinear forms $B'$ from Remark 6, we can identify $V$ with $\ell^2(({\bf Z}/2{\bf Z})^n)$ by identifying any real function $f \in \ell^2(({\bf Z}/2{\bf Z})^n)$ with the lift $\tilde f \in V$ defined by

$$\tilde f( e_{i_1} + \dots + e_{i_k} ) := \epsilon_{B'}( e_{i_1} + \dots + e_{i_k} )\, f( e_{i_1} + \dots + e_{i_k} )\, v_{i_1} \cdots v_{i_k}$$

whenever $i_1 < \dots < i_k$, where $\epsilon_{B'}(x) \in \{-1,+1\}$ is a suitable system of signs depending on $B'$ (for the specific form (2), one can take $\epsilon_B(x) := (-1)^{k(k-1)/2}$, where $k$ is the Hamming weight of $x$). A somewhat tedious computation using the properties of $B'$ then eventually gives the intertwining identity

$$D \tilde f = \widetilde{A_{B'} f}$$

and so $A_{B'}$ is conjugate to $D|_V$.
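The Clifford computations in this remark can be spot-checked numerically. In the sketch below (my own, not from the post), elements of $Cl({\bf R}^n)$ are stored as dictionaries mapping basis blades $v_S$ (encoded as bitmasks $S$) to integer coefficients, with product signs obtained by counting transpositions; the code verifies the defining relations $v_i v_j + v_j v_i = 2\delta_{ij}$ and the identity $D^2 f = n f$ on arbitrary Clifford-valued test data:

```python
def blade_mul(S, T, n):
    # Multiply basis blades v_S v_T in Cl(R^n) with v_i^2 = 1: returns (sign, S xor T).
    sign = 1
    for t in range(n):
        if (T >> t) & 1:
            # v_t moves left past the generators of S larger than t, then cancels or joins.
            sign *= (-1) ** bin(S >> (t + 1)).count("1")
            S ^= 1 << t
    return sign, S

def cl_mul(a, b, n):
    out = {}
    for S, cs in a.items():
        for T, ct in b.items():
            sgn, R = blade_mul(S, T, n)
            out[R] = out.get(R, 0) + sgn * cs * ct
    return out

n, N = 3, 8
v = [{1 << i: 1} for i in range(n)]
for i in range(n):
    for j in range(n):
        lhs, rhs = cl_mul(v[i], v[j], n), cl_mul(v[j], v[i], n)
        total = {S: lhs.get(S, 0) + rhs.get(S, 0) for S in set(lhs) | set(rhs)}
        assert all(c == (2 if S == 0 and i == j else 0) for S, c in total.items())

f = {x: {S: (x + 2 * S + 1) % 5 for S in range(N)} for x in range(N)}  # arbitrary data
def D(h):
    # D h(x) = sum_i h(x + e_i) v_i, with v_i acting on the right.
    out = {}
    for x in range(N):
        acc = {}
        for i in range(n):
            for S, c in cl_mul(h[x ^ (1 << i)], v[i], n).items():
                acc[S] = acc.get(S, 0) + c
        out[x] = acc
    return out

Df2 = D(D(f))
assert all(Df2[x].get(S, 0) == n * f[x][S] for x in range(N) for S in range(N))
```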
— 1. The Chung, Furedi, Graham, and Seymour example —
The paper of Chung, Furedi, Graham, and Seymour gives, for any $n \geq 1$, an example of a subset $E$ of $Q_n$ of cardinality $2^{n-1}+1$ for which the maximum degree of $Q_n$ restricted to $E$ is at most $\lceil \sqrt{n} \rceil$, thus showing that Theorem 1 cannot be improved (beyond the trivial improvement of upgrading $\sqrt{n}$ to $\lceil \sqrt{n} \rceil$, because the maximum degree is obviously a natural number).
Define the "Möbius function" $\mu: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$ to be the function

$$\mu(x) := (-1)^{x_1 + \dots + x_n}$$

for $x = (x_1,\dots,x_n) \in ({\bf Z}/2{\bf Z})^n$ (identifying each $x_i$ with an element of $\{0,1\}$). This function is extremely balanced on coordinate subspaces. Indeed, from the binomial theorem (which uses the convention $0^0 = 1$) we have

$$\sum_{x \in ({\bf Z}/2{\bf Z})^n} \mu(x) = \sum_{k=0}^n \binom{n}{k} (-1)^k = (1-1)^n = 0^n.$$

More generally, given any index set $I \subseteq \{1,\dots,n\}$ of cardinality $k$, we have

$$\sum_{x \in ({\bf Z}/2{\bf Z})^n: x_i = 0 \ \forall i \in I} \mu(x) = 0^{n-k}.$$
Now let $\{1,\dots,n\} = I_1 \cup \dots \cup I_m$ be a partition of $\{1,\dots,n\}$ into $m$ disjoint non-empty sets. For each $1 \leq j \leq m$, let $V_j$ be the subspace of $({\bf Z}/2{\bf Z})^n$ consisting of those $x$ such that $x_i = 0$ for all $i \in I_j$. Then for any non-empty $J \subseteq \{1,\dots,m\}$, we have

$$\sum_{x \in \bigcap_{j \in J} V_j} \mu(x) = 0^{n - \sum_{j \in J} |I_j|}$$

and the right-hand side vanishes if $J \neq \{1,\dots,m\}$ and equals $1$ when $J = \{1,\dots,m\}$. Applying the inclusion-exclusion principle, we conclude that

$$\sum_{x \in V_1 \cup \dots \cup V_m} \mu(x) = (-1)^{m+1}$$

and thus also (assuming for notational convenience that $m$ is odd; for even $m$ one swaps the roles of $+1$ and $-1$ in what follows)

$$\sum_{x \not \in V_1 \cup \dots \cup V_m} \mu(x) = -1$$

so that

$$\# \{ x \in V_1 \cup \dots \cup V_m: \mu(x) = 1 \} + \# \{ x \not \in V_1 \cup \dots \cup V_m: \mu(x) = -1 \} = 2^{n-1} + 1.$$

Thus, if $E$ denotes the set of those $x \in V_1 \cup \dots \cup V_m$ with $\mu(x) = 1$, together with those $x \not \in V_1 \cup \dots \cup V_m$ with $\mu(x) = -1$, then $E$ has to have two more elements than its complement $({\bf Z}/2{\bf Z})^n \backslash E$, and hence $E$ has cardinality $2^{n-1}+1$.
Now observe that, if $x \in E$ with $x \in V_1 \cup \dots \cup V_m$ and $\mu(x) = 1$, then every neighbour $x + e_i$ has $\mu(x+e_i) = -1$, and so lies in $E$ only if $x + e_i \not \in V_1 \cup \dots \cup V_m$. If $x$ lies in two or more of the $V_j$, this never happens (flipping one coordinate can remove $x$ from at most one $V_j$); if $x$ lies in exactly one $V_j$, then $x + e_i \not \in V_1 \cup \dots \cup V_m$ only when $i \in I_j$. Thus in this case the total number of $i$ for which $x + e_i \in E$ is at most $|I_j|$. Conversely, if $x \in E$ with $x \not \in V_1 \cup \dots \cup V_m$ and $\mu(x) = -1$, then $x + e_i$ lies in $E$ only if $x + e_i \in V_1 \cup \dots \cup V_m$, and for each $1 \leq j \leq m$ there is at most one $i$ that will make $x + e_i$ lie in $V_j$. Hence in this case the total number of $i$ for which $x + e_i \in E$ is at most $m$. Thus the maximum degree of the subgraph of $Q_n$ induced by $E$ is at most

$$\max( m, \max_{1 \leq j \leq m} |I_j| ).$$

By choosing the $I_1,\dots,I_m$ to be a partition of $\{1,\dots,n\}$ into $m \leq \lceil \sqrt{n} \rceil$ pieces, each of cardinality at most $\lceil \sqrt{n} \rceil$, we obtain the claim.
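The construction is easy to test in a concrete case; the following sketch (my own, not from the paper) takes $n = 9$ with three blocks $I_1, I_2, I_3$ of size three, builds $E$, and confirms that $|E| = 2^8+1 = 257$ while every vertex of $E$ has at most $\sqrt{9} = 3$ neighbours in $E$:

```python
n = 9
N = 1 << n
blocks = [0b000000111, 0b000111000, 0b111000000]  # I_1, I_2, I_3 as bitmasks (m = 3, odd)
mu = lambda x: (-1) ** bin(x).count("1")
in_union = lambda x: any(x & b == 0 for b in blocks)  # x in V_1 u V_2 u V_3

E = {x for x in range(N) if (in_union(x) and mu(x) == 1)
     or (not in_union(x) and mu(x) == -1)}
assert len(E) == 2 ** (n - 1) + 1  # 257 vertices

deg = {x: sum(1 for i in range(n) if x ^ (1 << i) in E) for x in E}
assert max(deg.values()) == 3  # the ceil(sqrt(n)) bound is attained
```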
Remark 8 Suppose that $n = m^2$ is a perfect square and that the blocks $I_1,\dots,I_m$ each have cardinality exactly $m$; then the lower bound here exactly matches the upper bound in Theorem 1. In particular, the $E \times E$ minor of the matrix $A_B$ must have an eigenvector of eigenvalue $\sqrt{n}$. Such an eigenvector $u = (u_x)_{x \in E}$ can be constructed explicitly: one sets $u_x$ equal to a suitable sign $\pm 1$ on a certain family of vertices $x \in E$ lying in exactly one of the $V_j$, equal to a suitable sign times $\frac{1}{\sqrt{n}}$ on a certain family of vertices of $E$ lying outside of $V_1 \cup \dots \cup V_m$, and equal to $0$ for all other $x \in E$ (one easily verifies that vertices of the first two families do lie in $E$). Writing out the action of $A_B$ in coordinates, the eigenvector equation to be verified is

$$\sum_{i: x + e_i \in E} (-1)^{x_1 + \dots + x_{i-1}} u_{x + e_i} = \sqrt{n}\, u_x \ \ \ \ \ (5)$$

for all $x \in E$. This is checked case by case: each vertex $x$ of the first family, lying in exactly one $V_j$, has precisely the $|I_j| = \sqrt{n}$ neighbours $x + e_i$, $i \in I_j$, in $E$, and the signs can be chosen coherently so that each contributes $\frac{1}{\sqrt{n}} u_x$ to the left-hand side, giving (5) in this case; at vertices of the second family one argues similarly; and if $x \in E$ is in neither family, one checks that $u_{x+e_i}$ vanishes for every neighbour $x + e_i$ in $E$, so both sides of (5) vanish.
The same analysis works for any of the other bilinear forms $B'$ in Remark 6, after adjusting the signs appropriately. Using the Clifford-valued operator $D$ from Remark 7, the eigenfunction is cleaner to describe: it is a section of the line bundle $x \mapsto \ell_x$ from Remark 7, equal to a suitable multiple of $v_{i_1} \cdots v_{i_k}$ at each vertex $x = e_{i_1} + \dots + e_{i_k}$ in the support, with the multiple being of unit size on the first family of vertices, of size $\frac{1}{\sqrt{n}}$ on the second family, and zero elsewhere.
— 2. From induced subgraph bounds to the sensitivity conjecture —
On the hypercube $({\bf Z}/2{\bf Z})^n$, let $z_1,\dots,z_n: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$ denote the functions

$$z_i(x) := (-1)^{x_i}.$$

The monomials in $z_1,\dots,z_n$ are then the characters of $({\bf Z}/2{\bf Z})^n$, so by Fourier expansion every function $f: ({\bf Z}/2{\bf Z})^n \rightarrow {\bf R}$ can be viewed as a polynomial in the $z_1,\dots,z_n$ (with each monomial containing at most one copy of each $z_i$; higher powers of each $z_i$ are unnecessary since $z_i^2 = 1$). In particular, one can meaningfully talk about the degree $\deg(f)$ of a function $f: ({\bf Z}/2{\bf Z})^n \rightarrow {\bf R}$. Observe also that the Möbius function $\mu$ from the preceding section is just the monomial $z_1 \cdots z_n$.
Define the sensitivity $s(f)$ of a Boolean function $f: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$ to be the largest number $s$ for which there is an $x \in ({\bf Z}/2{\bf Z})^n$ such that there are at least $s$ values of $i = 1,\dots,n$ with $f(x + e_i) \neq f(x)$. Using an argument of Gotsman and Linial, we can now relate the sensitivity of a function to its degree:

Corollary 9 (Lower bound on sensitivity) For any boolean function $f: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$, one has $s(f) \geq \sqrt{\deg(f)}$.
Proof: Write $d := \deg(f)$. By permuting the indices, we may assume that $f$ contains a non-trivial multiple of the monomial $z_1 \cdots z_d$. By restricting to the subspace $\{ x: x_{d+1} = \dots = x_n = 0 \}$ (which cannot increase the sensitivity, and does not affect the coefficient of $z_1 \cdots z_d$, as no other monomial of $f$ can contribute to it), we may then assume without loss of generality that $n = d$. The Fourier coefficient of $z_1 \cdots z_n$ is just the mean value of $f$ times the Möbius function $\mu = z_1 \cdots z_n$, so this mean value is non-zero. This means that one of the sets $\{ f = \mu \}$ or $\{ f = -\mu \}$ has cardinality at least $2^{n-1}+1$. Let $E$ denote the larger of these two sets. By Theorem 1, there is an $x \in E$ such that $x + e_i \in E$ for at least $\sqrt{n}$ values of $i$; since $\mu(x+e_i) = -\mu(x)$ for all such $i$, this implies that $f(x + e_i) \neq f(x)$ for at least $\sqrt{n} = \sqrt{d}$ values of $i$, giving the claim.
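Corollary 9 can be brute-force checked in small dimension. The sketch below (mine, not from the post) runs over all $2^{2^3} = 256$ Boolean functions on $({\bf Z}/2{\bf Z})^3$, computing the degree via the Fourier (Walsh) coefficients and the sensitivity directly:

```python
from itertools import product

n, N = 3, 8

def degree(f):
    # deg(f) = largest |S| with nonzero Fourier coefficient sum_x f(x) (-1)^{x . S}.
    d = 0
    for S in range(N):
        coeff = sum(f[x] * (-1) ** bin(x & S).count("1") for x in range(N))
        if coeff != 0:
            d = max(d, bin(S).count("1"))
    return d

def sensitivity(f):
    return max(sum(1 for i in range(n) if f[x ^ (1 << i)] != f[x])
               for x in range(N))

for values in product((-1, 1), repeat=N):
    f = list(values)
    assert sensitivity(f) ** 2 >= degree(f)  # s(f) >= sqrt(deg f)
```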
The construction of Chung, Furedi, Graham, and Seymour from the previous section can be easily adapted to show that this lower bound is tight (other than the trivial improvement of replacing $\sqrt{\deg(f)}$ by $\lceil \sqrt{\deg(f)} \rceil$).
Now we need to digress on some bounds involving polynomials of one variable. We begin with an inequality of Bernstein concerning trigonometric polynomials:
Lemma 10 (Bernstein inequality) Let $f$ be a trigonometric polynomial of degree at most $n$, that is to say a complex linear combination of the $e^{ik\theta}$ for $k = -n,\dots,n$. Then

$$\sup_\theta |f'(\theta)| \leq n \sup_\theta |f(\theta)|.$$

Observe that equality holds when $f(\theta) = e^{in\theta}$ or $f(\theta) = e^{-in\theta}$. Specialising to linear combinations of $1, e^{i\theta}, \dots, e^{in\theta}$, we obtain the classical Bernstein inequality

$$\sup_{|z| = 1} |P'(z)| \leq n \sup_{|z| = 1} |P(z)|$$

for complex polynomials $P$ of degree at most $n$.
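As a quick numerical illustration (my own, not from the post), one can sample a fixed degree-$2$ trigonometric polynomial on a fine grid and confirm the inequality, with a small tolerance for discretisation error:

```python
from math import cos, sin, pi

# f(theta) = 3 cos(2 theta) - sin(theta) + 1/2: a trig polynomial of degree n = 2.
f = lambda t: 3 * cos(2 * t) - sin(t) + 0.5
fp = lambda t: -6 * sin(2 * t) - cos(t)  # its derivative

n = 2
ts = [2 * pi * k / 100000 for k in range(100000)]
max_f = max(abs(f(t)) for t in ts)
max_fp = max(abs(fp(t)) for t in ts)
assert max_fp <= n * max_f * (1 + 1e-6)  # Bernstein: sup|f'| <= n sup|f|
```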
Proof: If one is willing to lose a constant factor in this estimate, this bound can be easily established from modern Littlewood-Paley theory (see e.g., Exercise 52 of these lecture notes). Here we use an interlacing argument due to Boas. We first restrict to the case when $f$ has real coefficients. We may normalise $\sup_\theta |f(\theta)| = 1$. Let $\lambda$ be a real parameter in $(-1,1)$. The trigonometric polynomial $\sin(n\theta)$ alternately takes the values $+1$ and $-1$ at the $2n$ values $\theta_j := \frac{(2j+1)\pi}{2n}$, $j = 0,\dots,2n-1$. Thus the trigonometric polynomial $\sin(n\theta) - \lambda f(\theta)$ alternates in sign at these values, and thus by the intermediate value theorem has a zero on each of the intervals $(\theta_j, \theta_{j+1})$. On the other hand, a trigonometric polynomial of degree at most $n$ that does not vanish identically can be expressed by de Moivre's theorem as $e^{-in\theta}$ times a complex polynomial in $e^{i\theta}$ of degree at most $2n$, and thus has at most $2n$ zeroes in a period. Thus we see that $\sin(n\theta) - \lambda f(\theta)$ has exactly one (simple) zero in each $(\theta_j, \theta_{j+1})$. Furthermore, at this zero, the derivative $n \cos(n\theta) - \lambda f'(\theta)$ of this function must be positive if $\sin(n\theta)$ is increasing on this interval, and negative if $\sin(n\theta)$ is decreasing on this interval; in either case the derivative has the same sign as $\cos(n\theta)$ (which cannot vanish at the zero, as that would force $|f(\theta)| > 1$). In summary, we have shown that if $\lambda \in (-1,1)$ and $\theta$ are such that $\sin(n\theta) = \lambda f(\theta)$, then $n \cos(n\theta) - \lambda f'(\theta)$ has the same sign as $\cos(n\theta)$. By translating the function $f$, we also conclude that if $\lambda \in (-1,1)$ and $\theta, \theta_0$ are such that $\sin(n(\theta - \theta_0)) = \lambda f(\theta)$ for some $\theta_0$, then $n \cos(n(\theta-\theta_0)) - \lambda f'(\theta)$ has the same sign as $\cos(n(\theta-\theta_0))$.

If $0 < \lambda < 1$ and $\theta$ is arbitrary, then we can find $\theta_0$ such that $\sin(n(\theta-\theta_0)) = \lambda f(\theta)$ and $\cos(n(\theta-\theta_0))$ is positive, and we conclude that $\lambda f'(\theta) < n \cos(n(\theta-\theta_0)) \leq n$; sending $\lambda \rightarrow 1$, we obtain the upper bound

$$f'(\theta) \leq n.$$

A similar argument (with $\cos(n(\theta-\theta_0))$ now chosen to be negative) similarly bounds $f'(\theta) \geq -n$. This gives the claim for real-valued trigonometric polynomials $f$. (Indeed, since one can take $\cos(n(\theta-\theta_0)) = \sqrt{1 - \lambda^2 f(\theta)^2}$, this argument even gives the slightly sharper bound $f'(\theta)^2 \leq n^2 ( 1 - f(\theta)^2 )$.)

To amplify this to complex valued polynomials, we take advantage of phase rotation invariance. If $f$ is a complex trigonometric polynomial of degree at most $n$, then by applying Bernstein's inequality to the real part $\mathrm{Re} f$ (also a trigonometric polynomial of degree at most $n$) we have

$$|\mathrm{Re} f'(\theta)| \leq n \sup_\theta |f(\theta)|.$$

But then we can multiply $f$ by any complex phase $e^{i\alpha}$ and conclude that

$$|\mathrm{Re}( e^{i\alpha} f'(\theta) )| \leq n \sup_\theta |f(\theta)|.$$

Taking suprema in $\alpha$, one obtains the claim for complex polynomials $f$.
The analogue of Bernstein's inequality for the unit interval $[-1,1]$ is known as Markov's inequality for polynomials:

Lemma 11 (Markov's inequality for polynomials) Let $P$ be a polynomial of degree $n$. Then

$$\sup_{x \in [-1,1]} |P'(x)| \leq n^2 \sup_{x \in [-1,1]} |P(x)|.$$

This bound is sharp, as is seen by inspecting the Chebyshev polynomial $T_n$, defined as the unique polynomial giving the trigonometric identity

$$T_n(\cos \theta) = \cos(n\theta). \ \ \ \ \ (6)$$

Differentiating (6) using the chain rule, we see that

$$T'_n(\cos\theta) \sin \theta = n \sin(n\theta); \ \ \ \ \ (7)$$

the right-hand side of $T'_n(\cos\theta) = n \frac{\sin(n\theta)}{\sin\theta}$ approaches $n^2$ as $\theta \rightarrow 0$, demonstrating that the $n^2$ factor here is sharp.
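This sharpness computation can be replicated numerically; the sketch below (my own) builds $T_5$ from the standard recurrence $T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)$ and checks that $|T_5'|$ stays below $n^2 = 25$ on $[-1,1]$, with equality at the endpoint:

```python
def poly_mul_2x(p):  # multiply a coefficient list (increasing degree) by 2x
    return [0] + [2 * c for c in p]

def cheb(n):
    # T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}.
    a, b = [1], [0, 1]
    for _ in range(n - 1):
        nxt = poly_mul_2x(b)
        for i, c in enumerate(a):
            nxt[i] -= c
        a, b = b, nxt
    return b

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def evaluate(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

n = 5
T = cheb(n)      # T_5 = 16x^5 - 20x^3 + 5x
Tp = deriv(T)
assert T == [0, 5, 0, -20, 0, 16]
assert evaluate(Tp, 1) == n * n  # T_n'(1) = n^2 = 25
xs = [-1 + 2 * k / 10000 for k in range(10001)]
assert max(abs(evaluate(Tp, x)) for x in xs) <= n * n + 1e-9
```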
Proof: We again use an argument of Boas. We may normalise so that

$$\sup_{x \in [-1,1]} |P(x)| = 1. \ \ \ \ \ (8)$$

The function $\theta \mapsto P(\cos\theta)$ is a trigonometric polynomial of degree at most $n$, so by Bernstein's inequality and the chain rule we have

$$|P'(\cos\theta)| |\sin\theta| \leq n \ \ \ \ \ (9)$$

for all $\theta$. This already gives Markov's inequality except in the edge regions $\cos \frac{\pi}{2n} < |x| \leq 1$ (since $|\sin\theta| \geq \sin \frac{\pi}{2n} \geq \frac{1}{n}$ whenever $|\cos\theta| \leq \cos\frac{\pi}{2n}$). By reflection symmetry, it then suffices to verify Markov's inequality in the region $\cos \frac{\pi}{2n} < x \leq 1$.

From (6), the Chebyshev polynomial $T_n$ attains the values $+1, -1$ alternately at the $n+1$ different points $\cos \frac{j\pi}{n}$, $j = 0,\dots,n$. Thus, if $-1 < \lambda < 1$, the polynomial $T_n - \lambda P$ changes sign at least $n$ times on $[-1,1]$, and thus must have all $n$ zeroes inside this interval by the intermediate value theorem; furthermore, at least $n-1$ of these zeroes will lie to the left of $\cos \frac{\pi}{n}$. By Rolle's theorem, the derivative $T'_n - \lambda P'$ then has all $n-1$ of its zeroes in the interval $(-1,1)$, and at least $n-2$ of these will lie to the left of $\cos \frac{\pi}{n}$. In particular, the derivative $T'_n - \lambda P'$ can have at most one zero to the right of $\cos \frac{\pi}{n}$.

Since $T_n(1) = 1$ and $|\lambda P(1)| < 1$, $T_n - \lambda P$ is positive at $x = 1$, and hence positive as $x \rightarrow +\infty$ since there are no zeroes outside of $[-1,1]$. Thus the leading coefficient of $T_n - \lambda P$ is positive, which implies the same for its derivative $T'_n - \lambda P'$. Thus $T'_n - \lambda P'$ is positive when $x$ is sufficiently large.

From (9) one has $|\lambda P'(\cos \frac{\pi}{2n})| < \frac{n}{\sin \frac{\pi}{2n}}$, hence by (7) (which gives $T'_n(\cos\frac{\pi}{2n}) = \frac{n}{\sin \frac{\pi}{2n}}$) we see that $T'_n - \lambda P'$ is also positive at $\cos \frac{\pi}{2n}$. Thus $T'_n - \lambda P'$ cannot become negative for $x > \cos \frac{\pi}{2n}$, as this would create at least two zeroes to the right of $\cos\frac{\pi}{n}$. We conclude that in this region we have $\lambda P'(x) \leq T'_n(x)$ for all $\lambda \in (-1,1)$, and hence $|P'(x)| \leq T'_n(x)$. From (7) we have $T'_n(x) \leq n^2$ for all $x \in [-1,1]$ (since $|\sin(n\theta)| \leq n |\sin\theta|$), and the claim follows.
Remark 12 The following slightly shorter argument gives the slightly weaker bound $\sup_{x \in [-1,1]} |P'(x)| \leq \frac{\pi}{2} n^2$. We again use the normalisation (8). By two applications of Bernstein's inequality, the function $g(\theta) := P(\cos\theta)$ has first derivative bounded in magnitude by $n$, and second derivative bounded in magnitude by $n^2$. As this function also has vanishing first derivative at $\theta = 0$ and at $\theta = \pi$, we conclude the bounds

$$|g'(\theta)| \leq n^2 \min( |\theta|, |\pi - \theta| )$$

and thus by the chain rule

$$|P'(\cos\theta)| \leq n^2 \frac{\min(|\theta|, |\pi-\theta|)}{|\sin\theta|}.$$

For $0 < \theta < \pi$, one easily checks that the right-hand side is at most $\frac{\pi}{2} n^2$, giving the claim.
This implies a result of Ehlich-Zeller and of Rivlin-Cheney:

Corollary 13 (Discretised Markov inequality) Let $P$ be a polynomial of degree $d$ such that $|P(j)| \leq 1$ for all integers $j = 0, 1, \dots, N$, and such that

$$\sup_{x \in [0,N]} |P'(x)| \geq 2. \ \ \ \ \ (10)$$

Then $d \geq \sqrt{N/2}$.

Proof: We use an argument of Nisan and Szegedy. Assume for sake of contradiction that $d < \sqrt{N/2}$, so in particular $d^2/N < 1/2$. Write $M := \sup_{x \in [0,N]} |P(x)|$, and observe that every $x \in [0,N]$ lies within $\frac{1}{2}$ of some integer $j \in \{0,\dots,N\}$. From the fundamental theorem of calculus and the triangle inequality one has

$$M \leq 1 + \frac{1}{2} \sup_{x \in [0,N]} |P'(x)|.$$

By a rescaled and translated version of Markov's inequality we have

$$\sup_{x \in [0,N]} |P'(x)| \leq \frac{2 d^2}{N} M \ \ \ \ \ (11)$$

which when inserted into the preceding inequality gives after some rearranging

$$M \leq \frac{1}{1 - d^2/N} < 2,$$

and then after a second application of (11) gives

$$\sup_{x \in [0,N]} |P'(x)| < \frac{4 d^2}{N}.$$

Comparing with (10), we conclude that $2 < \frac{4d^2}{N}$, and the claim follows after some rearranging.
Nisan and Szegedy observed that this one-dimensional degree bound can be lifted to the hypercube by a symmetrisation argument:

Corollary 14 (Multidimensional Markov inequality bound) Let $f: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$ be such that $f(0) = +1$ and $f(e_i) = -1$ for $i = 1,\dots,n$. Then $\deg(f) \geq \sqrt{n/2}$.

Proof: By averaging over all permutations of the indices (which can decrease the degree of $f$, but not increase it, and does not affect the values of $f$ at $0$ or at the $e_i$), we may assume that $f$ is a symmetric function of the inputs $x_1,\dots,x_n$, now taking values in $[-1,+1]$. Using the Newton identities, we can then write

$$f(x) = P(|x|)$$

for some real polynomial $P$ of degree at most $\deg(f)$, where

$$|x| := x_1 + \dots + x_n \in \{0, 1, \dots, n\}$$

is the Hamming length of $x$. By hypothesis, $P(0) = +1$, $P(1) = -1$, and $|P(j)| \leq 1$ for $j = 0,\dots,n$, hence by the mean-value theorem $|P'(x)| \geq 2$ for some $x \in [0,1]$. Applying Corollary 13 with $N := n$, we obtain the claim.
Define the block sensitivity $bs(f)$ of a Boolean function $f: ({\bf Z}/2{\bf Z})^n \rightarrow \{-1,+1\}$ to be the largest number $k$ for which there is an $x \in ({\bf Z}/2{\bf Z})^n$ such that there are at least $k$ disjoint non-empty subsets $B_1,\dots,B_k$ of $\{1,\dots,n\}$ with $f(x + e_{B_j}) \neq f(x)$ for $j = 1,\dots,k$, where $e_B := \sum_{i \in B} e_i$. We have

Theorem 15 (Sensitivity conjecture) One has

$$s(f) \leq bs(f) \leq 2 s(f)^4.$$

More precisely, the sensitivity conjecture of Nisan and Szegedy asserted a polynomial bound $bs(f) \leq C s(f)^C$ for some absolute constant $C$; Huang's result thus gives explicit values for the exponents. It is still open whether the exponent $4$ in this theorem can be improved; it is known that one cannot improve it to below $2$, by analysing a variant of the Chung-Furedi-Graham-Seymour example (see these notes of Regan for details).

Proof: The lower bound $s(f) \leq bs(f)$ is immediate from the definitions, since the sensitivity arises by restricting the $B_1,\dots,B_k$ in the definition of block sensitivity to singleton sets. To prove the upper bound, it suffices from Corollary 9 to establish the bound

$$bs(f) \leq 2 \deg(f)^2.$$

Let $k := bs(f)$. By hypothesis, there are $x \in ({\bf Z}/2{\bf Z})^n$ and disjoint non-empty subsets $B_1,\dots,B_k$ of $\{1,\dots,n\}$ such that $f(x + e_{B_j}) \neq f(x)$ for $j = 1,\dots,k$. We may normalise $x = 0$, $f(0) = +1$, and thus $f(e_{B_j}) = -1$ for $j = 1,\dots,k$. If we then define the pullback boolean function $\tilde f: ({\bf Z}/2{\bf Z})^k \rightarrow \{-1,+1\}$ by the formula

$$\tilde f(y_1,\dots,y_k) := f( y_1 e_{B_1} + \dots + y_k e_{B_k} )$$

then it is easy to see that $\deg(\tilde f) \leq \deg(f)$, $\tilde f(0) = +1$, and $\tilde f(e_j) = -1$ for $j = 1,\dots,k$. The claim now follows from Corollary 14, which gives $k \leq 2 \deg(\tilde f)^2 \leq 2 \deg(f)^2$.
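For small $n$, the full chain of inequalities in Theorem 15 can be verified exhaustively; here is a sketch (my own) that computes $s(f)$ and $bs(f)$ for all 256 Boolean functions on $({\bf Z}/2{\bf Z})^3$:

```python
from itertools import product

n, N = 3, 8

def sensitivity(f):
    return max(sum(1 for i in range(n) if f[x ^ (1 << i)] != f[x])
               for x in range(N))

def block_sensitivity(f):
    best = 0
    for x in range(N):
        flips = [m for m in range(1, N) if f[x ^ m] != f[x]]  # sensitive blocks at x
        # Brute-force the largest pairwise-disjoint subfamily of sensitive blocks.
        for choice in product((0, 1), repeat=len(flips)):
            chosen = [m for m, c in zip(flips, choice) if c]
            union, ok = 0, True
            for m in chosen:
                if union & m:
                    ok = False
                    break
                union |= m
            if ok:
                best = max(best, len(chosen))
    return best

for values in product((-1, 1), repeat=N):
    f = list(values)
    s, bs = sensitivity(f), block_sensitivity(f)
    assert s <= bs <= 2 * s ** 4
```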
Remark 16 The following slightly shorter variant of the argument lets one remove the factor of $2$ from these bounds, giving $bs(f) \leq \deg(f)^2$ and hence $bs(f) \leq s(f)^4$. Let $k := bs(f)$ and $\tilde f$ be as above. We again may normalise $\tilde f(0) = +1$ and $\tilde f(e_j) = -1$ for $j = 1,\dots,k$. For any $\theta \in {\bf R}$, let $X_1,\dots,X_k$ be iid Bernoulli random variables that equal $1$ with probability $\sin^2 \theta$ and $0$ with probability $\cos^2 \theta$. The quantity

$$g(\theta) := {\bf E} \tilde f( X_1, \dots, X_k )$$

is a trigonometric polynomial of degree at most $2 \deg(\tilde f)$ (since each character $z_S$ has expectation $(\cos 2\theta)^{|S|}$) that is bounded in magnitude by $1$, so by two applications of Bernstein's inequality

$$|g''(\theta)| \leq 4 \deg(\tilde f)^2.$$

On the other hand, for small $\theta$, the random variable $(X_1,\dots,X_k)$ is equal to zero with probability $1 - k \theta^2 + O(\theta^4)$ and equal to each $e_j$ with probability $\theta^2 + O(\theta^4)$, with all remaining outcomes having total probability $O(\theta^4)$; hence

$$g(\theta) = 1 - 2 k \theta^2 + O(\theta^4)$$

and hence $g''(0) = -4k$. Combining these estimates we obtain $4 k \leq 4 \deg(\tilde f)^2$ and hence $bs(f) \leq \deg(f)^2$.
Remark 17 The sensitivity $s(f)$ of a Boolean function $f$ can be split as $s(f) = \max( s_+(f), s_-(f) )$, where $s_\pm(f)$ is the largest number $s$ for which there is an $x \in ({\bf Z}/2{\bf Z})^n$ with $f(x) = \pm 1$ such that there are at least $s$ values of $i = 1,\dots,n$ with $f(x+e_i) \neq f(x)$. It is not difficult to use the $k=2$ case of Remark 3 to improve Corollary 9 slightly to

$$s_+(f) s_-(f) \geq \deg(f).$$

Combining this with the previous remark, we can thus improve the upper bound in Theorem 15 slightly to

$$bs(f) \leq s_+(f)^2 s_-(f)^2 \leq s(f)^4.$$