Non-abelian combinatorics and communication complexity
Thoughts 2019-08-20
Below and here in pdf is a survey I am writing for SIGACT, due next week. Comments would be very helpful.
Finite groups provide an amazing wealth of problems of interest to complexity theory. And complexity theory also provides a useful viewpoint of group-theoretic notions, such as what it means for a group to be “far from abelian.” The general problem that we consider in this survey is that of computing a group product g_1 · g_2 ⋯ g_n over a finite group G. Several variants of this problem are considered in this survey and in the literature, including in [KMR66, Bar89, BC92, IL95, BGKL03, PRS97, Amb96, AL00, Raz00, MV13, Mil14, GVa].
Some specific, natural computational problems related to this product are, from hardest to easiest:
(1) Computing g_1 · g_2 ⋯ g_n,
(2) Deciding if g_1 · g_2 ⋯ g_n = 1, where 1 is the identity element of G, and
(3) Deciding if g_1 · g_2 ⋯ g_n = 1 under the promise that either g_1 · g_2 ⋯ g_n = 1 or g_1 · g_2 ⋯ g_n = h, for a fixed h ≠ 1.
Problem (3) is from [MV13]. The focus of this survey is on (2) and (3).
We work in the model of communication complexity [Yao79], with which we assume familiarity. For background see [KN97, RY19]. Briefly, the terms in a product will be partitioned among collaborating parties – in several ways – and we shall bound the number of bits that the parties need to exchange to solve the problem.
Organization.
We begin in Section 2 with two-party communication complexity. In Section 3 we give a streamlined proof, except for a step that is only sketched, of a result of Gowers and the author [GV15, GVb] about interleaved group products. In particular we present an alternative proof, communicated to us by Will Sawin, of a lemma from [GVa]. We then consider two models of three-party communication. In Section 4 we consider number-in-hand protocols, and we relate the communication complexity to so-called quasirandom groups [Gow08, BNP08]. In Section 6 we consider number-on-forehead protocols, and specifically the problem of separating deterministic and randomized communication. In Section 7 we give an exposition of a result by Austin [Aus16], and show that it implies a separation that matches the state-of-the-art [BDPW10] but applies to a different problem.
Some of the sections follow closely a set of lectures by the author [Vio17]; related material can also be found in the blog posts [Vioa, Viob]. One of the goals of this survey is to present this material in a more organized manner, in addition to including new material.
2 Two parties
Let G be a group and let us start by considering the following basic communication task. Alice gets an element x ∈ G and Bob gets an element y ∈ G and their goal is to check if x·y = 1. How much communication do they need? Well, x·y = 1 is equivalent to x = y^{-1}. Because Bob can compute y^{-1} without communication, this problem is just a rephrasing of the equality problem, which has a randomized protocol with constant communication. This holds for any group.
The same is true if Alice gets two elements x and x' and they need to check if x·y·x' = 1. Indeed, it is just checking equality of y and x^{-1}·x'^{-1}, and again Alice can compute the latter without communication.
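To make this concrete, here is a small Python sketch (an illustration, not part of the survey): group elements are permutations, and the one-bit keyed hash stands in for a standard public-coin randomized equality test.

```python
def mul(p, q):                 # composition of permutations: (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):                    # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def hash_bit(elem, coins):     # 1-bit hash keyed by the shared randomness 'coins'
    return hash((coins, elem)) & 1

def eq_test(a, b, reps=20):    # randomized equality test; errs rarely when a != b
    return all(hash_bit(a, r) == hash_bit(b, r) for r in range(reps))

# Checking x*y = 1 is the same as checking x = y^{-1}, and Bob can invert y locally.
def product_is_identity(x, y):
    return eq_test(x, inv(y))

x = (1, 2, 0)
print(product_is_identity(x, inv(x)))   # True: x * x^{-1} = 1
print(product_is_identity(x, x))        # False (with high probability)
```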
Things get more interesting if both Alice and Bob get two elements and they need to check if the interleaved product of the elements of Alice and Bob equals 1, that is, if x·y·x'·y' = 1, where Alice holds x, x' and Bob holds y, y'.
Now the previous transformations don't help anymore. In fact, the complexity depends on the group. If it is abelian then the elements can be reordered and the problem is equivalent to checking if (x·x')·(y·y') = 1. Again, Alice can compute x·x' without communication, and Bob can compute y·y' without communication. So this is the same problem as before and it has a constant communication protocol.
For non-abelian groups this reordering cannot be done, and the problem seems hard. This can be formalized for a class of groups that are “far from abelian” – or we can take this result as a definition of being far from abelian. One of the groups that works best in this sense is the following, first constructed by Galois in the 1830’s.
Definition 1. The special linear group SL(2,q) is the group of 2 × 2 invertible matrices over the field F_q with determinant 1.
The following result was asked in [MV13] and was proved in [GVa].
Theorem 1. Let G = SL(2,q) and let h ∈ G, h ≠ 1. Suppose Alice receives x, x' ∈ G and Bob receives y, y' ∈ G. They are promised that x·y·x'·y' either equals 1 or h. Deciding which case it is requires randomized communication Ω(log |G|).
This bound is tight as Alice can send her input, taking O(log |G|) bits. We present the proof of this theorem in the next section.
Similar results are known for other groups as well, see [GVa] and [Sha16]. For example, one group that is “between” abelian groups and SL(2,q) is the alternating group A_n, the group of even permutations of {1, …, n}.
If we work over A_n instead of SL(2,q) in Theorem 1 then the communication complexity is Ω(log log |G|) [Sha16]. The latter bound is tight [MV13]: with knowledge of h, the parties can agree on a point i ∈ {1, …, n} such that h(i) ≠ i. Hence they only need to keep track of the image of i under the partial products. This takes O(log log |G|) communication because a point of {1, …, n} can be sent with O(log n) = O(log log |G|) bits. In more detail, the protocol is as follows. First Bob sends y'(i). Then Alice sends x'(y'(i)). Then Bob sends y(x'(y'(i))), and finally Alice can check if x(y(x'(y'(i)))) equals i or h(i).
Interestingly, to decide if x·y·x'·y' = 1 without the promise, a stronger lower bound can be proved for many groups, including A_n; see Corollary 3 below.
In general, it seems an interesting open problem to try to understand for which groups Theorem 1 applies. For example, is the communication large for every quasirandom group [Gow08]?
Theorem 1 and the corresponding results for other groups also scale with the length of the product: for example, deciding if x_1·y_1·x_2·y_2 ⋯ x_n·y_n = 1 over SL(2,q) (under the promise) requires communication Ω(n log |G|), which is tight.
A strength of the above results is that they hold for any choice of h ≠ 1 in the promise. This makes them equivalent to certain mixing results, discussed below in Section 5.0.1. Next we prove two other lower bounds that do not have this property and can be obtained by reduction from disjointness. First we show that for any non-abelian group there exists an element h such that deciding if a product of length n equals 1 or h requires communication linear in the length of the product. Interestingly, the proof works for any non-abelian group. The choice of h is critical, as for some G and h the problem is easy. For example: take any group H and consider G := H × Z_2, where Z_2 is the group of integers with addition modulo 2, and h := (1_H, 1). Distinguishing between 1 and h amounts to computing the parity of (the Z_2 components of) the input, which takes constant communication.
Theorem 2. Let G be a non-abelian group. There exists h ∈ G such that the following holds. Suppose Alice receives x_1, …, x_n ∈ G and Bob receives y_1, …, y_n ∈ G. They are promised that x_1·y_1·x_2·y_2 ⋯ x_n·y_n either equals 1 or h. Deciding which case it is requires randomized communication Ω(n).
Proof. We reduce from unique set-disjointness, defined below. For the reduction we encode the And of two bits as a group product. This encoding is similar to the famous puzzle that asks to hang a picture on a wall with two nails in such a way that the picture falls if either one of the nails is removed. Since G is non-abelian, there exist a, b ∈ G such that a·b ≠ b·a, and in particular the commutator h := a·b·a^{-1}·b^{-1} is ≠ 1. We can use this fact to encode the And of two bits s and t as a^{s}·b^{t}·a^{-s}·b^{-t}, which equals h if s = t = 1 and equals 1 otherwise.
In the disjointness problem Alice and Bob get inputs x, y ∈ {0,1}^n respectively, and they wish to check if there exists an i such that x_i = y_i = 1. If you think of x and y as characteristic vectors of sets, this problem is asking if the sets have a common element or not. The communication complexity of this problem is Ω(n) [KS92, Raz92]. Moreover, in the “unique” variant of this problem where the number of such i’s is 0 or 1, the same lower bound still applies. This follows from [KS92, Raz92] – see also Proposition 3.3 in [AMS99]. For more on disjointness see the surveys [She14, CP10].
We will reduce unique disjointness to group products. For each i ∈ {1, …, n} we produce inputs for the group problem as follows: Alice maps her bit x_i to the pair of elements a^{x_i}, a^{-x_i}, and Bob maps his bit y_i to the pair b^{y_i}, b^{-y_i} (so an n-bit disjointness instance yields 2n elements per party).
The group product, interleaving Alice's and Bob's elements, becomes a^{x_1}·b^{y_1}·a^{-x_1}·b^{-y_1} ⋯ a^{x_n}·b^{y_n}·a^{-x_n}·b^{-y_n}.
If there isn't an i such that x_i = y_i = 1, then for each i the term a^{x_i}·b^{y_i}·a^{-x_i}·b^{-y_i} is 1, and thus the whole product is 1.
Otherwise, there exists a unique i such that x_i = y_i = 1, and thus the product will be h, with h arising from the i-th position and all other terms equal to 1. If Alice and Bob can check if the above product is equal to 1, they can also solve the unique set-disjointness problem, and thus the lower bound applies for the former.
We required the uniqueness property, because otherwise we might get a product h^k for some k > 1, which could be equal to 1 in some groups.
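For concreteness, the following Python sketch (an illustration, not part of the proof) carries out the encoding over the non-abelian group S_3; the choice of a and b as two non-commuting transpositions is just one possible choice.

```python
# Encode And(x_i, y_i) as a^{x_i} b^{y_i} a^{-x_i} b^{-y_i} over S_3, and multiply
# the gadgets together: the result is 1 if the bit-vectors are disjoint, and the
# commutator h != 1 if they intersect in exactly one position.
def mul(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

ID = (0, 1, 2)
a, b = (1, 0, 2), (0, 2, 1)               # two non-commuting transpositions in S_3
h = mul(mul(a, b), mul(inv(a), inv(b)))
assert h != ID                            # a and b do not commute

def pow01(g, bit):                        # g^bit for a bit in {0, 1}
    return g if bit else ID

def interleaved_product(xs, ys):
    prod = ID
    for xi, yi in zip(xs, ys):
        gadget = mul(mul(pow01(a, xi), pow01(b, yi)),
                     mul(inv(pow01(a, xi)), inv(pow01(b, yi))))
        prod = mul(prod, gadget)
    return prod

print(interleaved_product([1, 0, 1], [0, 1, 0]) == ID)   # disjoint -> True
print(interleaved_product([1, 0, 1], [1, 0, 0]) == h)    # unique intersection -> True
```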
Next we prove a result for products of length just 4; it applies to non-abelian groups of the form G = H^n and does not require the promise.
Theorem 3. Let H be a non-abelian group and consider G := H^n. Suppose Alice receives x, x' ∈ G and Bob receives y, y' ∈ G. Deciding if x·y·x'·y' = 1 requires randomized communication Ω(n).
Proof. The proof is similar to the proof of Theorem 2. We use coordinate i of G to encode bit i of the disjointness instance. If there is no intersection in the latter, the product will be 1. Otherwise, at least some coordinate of the product will be different from 1.
As a corollary we can prove a lower bound for the alternating group.
Corollary 3. Theorem 3 holds for G = A_m, with a lower bound of Ω(m).
Proof. Note that A_m contains a direct product of Ω(m) copies of a fixed non-abelian group (for example, copies of A_4 acting on disjoint 4-element subsets of {1, …, m}), and that A_4 is not abelian. Apply Theorem 3.
Theorem 3 is tight for constant-size H. We do not know if Corollary 3 is tight. The trivial upper bound is O(log |G|).
3 Proof of Theorem 1
Several related proofs of this theorem exist, see [GV15, GVa, Sha16]. As in [GVa], the proof that we present can be broken down into three steps. First we reduce the problem to a statement about conjugacy classes. Second we reduce this to a statement about trace maps. Third we prove the latter. We present the first step in a way that is similar but slightly different from the presentation in [GVa]. The second step is only sketched, but relies on classical results about SL(2,q) and can be found in [GVa]. For the third we present a proof that was communicated to us by Will Sawin. We thank him for his permission to include it here.
3.1 Step 1
We would like to rule out randomized protocols, but it is hard to reason about them directly. Instead, we are going to rule out deterministic protocols on random inputs. First, for any group element g we define the distribution D_g on quadruples (x, y, x', y'), where x, y, x' are uniformly random elements of G and y' := x'^{-1}·y^{-1}·x^{-1}·g. Note the product x·y·x'·y' of the elements in D_g is always g.
Towards a contradiction, suppose we have a randomized protocol whose acceptance probabilities on D_1 and on D_h differ by at least a constant, say 1/3.
This implies a deterministic protocol with the same gap, by fixing the randomness.
We reach a contradiction by showing that for every deterministic protocol using o(log |G|) bits of communication, the acceptance probabilities on D_1 and on D_h differ by o(1).
We start with the following standard lemma, which describes a protocol using product sets.
Lemma 4. (The set of accepted inputs of) A deterministic c-bit protocol for a function f : X × Y → {0,1} can be written as a disjoint union of at most 2^c rectangles, where a rectangle is a set of the form A × B with A ⊆ X and B ⊆ Y, and where the output of the protocol is constant.
Proof. (sketch) For every communication transcript t, let R_t be the set of inputs giving transcript t. The sets R_t are disjoint since an input gives only one transcript, and their number is at most 2^c: one for each communication transcript of the protocol. The rectangle property can be proven by induction on the protocol tree.
Next, we show that any rectangle cannot distinguish . The way we achieve this is by showing that for every the probability that is roughly the same for every , and is roughly the density of the rectangle. (Here we write for the characteristic function of the set .) Without loss of generality we set . Let have density and have density . We aim to bound above
where note the distribution of is the same as .
Because the distribution of is uniform in , the above can be rewritten as
The inequality is Cauchy-Schwarz, and the step after that is obtained by expanding the square and noting that is uniform in , so that the expectation of the term is .
Now we do several transformations to rewrite the distribution in the last expectation in a convenient form. First, right-multiplying by we can rewrite the distribution as the uniform distribution on tuples such that
The last equation is equivalent to .
We can now do a transformation setting to be to rewrite the distribution of the four-tuple as
where we use to denote a uniform element from the conjugacy class of , that is for a uniform .
Hence it is sufficient to bound
where all the variables are uniform and independent.
With a similar derivation as above, this can be rewritten as
Here each occurrence of denotes a uniform and independent conjugate. Hence it is sufficient to bound
We can now replace with Because has the same distribution of , it is sufficient to bound
For this, it is enough to show that with high probability over and , the distribution of , over the choice of the two independent conjugates, has statistical distance from uniform.
3.2 Step 2
In this step we use information on the conjugacy classes of the group to reduce the latter task to one about the equidistribution of the trace map. Let Tr be the trace map, sending a matrix in SL(2,q) to the sum of its two diagonal entries, an element of F_q.
We state the lemma that we want to show.
Lemma 5. Let g, h ∈ SL(2,q) and let u be uniform in SL(2,q). For all but O(1) values of the traces of g and h, the distribution of
Tr(u·g·u^{-1}·h)
is close to uniform over F_q in statistical distance.
To give some context, in SL(2,q) the conjugacy class of an element is essentially determined by its trace. Moreover, we can think of u·g·u^{-1} and h as generic elements in their conjugacy classes. So the lemma can be interpreted as saying that for typical g and h, taking a uniform element from the conjugacy class of g and multiplying it by h yields an element whose conjugacy class is close to uniform among the classes of SL(2,q). Using that essentially all conjugacy classes have the same size, and some of the properties of the trace map, one can show that the above lemma implies that for typical x and y the product of independent uniform conjugates of x and y is close to uniform. For more on how this fits we refer the reader to [GVa].
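As a sanity check of this picture, here is a small Python experiment (not part of the survey) that enumerates SL(2,p) for a small prime and tabulates Tr(u·g·u^{-1}·h) over all u, reading the relevant quantity as described above; for typical g and h the counts come out roughly balanced over F_p.

```python
from itertools import product

p = 7

def mul(A, B):
    return ((A[0]*B[0] + A[1]*B[2]) % p, (A[0]*B[1] + A[1]*B[3]) % p,
            (A[2]*B[0] + A[3]*B[2]) % p, (A[2]*B[1] + A[3]*B[3]) % p)

def inv(A):                      # inverse of a determinant-1 matrix (a b; c d)
    a, b, c, d = A
    return (d % p, -b % p, -c % p, a % p)

SL2 = [M for M in product(range(p), repeat=4) if (M[0]*M[3] - M[1]*M[2]) % p == 1]

def trace_histogram(g, h):
    hist = [0] * p
    for u in SL2:
        m = mul(mul(mul(u, g), inv(u)), h)
        hist[(m[0] + m[3]) % p] += 1
    return hist

g = (1, 1, 1, 2)                  # a typical element (determinant 1, trace != +-2)
h = (2, 1, 1, 1)                  # determinant 1
print(len(SL2))                   # p(p^2 - 1) = 336 for p = 7
print(trace_histogram(g, h))      # counts per trace value; roughly |SL2|/p each
```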
3.3 Step 3
We now present a proof of Lemma 5. The high-level argument of the proof is the same as in [GVa] (Lemma 5.5), but the details may be more accessible and in particular the use of the Lang-Weil theorem [LW54] from algebraic geometry is replaced by a more elementary argument. For simplicity we shall only cover the case where is prime. We will show that for all but values of , the probability over that is within of , and for the others it is at most . Summing over gives the result.
We shall consider elements whose trace is unique to the conjugacy class of . (This holds for all but conjugacy classes – see for example [GVa] for details.) This means that the distribution of is that of a uniform element in conditioned on having trace . Hence, we can write the probability that as the number of solutions in to the following three equations (divided by the size of the group, which is ):
We use the second one to remove and the first one to remove from the last equation. This gives
This is an equation in two variables. Write and and use distributivity to rewrite the equation as
At least since Lagrange it has been known how to reduce this to a Pell equation . This is done by applying an invertible affine transformation, which does not change the number of solutions. First set . Then the equation becomes
Equivalently, the cross-term has disappeared and we have
Now one can add constants to and to remove the linear terms, changing the constant term. Specifically, let and set and . The equation becomes
The linear terms disappear, the coefficients of and do not change and the equation can be rewritten as
So this is now a Pell equation
where and
For all but values of we have that is non-zero. Moreover, for all but values of the term is a non-zero polynomial in . (Specifically, for any and any such that .) So we only consider the values of that make it non-zero. Those where give solutions, which is fine. We conclude with the following lemma.
Lemma 6. Let p be an odd prime and let a, b, c ∈ F_p be non-zero. The number of solutions (x, y) ∈ F_p × F_p to the Pell equation
a·x² + b·y² = c
is within O(1) of p.
This is a basic result from algebraic geometry that can be proved from first principles.
Proof. If −b/a = s² for some s ∈ F_p, then we can replace y with y/s (a bijection, since s ≠ 0) and we can count instead the solutions to the equation a·x² − a·y² = c. Because p is odd we can set u := x + y and v := x − y, which preserves the number of solutions, and rewrite the equation as a·u·v = c. Because c ≠ 0, this has p − 1 solutions: for every non-zero u we have v = c/(a·u).
So now we can assume that −b/a ≠ s² for any s. Because the number of squares is (p+1)/2, the range of x ↦ a·x² has size (p+1)/2. Similarly, the range of y ↦ c − b·y² also has size (p+1)/2. Hence these two ranges intersect, and there is a solution (x₀, y₀).
We take a line passing through (x₀, y₀): for parameters t and w we consider pairs (x₀ + t·w, y₀ + w). There is a bijection between such pairs with w ≠ 0 and the points (x, y) with y ≠ y₀. Because the number of solutions with y = y₀ is at most 2, using that a ≠ 0, it suffices to count the solutions with w ≠ 0.
The intuition is that this line has two intersections with the curve a·x² + b·y² = c. Because one of them, (x₀, y₀), lies in F_p × F_p, the other has to lie there as well. Algebraically, we can plug the pair (x₀ + t·w, y₀ + w) in the expression a·x² + b·y² − c to obtain the equivalent equation
a·(x₀ + t·w)² + b·(y₀ + w)² − c = 0.
Using that (x₀, y₀) is a solution this becomes
2·a·x₀·t·w + a·t²·w² + 2·b·y₀·w + b·w² = 0.
We can divide by w, obtaining
2·a·x₀·t + a·t²·w + 2·b·y₀ + b·w = 0.
We can now divide by a·t² + b, which is non-zero by the assumption that −b/a is not a square. This yields
w = −2·(a·x₀·t + b·y₀)/(a·t² + b).
Hence for every value of t there is a unique w giving a solution. This gives p solutions, within an additive constant.
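The count in Lemma 6 is easy to confirm by brute force; in fact it is exactly p − 1 or p + 1, according to whether −b/a is a square. The following Python snippet (an illustration, using the a·x² + b·y² = c normal form described above) checks a few instances.

```python
# Brute-force count of points on the conic a*x^2 + b*y^2 = c over F_p.
def count_solutions(a, b, c, p):
    return sum(1 for x in range(p) for y in range(p)
               if (a * x * x + b * y * y - c) % p == 0)

def is_square(t, p):
    return any((s * s - t) % p == 0 for s in range(p))

p = 101
for (a, b, c) in [(1, 1, 1), (1, 3, 5), (2, 7, 9)]:
    n = count_solutions(a, b, c, p)
    square = is_square((-b * pow(a, -1, p)) % p, p)
    print(a, b, c, n, "(-b/a square)" if square else "(-b/a non-square)")
    assert abs(n - p) <= 1          # p - 1 in the square case, p + 1 otherwise
```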
4 Three parties, number-in-hand
In this section we consider the following three-party number-in-hand problem: Alice gets x ∈ G, Bob gets y ∈ G, Charlie gets z ∈ G, and they want to know if x·y·z = 1. The communication depends on the group G. We present next two efficient protocols for abelian groups, and then a communication lower bound for other groups.
4.1 A randomized protocol for the hypercube
We begin with the simplest setting. Let G = (Z_2)^n, that is, n-bit strings with bit-wise addition modulo 2. The parties want to check if x + y + z = 0. They can do so as follows. First, they pick a hash function h that is linear: h(x + y) = h(x) + h(y). Specifically, for a uniformly random a ∈ {0,1}^n define h_a(x) := Σ_i a_i·x_i mod 2. Then, the protocol is as follows.
- Alice sends h_a(x),
- Bob sends h_a(y),
- Charlie accepts if and only if h_a(x) + h_a(y) + h_a(z) = 0.
The hash function outputs 1 bit, so the communication is constant. By linearity, the protocol accepts iff h_a(x + y + z) = 0. If x + y + z = 0 this is always the case, otherwise it happens with probability 1/2.
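A minimal Python simulation of this protocol (illustrative only) follows.

```python
import random

def inner(a, x):                                     # <a, x> mod 2
    return sum(ai & xi for ai, xi in zip(a, x)) & 1

def protocol(x, y, z, n):
    a = [random.randrange(2) for _ in range(n)]      # public randomness
    alice_bit = inner(a, x)                          # 1 bit from Alice
    bob_bit = inner(a, y)                            # 1 bit from Bob
    return (alice_bit ^ bob_bit ^ inner(a, z)) == 0  # Charlie's test

n = 16
x = [random.randrange(2) for _ in range(n)]
y = [random.randrange(2) for _ in range(n)]
z_good = [xi ^ yi for xi, yi in zip(x, y)]           # x + y + z = 0
z_bad = list(z_good); z_bad[0] ^= 1
print(protocol(x, y, z_good, n))                     # always True
print(sum(protocol(x, y, z_bad, n) for _ in range(1000)))  # about 500: accepts w.p. 1/2
```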
4.2 A randomized protocol for Z_N
This protocol is from [Vio14]. For simplicity we only consider the case N = 2^n here – the protocol for general N is in [Vio14]. Again, the parties want to check if x + y + z = 0 mod N. For this group, there is no 100% linear hash function but there are almost linear hash functions h that satisfy the following properties. Note that the inputs to h are interpreted modulo 2^n and the outputs modulo 2^ℓ.
- for all there is such that ,
- for all we have ,
- .
Assuming some random hash function that satisfies the above properties the protocol works similarly to the previous one:
- Alice sends ,
- Bob sends ,
- Charlie accepts if and only if .
We can set to achieve constant communication and constant error.
To prove correctness of the protocol, first note that for some . Then consider the following two cases:
- if then and the protocol is always correct.
- if then the probability that for some is at most the probability that which is ; so the protocol is correct with high probability.
The hash function.
For the hash function we can use a function analyzed in [DHKP97]. Let a be a random odd number modulo 2^n. Define
h_a(x) := (a·x >> (n − ℓ)) mod 2^ℓ,
where the product a·x is integer multiplication, and >> is bit-shift. In other words we output bits n − ℓ, …, n − 1 of the integer product a·x.
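Here is a small Python sketch of this hash family (the bit positions follow the description above; it is an illustration rather than the exact variant analyzed in [DHKP97] or [Vio14]). The empirical check at the end displays the "almost linear" behaviour: hashing a sum differs from the sum of the hashes by at most a single carry.

```python
import random

n, ell = 32, 4

def hash_family():
    a = random.randrange(1, 1 << n, 2)               # random odd multiplier
    return lambda x: ((a * x) >> (n - ell)) & ((1 << ell) - 1)

h = hash_family()
offsets = set()
for _ in range(10000):
    x, y = random.randrange(1 << n), random.randrange(1 << n)
    diff = (h((x + y) % (1 << n)) - h(x) - h(y)) % (1 << ell)
    offsets.add(diff)
print(sorted(offsets))    # typically [0, 1]: the only deviation is the carry into bit n - ell
```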
We now verify that the above hash function family satisfies the three properties we required above.
Property (3) is trivially satisfied.
For property (1) we have the following. Let and and . To recap, by definition we have:
- ,
- .
Notice that if in the addition the carry into the bit is , then
otherwise
which concludes the proof for property (1).
Finally, we prove property (2). We start by writing where is odd. So the binary representation of looks like
The binary representation of the product for a uniformly random looks like
We consider the two following cases for the product :
- If , or equivalently , the output never lands in the bad set ;
- Otherwise, the hash function output has uniform bits. For any set , the probability that the output lands in is at most .
4.3 Quasirandom groups
What happens in other groups? The hash function used in the previous result was fairly non-trivial. Do we have an almost linear hash function for 2 × 2 matrices? The answer is negative. For groups such as A_n and SL(2,q) the problem is hard, even under the promise. For a group G the complexity can be expressed in terms of a parameter d which comes from representation theory. We will not formally define this parameter here, but several qualitatively equivalent formulations can be found in [Gow08]. Instead the following table shows the d’s for the groups we’ve introduced.
- G abelian: d = 1,
- G = A_n: d = Θ(n),
- G = SL(2,q): d = Θ(q) = Θ(|G|^{1/3}).
Theorem 1. Let G be a group, and let h ∈ G, h ≠ 1. Let d be the minimum dimension of any non-trivial irreducible representation of G. Suppose Alice, Bob, and Charlie receive x, y, and z in G, respectively. They are promised that x·y·z either equals 1 or h. Deciding which case it is requires randomized communication complexity Ω(log d).
This result is tight for the groups we have discussed so far. The arguments are the same as before. Specifically, for SL(2,q) the communication is Θ(log |G|). This is tight up to constants, because Alice and Bob can send their elements. For A_n the communication is Θ(log log |G|). This is tight as well, as the parties can again just communicate the images of a point i such that h(i) ≠ i, as discussed in Section 2. This also gives a computational proof that d cannot be too large for A_n, i.e., it is at most polynomial in n. For abelian groups we get nothing, matching the efficient protocols given above.
5 Proof of Theorem 1
First we discuss several “mixing” lemmas for groups, then we come back to protocols and see how to apply one of them there.
5.0.1 mixing
We want to consider “high entropy” distributions over G, and state a fact showing that the multiplication of two such distributions “mixes”, or in other words increases the entropy. To define entropy we use the norms ||X||_p := (Σ_{g ∈ G} Pr[X = g]^p)^{1/p}. Our notion of (non-)entropy will be ||X||_2². Note that ||X||_2² is exactly the collision probability Pr[X = X'], where X' is independent and identically distributed to X. The smaller this quantity, the higher the entropy of X. For the uniform distribution U we have ||U||_2² = 1/|G|, and so we can think of 1/|G| as maximum entropy. If X is uniform over Ω(|G|) elements, we have ||X||_2² = O(1/|G|) and we think of X as having “high” entropy.
Because the (non-)entropy of U is as small as possible, we can think of the distance between X and U in the 2-norm as being essentially the (non-)entropy of X: ||X − U||_2² = ||X||_2² − 1/|G| ≤ ||X||_2².
Lemma 7. [Gow08, BNP08] If X, Y are independent distributions over G, then
||X·Y − U||_2 ≤ ||X||_2 · ||Y||_2 · (|G|/d)^{1/2},
where d is the minimum dimension of a non-trivial irreducible representation of G.
By this lemma, for high-entropy distributions X and Y, we get ||X·Y − U||_2 ≤ O(1/(|G|·d)^{1/2}). The gain of the factor d allows us to pass to statistical distance using Cauchy–Schwarz:
|X·Y − U|_1 ≤ |G|^{1/2} · ||X·Y − U||_2 ≤ O(1/d^{1/2}).
This is the way in which we will use the lemma.
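To see the mixing phenomenon numerically, here is a small Python experiment (illustrative only; it does not compute the parameter d) over the quasirandom group A_5: the product of two spread-out distributions is much closer to uniform than the factors are.

```python
import itertools, random

def mul(p, q):
    return tuple(p[q[i]] for i in range(5))

def sign(p):
    s = 1
    for i in range(5):
        for j in range(i + 1, 5):
            if p[i] > p[j]:
                s = -s
    return s

A5 = [p for p in itertools.permutations(range(5)) if sign(p) == 1]   # 60 elements
uniform = {g: 1.0 / len(A5) for g in A5}

def random_flat_dist():            # uniform over a random half of the group
    support = set(random.sample(A5, len(A5) // 2))
    return {g: (1.0 / len(support) if g in support else 0.0) for g in A5}

def convolve(P, Q):                # distribution of X*Y for independent X ~ P, Y ~ Q
    R = {g: 0.0 for g in A5}
    for x, px in P.items():
        for y, qy in Q.items():
            R[mul(x, y)] += px * qy
    return R

def dist_to_uniform(P):            # statistical distance to uniform
    return 0.5 * sum(abs(P[g] - uniform[g]) for g in A5)

X, Y = random_flat_dist(), random_flat_dist()
print(dist_to_uniform(X), dist_to_uniform(Y))     # 1/2 each
print(dist_to_uniform(convolve(X, Y)))            # noticeably smaller
```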
Another useful consequence of this lemma, which however we will not use directly, is this. Suppose now you have three independent, high-entropy variables X, Y, Z. Then for every fixed g ∈ G we have
|Pr[X·Y·Z = g] − 1/|G|| ≤ ||X||_2 · ||Y||_2 · ||Z||_2 · (|G|/d)^{1/2}.
To show this, set without loss of generality g = 1 and rewrite the left-hand side as
|Σ_{z ∈ G} Pr[Z = z]·(Pr[X·Y = z^{-1}] − 1/|G|)|.
By Cauchy–Schwarz this is at most
||Z||_2 · ||X·Y − U||_2,
and we can conclude by Lemma 7. Hence the product of three high-entropy distributions is close to uniform in a point-wise sense: each group element is obtained with probability roughly 1/|G|.
At least over SL(2,q), there exists an alternative proof of this fact that does not mention representation theory (see [GVa] and [Vioa, Viob]).
With this notation in hand, we conclude by stating a “mixing” version of Theorem 2. For more on this perspective we refer the reader to [GVa].
Theorem 1. Let . Let and be two distributions over . Suppose is independent from . Let . We have
For example, when and have high entropy over (that is, are uniform over pairs), we have , and so . In particular, is close to uniform over in statistical distance.
5.0.2 Back to protocols
As in the beginning of Section 3, for any group element g we define the distribution D_g on triples (x, y, z), where x and y are uniform and independent and z := (x·y)^{-1}·g. Note the product x·y·z of the elements in D_g is always g. Again as in Section 3, it suffices to show that for every deterministic protocol using little communication, the acceptance probabilities on D_1 and on D_h are close.
Analogously to Lemma 4, the following lemma describes a protocol using rectangles. The proof is nearly identical and is omitted.
Lemma 8. (The set of accepted inputs of) A deterministic c-bit number-in-hand protocol with three parties can be written as a disjoint union of at most 2^c “rectangles,” that is, sets of the form A × B × C with A, B, C ⊆ G.
Next we show that these product sets cannot distinguish the two distributions D_1, D_h, via a straightforward application of Lemma 7.
Lemma 9. For every rectangle A × B × C ⊆ G³ and every h ∈ G we have |Pr[D_1 ∈ A × B × C] − Pr[D_h ∈ A × B × C]| ≤ O(1/d^{1/4}).
Proof. Pick any g ∈ G and let x, y, and z be the inputs of Alice, Bob, and Charlie respectively. Then
Pr[D_g ∈ A × B × C] = Pr[x ∈ A] · Pr[y ∈ B] · Pr[(x·y)^{-1}·g ∈ C | x ∈ A, y ∈ B],
where (x, y) is uniform in G². If either A or B is small, that is Pr[x ∈ A] ≤ ε or Pr[y ∈ B] ≤ ε, then also Pr[D_g ∈ A × B × C] ≤ ε. This holds for every g, so we also have |Pr[D_1 ∈ A × B × C] − Pr[D_h ∈ A × B × C]| ≤ ε. We will choose ε later.
Otherwise, A and B are large: Pr[x ∈ A] ≥ ε and Pr[y ∈ B] ≥ ε. Let X be the distribution of x conditioned on x ∈ A, and Y the distribution of y conditioned on y ∈ B. We have that X and Y are independent and each is uniform over at least ε·|G| elements. By Lemma 7 this implies ||X·Y − U||_2 ≤ (|G|/d)^{1/2}·||X||_2·||Y||_2, where U is the uniform distribution. As mentioned after the lemma, by Cauchy–Schwarz we obtain
|X·Y − U|_1 ≤ |G|^{1/2} · ||X·Y − U||_2 ≤ (|G|²/d)^{1/2} · ||X||_2 · ||Y||_2 ≤ 1/(ε·d^{1/2}),
where the last inequality follows from the fact that ||X||_2² ≤ 1/(ε·|G|) and ||Y||_2² ≤ 1/(ε·|G|).
This implies that |(X·Y)^{-1} − U|_1 ≤ 1/(ε·d^{1/2}) and |(X·Y)^{-1}·h − U|_1 ≤ 1/(ε·d^{1/2}), because taking inverses and multiplying by a fixed element does not change the distance to uniform. These two last inequalities imply that
|Pr[(X·Y)^{-1} ∈ C] − Pr[(X·Y)^{-1}·h ∈ C]| ≤ 2/(ε·d^{1/2}),
and thus we get that
|Pr[D_1 ∈ A × B × C] − Pr[D_h ∈ A × B × C]| ≤ 2/(ε·d^{1/2}).
Picking ε = 1/d^{1/4} completes the proof.
Returning to arbitrary deterministic protocols (as opposed to rectangles), write the accepted inputs of a c-bit protocol as a union of at most 2^c disjoint rectangles by Lemma 8. Applying Lemma 9 and summing over all rectangles we get that the distinguishing advantage of the protocol is at most 2^c · O(1/d^{1/4}). For c ≤ (log d)/8 the advantage is at most O(1/d^{1/8}) = o(1), concluding the proof.
6 Three parties, number-on-forehead
In number-on-forehead (NOF) communication complexity [CFL83] with k parties, the input is a k-tuple (x_1, …, x_k) and party i sees all of it except x_i. For background, it is not known how to prove negative results for k ≥ log n parties.
We mention that Theorem 1 can be extended to the multiparty setting, see [GVa]. Several questions arise here, such as whether this problem remains hard for , and what is the minimum length of an interleaved product that is hard for parties (the proof in 1 gives a large constant).
However in this survey we shall instead focus on the problem of separating deterministic and randomized communication. For k = 2, we know the optimal separation: the equality function requires Ω(n) communication for deterministic protocols, but can be solved using O(1) communication if we allow the protocols to use public coins. For k = 3, the best known separation between deterministic and randomized protocols is Ω(log n) vs. O(1) [BDPW10]. In the following we give a new proof of this result, for a different function: f(x, y, z) = 1 if and only if x·y·z = 1 for x, y, z ∈ SL(2,q). As is true for some functions in [BDPW10], a stronger separation could hold for f. For context, let us state and prove the upper bound for randomized communication.
Proof. In the number-on-forehead model, computing f reduces to two-party equality with no additional communication: Alice, who sees y and z, computes (y·z)^{-1} privately, and then Alice and Bob check if (y·z)^{-1} = x, which Bob sees.
To prove the lower bound for deterministic protocols we reduce the communication problem to a combinatorial problem.
A corner in G × G is a triple of points of the form (x, y), (xz, y), (x, zy) for some z ≠ 1. For intuition, if G is the abelian group of real numbers with addition, a corner becomes (x, y), (x + z, y), (x, y + z) for z ≠ 0, which are the coordinates of an isosceles triangle. We now state the result that connects corners and lower bounds.
Lemma 12. Let G be a group and δ a real number. Suppose that every subset A ⊆ G × G with |A|/|G × G| ≥ δ contains a corner. Then the deterministic communication complexity of f (defined as f(x, y, z) = 1 iff x·y·z = 1) is Ω(log(1/δ)).
It is known that δ ≥ 1/(log log |G|)^{Ω(1)} implies a corner for certain abelian groups G, see [LM07] for the best bound and pointers to the history of the problem. For G = SL(2,q) a stronger result is known: δ ≥ 1/(log |G|)^{Ω(1)} implies a corner [Aus16]. This in turn implies communication Ω(log log |G|), which is Ω(log n) since each input is n = Θ(log |G|) bits.
Proof. We saw already twice that a number-in-hand c-bit protocol can be written as a disjoint union of at most 2^c rectangles (Lemmas 4, 8). Likewise, a number-on-forehead c-bit protocol can be written as a disjoint union of at most 2^c cylinder intersections, that is, sets of the form
{(x, y, z) : f_1(y, z)·f_2(x, z)·f_3(x, y) = 1}
for some f_1, f_2, f_3 : G² → {0, 1}.
The proof idea of the above fact is to consider the 2^c transcripts of the protocol; then one can see that the inputs giving a fixed transcript are a cylinder intersection.
Let P be a c-bit protocol for f. Consider the inputs (x, y, (x·y)^{-1}), on all of which P accepts. Note that at least a 2^{-c} fraction of them are accepted by some fixed cylinder intersection C. Let A := {(x, y) : (x, y, (x·y)^{-1}) is accepted by C}. Since the first two elements in the tuple determine the last, we have |A|/|G × G| ≥ 2^{-c}.
Now suppose A contains a corner (x, y), (xz, y), (x, zy) with z ≠ 1. Then the input (x, y, y^{-1}·z^{-1}·x^{-1}) is also accepted by C: the pair (x, y) appears in the accepted input coming from (x, y) ∈ A, the pair (y, y^{-1}·z^{-1}·x^{-1}) appears in the accepted input coming from (xz, y) ∈ A, and the pair (x, y^{-1}·z^{-1}·x^{-1}) appears in the accepted input coming from (x, zy) ∈ A.
This implies that the protocol accepts an input whose product is x·y·y^{-1}·z^{-1}·x^{-1} = x·z^{-1}·x^{-1}, which is a contradiction because z ≠ 1 and so x·z^{-1}·x^{-1} ≠ 1.
7 The corners theorem for quasirandom groups
In this section we prove the corners theorem for quasirandom groups, following Austin [Aus16]. Our exposition has several minor differences with that in [Aus16], which may make it more computer-science friendly. Possibly a proof can also be obtained via certain local modifications and simplifications of Green’s exposition [Gre05b, Gre05a] of an earlier proof for the abelian case. We focus on the case G = SL(2,q) for simplicity, but the proof immediately extends to other quasirandom groups (with corresponding parameters).
Theorem 1. Let G = SL(2,q). Every subset A ⊆ G × G of density |A|/|G|² ≥ 1/(log |G|)^{a}, for a small enough constant a > 0, contains a corner (x, y), (xz, y), (x, zy) with z ≠ 1.
7.1 Proof idea
For intuition, suppose A is a product set, i.e., A = B × C for B, C ⊆ G. Let’s look at the quantity
E_{x,y,z}[1_A(x, y) · 1_A(xz, y) · 1_A(x, zy)],
where x, y, z are uniform in G and 1_A(x, y) = 1 iff (x, y) ∈ A. Note that the random variable in the expectation is equal to 1 exactly when (x, y), (xz, y), (x, zy) form a corner in A. We’ll show that this quantity is greater than 1/|G|, which implies that A contains a corner (one where z ≠ 1). Since we are taking x, y, z uniform, we can rewrite the above quantity as
E_{x,y,z}[1_A(x, y) · 1_A(z, y) · 1_A(x, x^{-1}·z·y)],
where the last line follows by replacing z with x^{-1}·z in the uniform distribution. If A = B × C, this equals
E_{x,y,z}[1_B(x) · 1_C(y) · 1_B(z) · 1_C(x^{-1}·z·y)].
Suppose both |B|/|G| and |C|/|G| are at least δ. Condition on x ∈ B, y ∈ C, z ∈ B. Then the distribution of (x, z, y) is a product of three independent distributions, each uniform on a set of density at least δ. (In fact, two distributions would suffice for this.) By Lemma 7, x^{-1}·z·y is then close to uniform in statistical distance. This implies that the above expectation equals at least
δ³ · (δ − |G|^{-Ω(1)}/δ) ≥ δ⁴/2 > 1/|G|
for δ ≥ 1/|G|^{c} for a small enough constant c. Hence, product sets of polynomially small density contain corners.
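The following brute-force Python illustration (not part of the proof) computes this counting quantity for a random product set in a small group and also exhibits a corner; the group A_5 and the densities are arbitrary choices.

```python
import itertools, random

def mul(p, q):
    return tuple(p[q[i]] for i in range(5))

def sign(p):
    s = 1
    for i in range(5):
        for j in range(i + 1, 5):
            if p[i] > p[j]:
                s = -s
    return s

G = [p for p in itertools.permutations(range(5)) if sign(p) == 1]   # A_5, 60 elements
ID = tuple(range(5))

B = set(random.sample(G, len(G) // 2))
C = set(random.sample(G, len(G) // 2))
A = {(x, y) for x in B for y in C}                                   # a product set

def corner_average():              # E_{x,y,z}[1_A(x,y) 1_A(xz,y) 1_A(x,zy)]
    hits = 0
    for x, y, z in itertools.product(G, repeat=3):
        if (x, y) in A and (mul(x, z), y) in A and (x, mul(z, y)) in A:
            hits += 1
    return hits / len(G) ** 3

def find_corner():                 # a corner with z != 1, if one exists
    for x, y, z in itertools.product(G, repeat=3):
        if z != ID and (x, y) in A and (mul(x, z), y) in A and (x, mul(z, y)) in A:
            return x, y, z

dB, dC = len(B) / len(G), len(C) / len(G)
print(corner_average(), (dB * dC) ** 2)   # the average is close to the heuristic prediction
print(find_corner() is not None)          # True: the dense product set has corners
```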
Given the above, it is natural to try to decompose an arbitrary set into product sets. We will make use of a more general result.
7.2 Weak Regularity Lemma
Let Ω be some universe (we will take Ω = G × G) and let f : Ω → [0, 1] be a function (for us, f = 1_A). Let D be some set of functions d : Ω → [0, 1], which can be thought of as “easy functions” or “distinguishers” (these will be rectangles or closely related to them). The next lemma shows how to decompose f into a linear combination of the distinguishers, up to an error which decays polynomially in the length of the combination. More specifically, f will be indistinguishable from the combination by the distinguishers in D.
Lemma 13. Let f : Ω → [0, 1] be a function and D a set of functions as above. For all ε > 0, there exists a function g := Σ_{i ≤ k} c_i·d_i, where d_i ∈ D, the c_i are reals, and k = O(1/ε²), such that for all d ∈ D
|E_x[(f(x) − g(x)) · d(x)]| ≤ ε.
A different way to state the conclusion, which we will use, is to say that we can write f = g + e so that |E[e·d]| ≤ ε for every d ∈ D.
The lemma is due to Frieze and Kannan [FK96]. It is called “weak” because it came after Szemerédi’s regularity lemma, which has a stronger distinguishing conclusion. However, the lemma is also “strong” in the sense that Szemerédi’s regularity lemma has k equal to a tower of exponentials of height poly(1/ε), whereas here k is polynomial in 1/ε. The weak regularity lemma is also simpler. There also exists a proof [Tao17] of Szemerédi’s theorem (on arithmetic progressions) which uses weak regularity as opposed to the full regularity lemma used initially.
Proof. We will construct the approximation through an iterative process producing functions g_0, g_1, …. We will show that ||f − g_i||_2² decreases by at least ε² at each iteration.
Start: Define g_0 := 0 (which can be realized setting k = 0).
Iterate: If not done, there exists d ∈ D such that |E[(f − g_i)·d]| > ε. Assume without loss of generality E[(f − g_i)·d] > ε.
Update: g_{i+1} := g_i + λ·d, where λ shall be picked later.
Let us analyze the progress made by the algorithm.
||f − g_{i+1}||_2² = ||f − g_i − λ·d||_2² = ||f − g_i||_2² − 2·λ·E[(f − g_i)·d] + λ²·||d||_2² ≤ ||f − g_i||_2² − 2·λ·ε + λ² ≤ ||f − g_i||_2² − ε²,
where the last line follows by taking λ := ε and using ||d||_2² ≤ 1. Therefore, there can only be O(1/ε²) iterations, because ||f − g_0||_2² = ||f||_2² ≤ 1.
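The argument above translates directly into code. The following Python sketch (illustrative; the universe, the distinguisher family, and ε are arbitrary small choices) runs the greedy iteration and reports the number of steps, which is bounded by O(1/ε²) regardless of f.

```python
import itertools, random

N = 6
U = [(i, j) for i in range(N) for j in range(N)]
f = {u: float(random.randint(0, 1)) for u in U}              # the function to approximate

intervals = [set(range(a, b)) for a in range(N) for b in range(a + 1, N + 1)]
D = [{(i, j): 1.0 if (i in S and j in T) else 0.0 for (i, j) in U}
     for S, T in itertools.product(intervals, repeat=2)]     # a small family of rectangles

def corr(err, d):                                            # E[(f - g) * d]
    return sum(err[u] * d[u] for u in U) / len(U)

def weak_regularity(f, D, eps):
    g = {u: 0.0 for u in U}
    steps = 0
    while True:
        err = {u: f[u] - g[u] for u in U}
        best = max(D, key=lambda d: abs(corr(err, d)))
        c = corr(err, best)
        if abs(c) <= eps:                                    # no distinguisher correlates
            return g, steps
        lam = eps if c > 0 else -eps                         # step size lambda = +/- eps
        for u in U:
            g[u] += lam * best[u]
        steps += 1

g, steps = weak_regularity(f, D, eps=0.1)
print(steps)                       # at most O(1/eps^2) iterations, as in the proof
```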
7.3 Getting more for rectangles
Returning to the main proof, we will use the weak regularity lemma to approximate the indicator function 1_A, for arbitrary A ⊆ G × G, by rectangles. That is, we take D to be the collection of indicator functions of all sets of the form S × T with S, T ⊆ G. The weak regularity lemma shows how to decompose 1_A into a linear combination of rectangles. These rectangles may overlap. However, we ideally want 1_A to be approximated by a linear combination of non-overlapping rectangles. In other words, we want a partition into rectangles. It is possible to achieve this at the price of exponentiating the number of rectangles. Note that an exponential loss is necessary even if T = G in every rectangle, or in other words in the uni-dimensional setting. This is one step where the terminology “rectangle” may be misleading – the set S is not necessarily an interval. If it was, a polynomial rather than exponential blow-up would have sufficed to remove overlaps.
Claim 14. Given a decomposition of 1_A into a linear combination of k rectangles from the weak regularity lemma, there exists a decomposition into a linear combination of 2^{O(k)} rectangles which don’t overlap.
Proof. Exercise.
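One possible way to do the exercise (a sketch, assuming the decomposition is given as a list of weighted rectangles): refine along the at most 2^k membership patterns on each side. Every resulting cell is a rectangle, the cells do not overlap, there are at most 2^k · 2^k of them, and the refined decomposition computes the same function at every point.

```python
from itertools import product

def atoms(sets, universe):
    """Partition 'universe' by the pattern of membership in 'sets' (at most 2^k parts)."""
    parts = {}
    for u in universe:
        signature = tuple(u in s for s in sets)
        parts.setdefault(signature, set()).add(u)
    return list(parts.values())

def make_partition(rectangles, rows, cols):
    """rectangles: list of (S, T, coefficient).  Returns non-overlapping weighted cells."""
    row_atoms = atoms([S for S, _, _ in rectangles], rows)
    col_atoms = atoms([T for _, T, _ in rectangles], cols)
    cells = []
    for R, C in product(row_atoms, col_atoms):
        r, c = next(iter(R)), next(iter(C))        # any representative of the cell
        weight = sum(coef for S, T, coef in rectangles if r in S and c in T)
        cells.append((R, C, weight))
    return cells

rows = cols = set(range(6))
rects = [({0, 1, 2}, {0, 1}, 0.5), ({1, 2, 3}, {1, 2, 3}, -0.25), ({4, 5}, {0, 5}, 1.0)]
cells = make_partition(rects, rows, cols)
print(len(cells))                                   # at most 2^k * 2^k cells
# The refined decomposition computes the same function at every point:
for i, j in product(rows, cols):
    old = sum(c for S, T, c in rects if i in S and j in T)
    new = sum(w for R, C, w in cells if i in R and j in C)
    assert abs(old - new) < 1e-9
```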
In the above decomposition, note that it is natural to take the coefficients of rectangles to be the density of points in that are in the rectangle. This gives rise to the following claim.
Claim 15. The weights of the rectangles in the above claim can be taken to be the average of 1_A in the rectangle, at the cost of doubling the error.
Consequently, we can write 1_A = g + e, where g is a sum of 2^{O(1/ε²)} non-overlapping rectangles with coefficients equal to the average of 1_A on each rectangle, and |E[e·d]| ≤ 2ε for every rectangle distinguisher d.
Proof. Let g be the partition decomposition with arbitrary weights given by Claim 14, and let g' be the partition decomposition with weights equal to the average of 1_A on each rectangle. It is enough to show that for all rectangle distinguishers d we have |E[(g − g')·d]| ≤ ε.
By the triangle inequality, we have that
|E[(1_A − g')·d]| ≤ |E[(1_A − g)·d]| + |E[(g − g')·d]| ≤ ε + |E[(g − g')·d]|.
To bound |E[(g − g')·d]|, note that the error is maximized for a d that respects the decomposition into non-overlapping rectangles, i.e., d is the union of some of the non-overlapping rectangles from the decomposition. This can be argued using that, unlike 1_A, the values of g and g' on a rectangle from the decomposition are fixed. But, from the point of “view” of such a d, g' equals 1_A! More formally, E[g'·d] = E[1_A·d]. This gives
|E[(g − g')·d]| = |E[(g − 1_A)·d]| ≤ ε,
and concludes the proof.
We need to get still a little more from this decomposition. In our application of the weak regularity lemma above, we took the set of distinguishers to be characteristic functions of rectangles. That is, distinguishers that can be written as U(x)·V(y), where U and V map G to {0,1}. We will use that the same guarantee holds for U and V with range [−1, 1], up to a constant factor loss in the error. Indeed, let U and V have range [−1, 1]. Write U = U_+ − U_−, where U_+ and U_− have range [0, 1], and do the same for V. The error for the distinguisher U·V is at most the sum of the errors for the distinguishers U_+·V_+, U_+·V_−, U_−·V_+, and U_−·V_−. So we can restrict our attention to distinguishers U·V where U and V have range [0, 1]. In turn, a function U with range [0, 1] can be written as an expectation E_a[U_a] over functions U_a with range {0, 1}, and the same for V. We conclude by observing that
|E_{x,y}[e(x, y) · E_a[U_a(x)] · E_b[V_b(y)]]| ≤ max_{a,b} |E_{x,y}[e(x, y) · U_a(x) · V_b(y)]| ≤ ε.
7.4 Proof
Let us now finish the proof by showing that a corner exists for sufficiently dense sets A ⊆ G × G. We’ll use three types of decompositions for 1_A, with respect to the following three types of distinguishers, where U and V have range {0, 1}:
- d_1(x, y) = U(x)·V(y),
- d_2(x, y) = U(x)·V(x·y),
- d_3(x, y) = U(x·y)·V(y).
The first type is just rectangles, what we have been discussing until now. The distinguishers in the last two classes can be visualized, over the integers, as parallelograms with a 45-degree angle. The same extra properties we discussed for rectangles can be verified to hold for them too.
Recall that we want to show
E_{x,y,z}[1_A(x, y) · 1_A(xz, y) · 1_A(x, zy)] > 1/|G|.
We’ll decompose the i-th occurrence of 1_A via the i-th decomposition listed above. We’ll write this decomposition as 1_A = g_i + e_i. We apply this in a certain order to produce sums of products of three functions. The inputs to the functions don’t change, so to avoid clutter we do not write them, and it is understood that in each product of three functions the inputs are, in order, (x, y), (xz, y), and (x, zy). The decomposition is:
1_A · 1_A · 1_A = g_1·g_2·g_3 + e_1·1_A·1_A + g_1·e_2·1_A + g_1·g_2·e_3.
We first show that the expectation of the first term is big. This takes the next two claims. Then we show that the expectations of the other terms are small.
Proof. We just need to get error for any product of three functions for the three decomposition types. We have:
This is similar to what we discussed in the overview, and is where we use mixing. Specifically, if or are at most for a small enough constant then we are done. Otherwise, conditioned on , , the distribution is a product of distributions, each uniform over a set of density , and the same holds for , and the result follows by Lemma 7.
Recall that we start with a set of density .
Proof. We will relate the expectation over to using the Hölder inequality: For random variables ,
To apply this inequality in our setting, write
By the Hölder inequality the expectation of the right-hand side is
The last three terms equal to because
where is the set in the partition that contains . Putting the above together we obtain
Finally, because the functions are positive, we have that . This concludes the proof.
It remains to show the other terms are small. Let ε be the error in the weak regularity lemma with respect to distinguishers with range {0, 1}. Recall that this implies error O(ε) with respect to distinguishers with range [−1, 1]. We give the proof for one of the terms and then we say a little about the other two.
The proof involves changing names of variables and doing Cauchy-Schwarz to remove the terms with and bound the expectation above by , which is small by the regularity lemma.
Proof. Replace with in the uniform distribution to get
where the first inequality is by Cauchy-Schwarz.
Now replace and reason in the same way:
Replace to rewrite the expectation as
We want to view the last three terms as a distinguisher . First, note that has range . This is because and has range , where recall that is the set in the partition that contains . Fix . The last term in the expectation becomes a constant . The second term only depends on , and the third only on . Hence for appropriate functions and with range this expectation can be rewritten as
which concludes the proof.
There are similar proofs to show the remaining terms are small. For , we can perform simple manipulations and then reduce to the above case. For , we have a slightly easier proof than above.
7.4.1 Parameters
Suppose our set has density , and the error in the regularity lemma is . By the above results we can bound
where the terms in the right-hand side come, left to right, from Claims 17, 16, and 18. Picking the proof is completed for sufficiently small .
References
[AL00] Andris Ambainis and Satyanarayana V. Lokam. Improved upper bounds on the simultaneous messages complexity of the generalized addressing function. In Latin American Symposium on Theoretical Informatics (LATIN), pages 207–216, 2000.
[Amb96] Andris Ambainis. Upper bounds on multiparty communication complexity of shifts. In Symp. on Theoretical Aspects of Computer Science (STACS), pages 631–642, 1996.
[AMS99] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. of Computer and System Sciences, 58(1, part 2):137–147, 1999.
[Aus16] Tim Austin. Ajtai-Szemerédi theorems over quasirandom groups. In Recent trends in combinatorics, volume 159 of IMA Vol. Math. Appl., pages 453–484. Springer, [Cham], 2016.
[Bar89] David A. Mix Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC^1. J. of Computer and System Sciences, 38(1):150–164, 1989.
[BC92] Michael Ben-Or and Richard Cleve. Computing algebraic formulas using a constant number of registers. SIAM J. on Computing, 21(1):54–58, 1992.
[BDPW10] Paul Beame, Matei David, Toniann Pitassi, and Philipp Woelfel. Separating deterministic from randomized multiparty communication complexity. Theory of Computing, 6(1):201–225, 2010.
[BGKL03] László Babai, Anna Gál, Peter G. Kimmel, and Satyanarayana V. Lokam. Communication complexity of simultaneous messages. SIAM J. on Computing, 33(1):137–166, 2003.
[BNP08] László Babai, Nikolay Nikolov, and László Pyber. Product growth and mixing in finite groups. In ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 248–257, 2008.
[CFL83] Ashok K. Chandra, Merrick L. Furst, and Richard J. Lipton. Multi-party protocols. In 15th ACM Symp. on the Theory of Computing (STOC), pages 94–99, 1983.
[CP10] Arkadev Chattopadhyay and Toniann Pitassi. The story of set disjointness. SIGACT News, 41(3):59–85, 2010.
[DHKP97] Martin Dietzfelbinger, Torben Hagerup, Jyrki Katajainen, and Martti Penttonen. A reliable randomized algorithm for the closest-pair problem. J. Algorithms, 25(1):19–51, 1997.
[FK96] Alan M. Frieze and Ravi Kannan. The regularity lemma and approximation schemes for dense problems. In IEEE Symp. on Foundations of Computer Science (FOCS), pages 12–20, 1996.
[Gow08] W. T. Gowers. Quasirandom groups. Combinatorics, Probability & Computing, 17(3):363–387, 2008.
[Gre05a] Ben Green. An argument of Shkredov in the finite field setting, 2005. Available at people.maths.ox.ac.uk/greenbj/papers/corners.pdf.
[Gre05b] Ben Green. Finite field models in additive combinatorics. Surveys in Combinatorics, London Math. Soc. Lecture Notes 327, 1-27, 2005.
[GVa] W. T. Gowers and Emanuele Viola. Interleaved group products. SIAM J. on Computing.
[GVb] W. T. Gowers and Emanuele Viola. The multiparty communication complexity of interleaved group products. SIAM J. on Computing.
[GV15] W. T. Gowers and Emanuele Viola. The communication complexity of interleaved group products. In ACM Symp. on the Theory of Computing (STOC), 2015.
[IL95] Neil Immerman and Susan Landau. The complexity of iterated multiplication. Inf. Comput., 116(1):103–116, 1995.
[KMR66] Kenneth Krohn, W. D. Maurer, and John Rhodes. Realizing complex Boolean functions with simple groups. Information and Control, 9:190–195, 1966.
[KN97] Eyal Kushilevitz and Noam Nisan. Communication complexity. Cambridge University Press, 1997.
[KS92] Bala Kalyanasundaram and Georg Schnitger. The probabilistic communication complexity of set intersection. SIAM J. Discrete Math., 5(4):545–557, 1992.
[LM07] Michael T. Lacey and William McClain. On an argument of Shkredov on two-dimensional corners. Online J. Anal. Comb., (2):Art. 2, 21, 2007.
[LW54] Serge Lang and André Weil. Number of points of varieties in finite fields. American Journal of Mathematics, 76:819–827, 1954.
[Mil14] Eric Miles. Iterated group products and leakage resilience against NC^1. In ACM Innovations in Theoretical Computer Science conf. (ITCS), 2014.
[MV13] Eric Miles and Emanuele Viola. Shielding circuits with groups. In ACM Symp. on the Theory of Computing (STOC), 2013.
[PRS97] Pavel Pudlák, Vojtěch Rödl, and Jiří Sgall. Boolean circuits, tensor ranks, and communication complexity. SIAM J. on Computing, 26(3):605–633, 1997.
[Raz92] Alexander A. Razborov. On the distributional complexity of disjointness. Theor. Comput. Sci., 106(2):385–390, 1992.
[Raz00] Ran Raz. The BNS-Chung criterion for multi-party communication complexity. Computational Complexity, 9(2):113–122, 2000.
[RY19] Anup Rao and Amir Yehudayoff. Communication complexity. 2019. https://homes.cs.washington.edu/~anuprao/pubs/book.pdf.
[Sha16] Aner Shalev. Mixing, communication complexity and conjectures of Gowers and Viola. Combinatorics, Probability and Computing, pages 1–13, 6 2016. arXiv:1601.00795.
[She14] Alexander A. Sherstov. Communication complexity theory: Thirty-five years of set disjointness. In Symp. on Math. Foundations of Computer Science (MFCS), pages 24–43, 2014.
[Tao17] Terence Tao. Szemerédi’s proof of Szemerédi’s theorem, 2017. https://terrytao.files.wordpress.com/2017/09/szemeredi-proof1.pdf.
[Vioa] Emanuele Viola. Thoughts: Mixing in groups. https://emanueleviola.wordpress.com/2016/10/21/mixing-in-groups/.
[Viob] Emanuele Viola. Thoughts: Mixing in groups ii. https://emanueleviola.wordpress.com/2016/11/15/mixing-in-groups-ii/.
[Vio14] Emanuele Viola. The communication complexity of addition. Combinatorica, pages 1–45, 2014.
[Vio17] Emanuele Viola. Special topics in complexity theory. Lecture notes of the class taught at Northeastern University. Available at http://www.ccs.neu.edu/home/viola/classes/spepf17.html, 2017.
[Yao79] Andrew Chi-Chih Yao. Some complexity questions related to distributive computing. In 11th ACM Symp. on the Theory of Computing (STOC), pages 209–213, 1979.