On the recent proof of the 2-to-2 conjecture

Windows On Theory 2018-03-12

As I posted before, Khot, Minzer and Safra recently posted a manuscript which is the culmination of a beautiful line of work, initiated by the same authors, and completes the proof of (the imperfect completeness variant of) Khot’s 2-to-2 conjecture.

An immediate corollary is that for every \epsilon>0, it is NP-hard to distinguish between a unique games instance of value at least 1/2-\epsilon and an instance of value at most \epsilon. (The value of a constraint satisfaction problem is the maximum fraction of constraints one can satisfy.) This is interesting because in this parameter regime there is a subexponential time algorithm (of Arora, me and Steurer), and hence this is a proof of the intermediate complexity conjecture. It is also very strong evidence that the full unique games conjecture is true.

The proof of the 2-to-2 conjecture is obtained by combining (i) a reduction of Dinur, Khot, Kindler, Minzer and Safra (DKKMS), (ii) a (soon to be posted) manuscript of mine with Kothari and Steurer showing that the soundness of the DKKMS reduction holds modulo a conjecture characterizing the non-expanding sets of the Grassmann graph (or equivalently, the degree two short code graph), and (iii) this latest KMS manuscript, which proves the latter conjecture. (Most of the “heavy lifting” is done in works (i) and (iii), of DKKMS and KMS respectively.)

On Friday I gave the first in a two-part series of talks with an exposition of this result in our reading group. I plan to post the full scribe notes of these talks here, but thought I’d start with a short “teaser”. In particular, if I’m not mistaken (and it is possible I am), then one can describe the actual reduction underlying the proof in about 5 lines (see below). Of course, the analysis takes much longer…

In my exposition I do not follow the approach of the above papers. Rather, I present a combined version of the DKKMS paper and our manuscript which bypasses the need to talk about Grassmann graphs altogether. Also, while the original DKKMS paper uses a particular reduction from 3XOR to label cover as a starting point, I tried to “abstract away” the properties that are really needed. I should note that the reduction I describe below is “morally related” to, but not identical to, the one used by DKKMS, and I have not written down a full proof of its analysis, so it is possible that I am making a mistake. Still, I find it much easier to describe and understand, and so I prefer this exposition to the one in the papers.

 

The reduction

The Khot, Minzer and Safra manuscript completes the proof of the following theorem:

Theorem: For every \epsilon>0, it is NP-hard to distinguish between a unique games instance where at least a 1/2-\epsilon fraction of the constraints can be satisfied, and an instance where at most an \epsilon fraction can be satisfied.

Any NP-hardness result is composed of three parts:

  1. A reduction from a previously known NP-hard problem.
  2. A completeness analysis, showing that a “yes instance” of the original problem corresponds (in our case) to a unique games instance with value at least 1/2-\epsilon.
  3. A soundness analysis, showing that one can decode an assignment of value at least \epsilon for the unique games instance into an assignment for the instance of the original problem certifying that it was a yes instance.

In this blog post I will show “two thirds” of the analysis by presenting parts 1 and 2. To get some sense of proportion, part 1 took me about ten minutes to present in my talk, part 2 about two minutes, and the rest of the six hours is dedicated to part 3 (though that is an underestimate, since I will probably not get to cover much of KMS’s proof that the combinatorial conjecture is true…).

The initial problem

The NP-hard problem we start from is the label cover problem, which has been used time and again for hardness of approximation results. Specifically, we consider the following game:

  • The verifier chooses a pair (i,j) of indices at random from some G \subseteq [n]\times [m].
  • She sends i to the first prover and j to the second prover, and gets back answers x_i and y_j respectively.
  • She accepts the answers if y_j = f_{i,j}(x_i) for some particular function f_{i,j}.

We will look at the case of an affine label cover where for every i,j, f_{i,j} is an affine function from GF(2)^D to GF(2)^{D-d} for some constant integers D>d>0. We can think of an instance of this problem as a graph G where an edge i\sim j is labeled by f_{i,j}. It is known that for every \delta>0, there are large enough D,d such that it is NP-hard to distinguish between an instance where the provers can cause the verifier to accept with probability at least 1-\delta and an instance where every strategy would cause it to accept with probability at most \delta.
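To make the notation concrete, here is a tiny numpy sketch of a single affine constraint. The dimensions D=4, d=1 and the random choice of f_{i,j} below are made up purely for illustration; the actual hardness result needs large constants D,d.

```python
import numpy as np

# One affine label-cover constraint over GF(2), with made-up dimensions.
D, d = 4, 1
rng = np.random.default_rng(0)

A = rng.integers(0, 2, size=(D - d, D))   # linear part of f_{i,j}
b = rng.integers(0, 2, size=D - d)        # affine shift of f_{i,j}

def f_ij(x):
    """The affine map f_{i,j}: GF(2)^D -> GF(2)^{D-d}, i.e. x |-> Ax + b (mod 2)."""
    return (A @ x + b) % 2

x_i = rng.integers(0, 2, size=D)   # the first prover's answer
y_j = f_ij(x_i)                    # a second-prover answer chosen to satisfy the constraint

assert np.array_equal(y_j, f_ij(x_i))   # the verifier's check y_j = f_{i,j}(x_i), satisfied by construction here
```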

In our context we will need two extra properties of “smoothness” and “robust soundness” (or “soundness with respect to advice”) which I will not formally define here, but can be achieved by a “noisy variant” of the standard parallel repetition from the 3XOR problem.

The reduction

Given an instance \{ y_j = f_{i,j}(x_i) \}_{(i,j)\in G}, we construct a unique games instance over alphabet GF(2)^\ell as follows:

  • The verifier chooses (i,j) \in G and f_{i,j} as before.
  • She chooses a random affine function v:GF(2)^{D-d} \rightarrow GF(2)^\ell, a random rank one linear function e: GF(2)^D \rightarrow GF(2)^\ell and a random invertible affine function g:GF(2)^\ell \rightarrow GF(2)^\ell.
  • She sends j,v to the second prover and i,u to the first prover, where u=g(vf_{i,j}+e) (we use product notation here for function composition; note that u is an affine function from GF(2)^D to GF(2)^\ell).
  • She gets back the answers \tilde{x} and \tilde{y} in GF(2)^\ell from the first and second prover respectively, and accepts if and only if \tilde{y} = g^{-1}(\tilde{x}). (Note that the constraint \tilde{y}= g^{-1}(\tilde{x}) is indeed a unique constraint.)

 

A rank one linear function e:GF(2)^D \rightarrow GF(2)^\ell is a function of the form e(x) = \langle e',x \rangle e'' where e' \in GF(2)^D and e'' \in GF(2)^\ell. (In matrix notation this is the rank one \ell\times D matrix e=e''(e')^\top.)
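For concreteness, here is a small numpy sketch (with arbitrary illustrative choices of D, \ell and the vectors e', e'') checking that the “inner product” and “outer product matrix” descriptions of a rank one map agree:

```python
import numpy as np

# A rank one linear map e: GF(2)^D -> GF(2)^ell, written in both forms from the text.
D, ell = 6, 3
rng = np.random.default_rng(2)
e_prime = rng.integers(0, 2, size=D)      # e'  in GF(2)^D
e_dprime = rng.integers(0, 2, size=ell)   # e'' in GF(2)^ell

def e_functional(x):
    """e(x) = <e', x> e'', with the inner product taken mod 2."""
    return ((e_prime @ x) % 2) * e_dprime

E = np.outer(e_dprime, e_prime) % 2       # the ell x D matrix e'' (e')^T

x = rng.integers(0, 2, size=D)
assert np.array_equal(e_functional(x), (E @ x) % 2)   # both descriptions give the same map
```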

 

As promised, completeness is not hard to show:

Lemma: If there is a prover strategy convincing the verifier with probability at least 1-\delta for the original label cover game, then there is a strategy convincing the verifier with probability at least 1/2-\delta/2 in the resulting unique game.

Proof: Suppose there is such a strategy \{ x_i,y_j \} for the original label cover game. The provers in the unique game will answer the questions (i,u) and (j,v) with u(x_i) and v(y_j) respectively. Now suppose (as will be the case with probability at least 1-\delta) that y_j = f_{i,j}(x_i). For every fixed x_i\in GF(2)^D, if we choose e(x) = \langle e',x \rangle e'' to be a random rank one matrix, then with probability 1/2 we have \langle e',x_i \rangle =0, in which case e(x_i) =0^\ell. In that case u(x_i)=g(vf_{i,j}+e)(x_i)= g(vf_{i,j}(x_i)+e(x_i)) = g(vf_{i,j}(x_i)) = g(v(y_j)) under our assumption that y_j = f_{i,j}(x_i), and hence applying g^{-1} to the first prover’s answer yields the second prover’s answer, so the verifier accepts.
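To see the completeness argument “in action”, here is a small Monte Carlo sketch of the reduction on a single edge, with made-up small parameters and the honest prover strategy from the proof. Here e is sampled by taking e' and e'' uniformly at random (so it has rank at most one), and the empirical acceptance probability comes out a bit above 1/2, consistent with the lemma’s bound of 1/2-\delta/2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Affine maps over GF(2) are stored as pairs (M, c), meaning x |-> (M @ x + c) mod 2.
def apply_affine(phi, x):
    M, c = phi
    return (M @ x + c) % 2

def compose(phi, psi):
    """phi o psi for affine maps over GF(2)."""
    (M1, c1), (M2, c2) = phi, psi
    return ((M1 @ M2) % 2, (M1 @ c2 + c1) % 2)

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for col in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, col]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, col]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return r

# Made-up small parameters and a single label-cover edge (i, j) with constraint f_{i,j}.
D, d, ell = 8, 2, 5
f = (rng.integers(0, 2, size=(D - d, D)), rng.integers(0, 2, size=D - d))

# An honest label-cover strategy on this edge: y_j = f_{i,j}(x_i), i.e. delta = 0.
x_i = rng.integers(0, 2, size=D)
y_j = apply_affine(f, x_i)

def one_round():
    # Verifier's randomness: affine v, (at most) rank one linear e, invertible affine g.
    v = (rng.integers(0, 2, size=(ell, D - d)), rng.integers(0, 2, size=ell))
    e = np.outer(rng.integers(0, 2, size=ell), rng.integers(0, 2, size=D)) % 2
    while True:
        G = rng.integers(0, 2, size=(ell, ell))
        if gf2_rank(G) == ell:          # resample until the linear part of g is invertible
            break
    g = (G, rng.integers(0, 2, size=ell))

    # Question to the first prover: u = g o (v f_{i,j} + e); adding the linear map e
    # changes only the linear part of v o f_{i,j}.
    vf = compose(v, f)
    u = compose(g, ((vf[0] + e) % 2, vf[1]))

    # Honest provers answer u(x_i) and v(y_j); the verifier checks tilde_y = g^{-1}(tilde_x),
    # which (since g is a bijection) is the same as g(tilde_y) = tilde_x.
    tilde_x, tilde_y = apply_affine(u, x_i), apply_affine(v, y_j)
    return np.array_equal(apply_affine(g, tilde_y), tilde_x)

trials = 2000
print(sum(one_round() for _ in range(trials)) / trials)   # slightly above 1/2 on this instance
```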

(To get the full unique games conjecture we would need completeness close to 1, but unfortunately it’s not easy to construct the field GF(1.1) or a rank 0.1 matrix…)

Two comments

There is plenty more to say about this reduction, its analysis, and all the open questions and research directions that it opens, and I do hope to post the full lecture notes soon. But for now let me make two quick comments.

Quantity makes quality

One can think of this reduction as following the standard paradigm of taking an original label cover game with alphabet GF(2)^D and encoding each symbol using an error correcting code. The most common code for this is the long code, which has codewords of length 2^{2^D}, and a more efficient version is the short code, which has codewords of length 2^{D^{O(1)}}. The reduction of DKKMS can be thought of as using a “tensored Hadamard code” or “unbalanced degree two short code” over alphabet GF(2)^\ell with codewords of length 2^{\ell\cdot D+\ell}, where a string x\in GF(2)^D is mapped to its evaluations by all affine functions v:GF(2)^D \rightarrow GF(2)^\ell. (More accurately, DKKMS use a “folded version” of this code where one has a coordinate for every \ell-dimensional subspace L \subseteq GF(2)^D; this does not make much difference for the current discussion.) For constant \ell, this means the codewords are of length 2^{O(D)} rather than 2^{D^{O(1)}} as in the short code.

While this quasipolynomial difference between the short code and the “tensored Hadamard code” might seem mild, it turns out to be absolutely crucial for the reduction to go through. In fact, I believe that finding a code that behaves similarly to the degree c short code but with codewords of length 2^{f(c)\cdot D} (as opposed to 2^{D^c}) would result in a proof of the full unique games conjecture.

Hardness from easiness

It turns out that the analysis of the reduction rests on characterizing the non-expanding subsets of the graph on affine functions u:GF(2)^D \rightarrow GF(2)^\ell, where there is an edge between u and u' if u=u'+e for a rank one e. By now researchers have developed an intuition that if we stare at such natural graphs hard enough, we can figure out all the non-expanding small sets, and indeed this intuition was verified by the KMS manuscript for this particular graph. But this intuition might seem somewhat at odds with the competing intuition that the small set expansion problem (a close variant of the unique games problem) should be hard. One way to resolve this conundrum is that while the unique games problem may well be hard in the worst case, it is extremely hard to come up with actual hard instances for it. Like Trump supporters with Ph.D.s, such instances might exist, but are rarely seen in the wild.
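As a toy illustration of this graph, here is a brute-force enumeration for the (unrealistically small) parameters D=3, \ell=2, together with one candidate non-expanding set: the “zoom-in” style set of all affine functions taking a prescribed value at a single fixed point. The parameters and the choice of set are mine, for illustration only. A random neighbor of a vertex in the set stays inside it noticeably more often than the set’s measure would suggest, which is exactly the kind of structure that the characterization of non-expanding sets has to account for.

```python
import itertools
import numpy as np

# Tiny instance of the "rank one shift" graph: vertices are affine u: GF(2)^D -> GF(2)^ell,
# with an edge u ~ u' whenever u = u' + e for a rank one linear e.
D, ell = 3, 2

def all_vectors(n):
    return [np.array(v, dtype=int) for v in itertools.product([0, 1], repeat=n)]

# All rank one linear maps e(x) = <e', x> e'' with e', e'' both nonzero.
rank_one = [np.outer(e2, e1) % 2
            for e1 in all_vectors(D) if e1.any()
            for e2 in all_vectors(ell) if e2.any()]

# All affine maps, stored as (M, c) with u(x) = (M @ x + c) mod 2.
affine_maps = [(np.array(rows, dtype=int), np.array(c, dtype=int))
               for rows in itertools.product(itertools.product([0, 1], repeat=D), repeat=ell)
               for c in itertools.product([0, 1], repeat=ell)]

a = np.array([1, 0, 0])   # a fixed point in GF(2)^D
b = np.array([1, 1])      # a prescribed value in GF(2)^ell
S = [(M, c) for (M, c) in affine_maps if np.array_equal((M @ a + c) % 2, b)]

# For u in S and a uniformly random neighbor u + e, how often does the neighbor stay in S?
stay = sum(np.array_equal(((M + E) @ a + c) % 2, b) for (M, c) in S for E in rank_one)
print("measure of S:", len(S) / len(affine_maps))                          # 2^{-ell} = 0.25
print("Pr[random neighbor stays in S]:", stay / (len(S) * len(rank_one)))  # 3/7, noticeably larger
```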