Hardness amplification proofs require majority… and 15 years
Thoughts 2018-04-09
Aryeh Grinberg, Ronen Shaltiel, and I have just posted a paper which proves conjectures I made 15 years ago (historians will want to consult the last paragraph of [2] and my Ph.D. thesis).
At that time, I was studying hardness amplification, a cool technique to take a function $f \colon \{0,1\}^k \to \{0,1\}$ that is somewhat hard on average, and transform it into another function $f' \colon \{0,1\}^n \to \{0,1\}$ that is much harder on average. If you call a function $\delta$-hard if it cannot be computed on more than a $1-\delta$ fraction of the inputs, you can start e.g. with $f$ that is $0.1$-hard and obtain $f'$ that is $(1/2 - 1/n^{100})$-hard, or more. This is very important because functions with the latter hardness imply pseudorandom generators with Nisan’s design technique, and also “additional” lower bounds using the “discriminator lemma.”
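To fix ideas, the standard circuit-based way to make “$\delta$-hard” precise is the following (the size bound $s$ is left implicit in the informal discussion above; this formalization is a standard one, not a quote from the paper):

$$f \text{ is } \delta\text{-hard for size } s \iff \text{every circuit } C \text{ of size at most } s \text{ satisfies } \Pr_{x \in \{0,1\}^k}\bigl[C(x) = f(x)\bigr] \le 1 - \delta .$$

So $0.1$-hard means every size-$s$ circuit errs on at least $10\%$ of the inputs, while $(1/2 - \epsilon)$-hard means no such circuit does noticeably better than random guessing.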
The simplest and most famous technique is Yao’s XOR lemma, where

$$f'(x_1, x_2, \ldots, x_t) := f(x_1) \oplus f(x_2) \oplus \cdots \oplus f(x_t),$$

and the hardness of $f'$ decays exponentially with $t$. (So to achieve the parameters above it suffices to take $t = O(\log k)$.)
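Quantitatively, the statement behind “decays exponentially” is, in one common formulation (the size loss and the exact constants differ between proofs, so take the parameters below as indicative only):

$$\text{if } f \text{ is } \delta\text{-hard for size } s, \text{ then every circuit of size } s' \text{ computes } f' \text{ on at most a } \frac{1}{2} + \frac{(1-2\delta)^t}{2} + \epsilon \text{ fraction of inputs},$$

where $s'$ is polynomially smaller than $s$ (depending on $\epsilon$ and $t$). With $\delta = 0.1$ the middle term is $0.8^t$, which drops below $1/n^{100}$ already for $t = O(\log n)$.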
At the same time I was also interested in circuit lower bounds, so it was natural to try to use this technique for classes for which we do have lower bounds. So I tried, and… oops, it does not work! In all known techniques, the reduction circuit cannot be implemented in a class smaller than TC$^0$ – a class for which we don’t have lower bounds and for which we think it will be hard to get them, also because of the Natural proofs barrier.
Eventually, I conjectured that this is inherent, namely that you can take any hardness amplification reduction, or proof, and use it to compute majority. To be clear, this conjecture applied to black-box proofs: decoding arguments which take anything that computes $f'$ too well and turn it into something which computes $f$ too well. There were several partial results, but they all had to restrict the proof further, and did not capture all available techniques.
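Schematically, and in notation of my own choosing rather than the paper’s, a black-box proof is an oracle reduction $R$ with a guarantee of the form

$$\text{for every } h \text{ with } \Pr_x\bigl[h(x) = f'(x)\bigr] \ge \frac{1}{2} + \epsilon, \text{ there is advice } \alpha \text{ such that } \Pr_x\bigl[R^{h}(x, \alpha) = f(x)\bigr] \ge 1 - \delta .$$

The conjecture, now a theorem, says roughly that from any such $R$ one can obtain a circuit of comparable complexity computing majority on about $1/\epsilon$ bits.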
In case you still harbored hope that black-box proofs might do the job: in this paper we prove the full conjecture (improving on a number of incomparable works in the literature, including a 10-year-anniversary work by Shaltiel and me which proved the conjecture for non-adaptive proofs).
Indistinguishability
One thing that comes up in the proof is the following basic problem. You have a distribution $X$ on $n$ bits that has large entropy, very close to $n$. A classic result shows that most bits of $X$ are close to uniform. We needed an adaptive version of this, showing that a decision tree making few queries cannot distinguish $X$ from uniform, as long as the tree does not query a certain small forbidden set of variables. This also follows from recent and independent work of Or Meir and Avi Wigderson.
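For intuition, here is a folklore way to see the classic (non-adaptive) statement; this back-of-the-envelope calculation is mine, not the paper’s:

$$H(X) \ge n - d \;\Longrightarrow\; \sum_{i=1}^{n} \bigl(1 - H(X_i)\bigr) \le d \quad \text{(subadditivity of entropy)},$$

so all but at most $\sqrt{dn}$ indices $i$ have $1 - H(X_i) \le \sqrt{d/n}$, and by Pinsker’s inequality each such bit $X_i$ is within statistical distance $O\bigl((d/n)^{1/4}\bigr)$ of uniform.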
Turns out this natural extension is not enough for us. In a nutshell, it is difficult to understand what queries an arbitrary reduction is making, and so it is hard to guarantee that the reduction does not query the forbidden set. So we prove a variant, where the variables are not forbidden, but are fixed. Basically, you condition on some fixing of few variables, and then the resulting distribution is indistinguishable from the distribution where $X$ is uniform. Now the queries are not forbidden but have a fixed answer, and this makes things much easier. (Incidentally, you can’t get this simply by fixing the forbidden set.)
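In symbols, the shape of the new statement is roughly the following; this is my schematic paraphrase, with the quantification over fixings and all parameters left abstract (see the paper for the precise lemma):

$$\text{there is a small set } S \text{ of variables and an assignment } \sigma \text{ such that } \bigl(X \mid X_S = \sigma\bigr) \text{ and } \bigl(U \mid U_S = \sigma\bigr) \text{ are indistinguishable by decision trees making few queries.}$$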
Fine, so what?
One great question remains. Can you think of a counter-example to the XOR lemma for a class such as constant-depth circuits with parity gates?
But there is a further reason why I am interested in this. Proving average-case hardness results for restricted classes “just” beyond AC$^0$ is more than a long-standing open question in lower bounds: it is necessary even for worst-case lower bounds, both in circuit and communication complexity, as we discussed earlier. And here’s hardness amplification, which intuitively should provide such hardness results. It has been given many different proofs, see e.g. [1]. However, none can be applied, as we just saw. I don’t know, someone taking results at face value may even start thinking that such average-case hardness results are actually false.
References
[1] Oded Goldreich, Noam Nisan, and Avi Wigderson. On Yao’s XOR lemma. Technical Report TR95–050, Electronic Colloquium on Computational Complexity, March 1995. http://www.eccc.uni-trier.de/.
[2] Emanuele Viola. The complexity of constructing pseudorandom generators from hard functions. Computational Complexity, 13(3-4):147–188, 2004.