NP is Hard
Computational Complexity 2018-03-12
You don't get much press by stating conventional beliefs--there is no "round earth society". Nevertheless, there are serious researchers out there arguing that it is reasonably possible, if not likely, that P = NP. Don't believe it.
Emanuele Viola just outright states "I Believe P = NP". He starts with an argument similar to one we've seen from Richard Lipton: we've had surprising results in the past, so why should the conventional wisdom that P ≠ NP be true? I have two major problems with this line of reasoning:
- These arguments cherry-pick certain results and ignore the vast majority of theorems that went the way we expected. It's akin to saying that because there have been lottery winners in the past, the ticket I hold in my hand is likely to be a winner.
- All the major surprises in complexity occurred in the first two decades of the P v NP era; we've seen no surprises at that level since the early 90s. We have seen surprising proofs of results like Primes in P, Undirected Connectivity in Log Space and NEXP not in ACC0, but the theorems themselves surprised no one.
Viola also makes the argument that we've seen fewer significant lower bound results than upper bounds. This doesn't surprise me: an upper bound requires exhibiting a single algorithm, while a lower bound requires showing that an infinite number of algorithms can't work. I fully agree that we're unlikely to see a proof that P ≠ NP in the near future, but that isn't a reason to think the classes are the same. And we've seen no non-trivial algorithms for general NP problems like circuit-SAT.
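To make that last point concrete, here is a minimal Python sketch (my own illustration; `circuit_sat_brute_force` is a hypothetical name, not a library function) of the trivial exhaustive search for circuit-SAT. For general circuits, nothing substantially faster than this 2^n search is known:

```python
from itertools import product

def circuit_sat_brute_force(circuit, n):
    """Try all 2^n assignments, looking for one that satisfies the
    circuit. `circuit` is any function from an n-bit tuple to a bool.
    This is the trivial exponential-time algorithm; for general
    circuits no substantially better algorithm is known."""
    for assignment in product([0, 1], repeat=n):
        if circuit(assignment):
            return assignment  # satisfying assignment found
    return None  # the circuit is unsatisfiable

# Example: a toy 3-input "circuit" computing (x AND y) OR NOT z.
toy = lambda a: (a[0] and a[1]) or not a[2]
print(circuit_sat_brute_force(toy, 3))  # -> (0, 0, 0)
```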
Which takes me to Ryan Williams' "Some Estimated Likelihoods For Computational Complexity", where he gives a 20% probability (which I consider nearly 20% too high) that P = NP based on similar logic. Ryan surprises me even more by giving a whopping 80% chance that NEXP = coNEXP. That would imply every easily describable tautology has a short proof of being a tautology (for the appropriate definitions of describable and short), which I find hard to fathom.
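To spell out why (my framing of the standard argument, not taken from Ryan's post): the succinct version of TAUT, where a tautology is described by a small circuit, is coNEXP-complete, so the collapse would put it in NEXP:

```latex
% SuccinctTAUT: circuits C whose exponential-size truth table encodes
% a propositional formula \varphi_C that is a tautology
% (coNEXP-complete under the usual succinct-version reductions).
\[
  \mathrm{NEXP} = \mathrm{coNEXP}
  \;\Longrightarrow\;
  \textsc{SuccinctTaut} \in \mathrm{NEXP},
\]
% so a tautology described by a size-$n$ circuit would have a
% certificate of size $2^{\mathrm{poly}(n)}$, verifiable in time
% $2^{\mathrm{poly}(n)}$: exponentially shorter than the
% $2^{2^{\Theta(n)}}$ assignments that brute-force evaluation
% of $\varphi_C$ would examine.
```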
If I had to guess which collapses might happen, I would suggest L = NL (directed connectivity in log space) or one to which Ryan gives even odds, TC0 = NC1. TC0 captures constant-depth neural networks, and as we've seen from the ML community, those neural nets look awfully powerful.
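To illustrate that last point (a sketch of my own, not from Ryan's post): a TC0 gate is a threshold gate, which is exactly a neuron with a step activation, so a constant-depth, polynomial-size threshold circuit is literally a constant-depth neural net:

```python
def threshold_gate(weights, threshold, inputs):
    """A single TC0 gate: output 1 iff the weighted sum of the Boolean
    inputs meets the threshold. This is precisely a neuron with a step
    activation, which is why constant-depth threshold circuits model
    constant-depth neural networks."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def majority(inputs):
    # MAJORITY, the canonical TC0 gate: all weights 1, threshold n/2.
    return threshold_gate([1] * len(inputs), len(inputs) / 2, inputs)

# A depth-2 threshold circuit: one layer of gates feeding the next.
def depth2_example(x):
    layer1 = [threshold_gate([1, 1, -1], 1, x), majority(x)]
    return threshold_gate([1, 1], 2, layer1)  # AND of the two gates

print(depth2_example([1, 1, 0]))  # -> 1
```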