Four striking papers

Shtetl-Optimized 2020-05-13

In the past week or two, four striking papers appeared on quant-ph. Rather than doing my usual thing—envisioning a huge, meaty blog post about each paper, but then procrastinating on writing those posts until they’re no longer even relevant—I thought I’d just write a paragraph about each paper and then open things up for discussion.

(1) Matt Hastings has announced the first provable superpolynomial black-box speedup for the quantum adiabatic algorithm (in its original, stoquastic version). The speedup is only quasipolynomial (n^log(n)) rather than exponential, and it’s for a contrived example (involving winding numbers, just like in the important earlier work by Freedman and Hastings), and there are no obvious near-term practical implications. But still! Twenty years after Farhi and his collaborators wrote the first paper on the quantum adiabatic algorithm, and 13 years after D-Wave made its first hype-laden announcement, this is (to my mind) the first strong theoretical indication that adiabatic evolution with no sign problem can ever get a superpolynomial speedup not only over simulated annealing, not only over Quantum Monte Carlo, but over all possible classical algorithms. (This had previously been shown only for a variant of the adiabatic algorithm that jumps up to the first excited state, by Nagaj, Somma, and Kieferova.) As such, assuming the result holds up, Hastings resolves a central question that I (for one) had repeatedly asked about for almost 20 years. Indeed, if memory serves, at an Aspen quantum algorithms meeting a few years ago, I strongly urged Hastings to work on the problem. Congratulations to Matt!
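(A quick reminder of the terms in play, standard definitions only and nothing specific to Hastings’s construction: the adiabatic algorithm slowly interpolates from an easy “driver” Hamiltonian to a “problem” Hamiltonian whose ground state encodes the answer, and a Hamiltonian is stoquastic, i.e. has no sign problem, when all of its off-diagonal entries in the computational basis are real and non-positive, which is exactly the property that lets Quantum Monte Carlo simulate it without sign cancellations.)

```latex
% Standard definitions only, not Hastings's specific construction.
% Adiabatic interpolation from a driver Hamiltonian H_B to a problem Hamiltonian H_P,
% with the schedule parameter s swept slowly from 0 to 1:
H(s) \;=\; (1-s)\,H_B \;+\; s\,H_P .

% "Stoquastic" (no sign problem): every off-diagonal matrix element, taken in the
% computational basis, is real and non-positive:
\langle x \,|\, H(s) \,|\, y \rangle \;\le\; 0 \qquad \text{for all basis states } x \neq y .
```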

(2) In my 2009 paper “Quantum Copy-Protection and Quantum Money,” I introduced the notion of copy-protected quantum software: a state |ψ_f⟩ that you could efficiently use to evaluate a function f, but not to produce more states (whether |ψ_f⟩ or anything else) that would let others evaluate f. I gave candidate constructions for quantumly copy-protecting the simple class of “point functions” (e.g., recognizing a password), and I sketched a proof that quantum copy-protection of arbitrary functions (except for those efficiently learnable from their input/output behavior) was possible relative to a quantum oracle. Building on an idea of Paul Christiano, a couple weeks ago my PhD student Jiahui Liu, Ruizhe Zhang, and I put a preprint on the arXiv improving that conclusion, to show that quantum copy-protection of arbitrary unlearnable functions is possible relative to a classical oracle. But my central open problem remained unanswered: is quantum copy-protection of arbitrary (unlearnable) functions possible in the real world, with no oracle? A couple days ago, Ananth and La Placa put up a preprint where they claim to show that the answer is no, assuming that there’s secure quantum Fully Homomorphic Encryption (FHE) of quantum circuits. I haven’t yet understood the construction, but it looks plausible, and indeed closely related to Barak et al.’s seminal proof of the impossibility of obfuscating arbitrary programs in the classical world.

(3) Speaking of Boaz Barak: he, Chi-Ning Chou, and Xun Gao have a new preprint about a fast classical way to spoof Google’s linear cross-entropy benchmark for shallow random quantum circuits (with a bias that degrades exponentially with the depth, remaining detectable up to a depth of say ~√(log n)). As the authors point out, this by no means refutes Google’s supremacy experiment, which involved a larger depth. But along with other recent results in the same direction (e.g. this one), it does show that some exploitable structure is present even in random quantum circuits. Barak et al. achieve their result by simply looking at the marginal distributions on the individual output qubits (although the analysis to show that this works gets rather hairy). Boaz had told me all about this work when I saw him in person—back when traveling and meeting people in person was a thing!—but it’s great to see it up on the arXiv.
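To make the “look at the marginals” idea concrete, here is a toy numerical sketch of my own, not the Barak et al. algorithm: it brute-force simulates a tiny 1D random circuit with numpy and reads the single-qubit marginals directly off the exact distribution, whereas the whole point of their paper is that for shallow circuits those marginals can be computed efficiently classically (and their analysis handles the 2D case). The linear XEB score below is F = 2^n·E_x[p(x)] − 1, estimated over whichever samples x you feed it; sampling each output qubit independently from its marginal already gives a noticeably positive score at small depth.

```python
# Toy illustration only (not the Barak-Chou-Gao algorithm): spoof the linear
# cross-entropy benchmark by sampling each output qubit independently from its
# marginal.  Here the marginals are read off a brute-force simulation of a small
# 1D random circuit; the real paper computes them efficiently for shallow circuits.
import numpy as np

rng = np.random.default_rng(0)
n, depth = 6, 4                      # small enough to simulate by brute force

def random_unitary_2():
    """Haar-random 2x2 unitary via QR decomposition."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, u, q):
    """Apply a single-qubit unitary u to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

# Build the ideal output distribution of a shallow random circuit.
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for layer in range(depth):
    for q in range(n):                               # random single-qubit gates
        state = apply_1q(state, random_unitary_2(), q)
    for q in range(layer % 2, n - 1, 2):             # alternating brickwork of CZs
        state = apply_cz(state, q, q + 1)
probs = np.abs(state) ** 2
probs /= probs.sum()                                 # guard against round-off

def linear_xeb(samples):
    """Linear XEB score F = 2^n * E_x[p(x)] - 1, estimated over the samples."""
    return 2 ** n * probs[samples].mean() - 1

# Single-qubit marginals Pr[qubit q = 1] of the ideal distribution.
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)[::-1]) & 1   # shape (2^n, n), MSB first
marginals = probs @ bits

# "Spoofed" samples: each qubit drawn independently from its marginal.
m = 20000
spoof_bits = (rng.random((m, n)) < marginals).astype(int)
spoof_samples = spoof_bits @ (2 ** np.arange(n)[::-1])

ideal_samples = rng.choice(2 ** n, size=m, p=probs)
uniform_samples = rng.integers(2 ** n, size=m)

print("ideal   XEB:", linear_xeb(ideal_samples))     # the score honest sampling achieves
print("spoofed XEB:", linear_xeb(spoof_samples))     # typically well above 0 at low depth
print("uniform XEB:", linear_xeb(uniform_samples))   # about 0
```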

(4) Peter and Raphaël Clifford have announced a faster classical algorithm to simulate BosonSampling. To be clear, their algorithm is still exponential-time, but for the special case of a Haar-random scattering matrix, n photons, and m=n input and output modes, it runs in only ~1.69^n time, as opposed to the previous bound of ~2^n. The upshot is that, if you want to achieve quantum supremacy using BosonSampling, then either you need more photons than previously thought (maybe 90 photons? 100?), or else you need a lot of modes (in our original paper, Arkhipov and I recommended at least m~n^2 modes for several reasons, but naturally the experimentalists would like to cut any corners they can).
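Just to spell out the bookkeeping (my own back-of-the-envelope arithmetic, not a claim from the Cliffords’ paper): since 1.69^n = 2^(n·log₂1.69) ≈ 2^(0.76n), matching whatever classical cost n photons bought you under the old 2^n bound now takes roughly n·log(2)/log(1.69) ≈ 1.3n photons, ignoring polynomial prefactors and every experimental consideration.

```python
# Back-of-the-envelope only: if ~2^n operations were the old classical cost of
# simulating n photons, how many photons give the same cost under the new ~1.69^n
# algorithm?  Ignores polynomial prefactors, error rates, and everything else.
import math

ratio = math.log(2) / math.log(1.69)          # ~1.32 "new" photons per "old" photon
for n_old in (50, 60, 70):
    n_new = math.ceil(n_old * ratio)
    print(f"2^{n_old} old cost  ->  ~{n_new} photons needed under the 1.69^n bound")
```

Which is at least consistent with the “90 photons? 100?” ballpark above, if the old target was somewhere around 70 photons.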

And what about my own “research program”? Well yesterday, having previously challenged my 7-year-old daughter Lily with instances of comparison sorting, Eulerian tours, undirected connectivity, bipartite perfect matching, stable marriage, factoring, graph isomorphism, unknottedness, 3-coloring, subset sum, and traveling salesman, I finally introduced her to the P vs. NP problem! Even though Lily can’t yet formally define “polynomial,” let alone “algorithm,” I’m satisfied that she understands something of what’s being asked. But, in an unintended echo of one of my more controversial recent posts, Lily insists on pronouncing NP as “nip.”