Halting Is Poly-Time Quantum Provable
Gödel’s Lost Letter and P=NP 2020-01-15
How does this intersect David Deutsch’s thoughts in 1985?
Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen (JNVWY) have just posted a paper titled $\mathsf{MIP}^* = \mathsf{RE}$. The title means that multiple provers sharing quantum entanglement, given any Turing machine $M$ and string $x$ accepted by $M$, can convince a polynomial-time bounded verifier with high probability that $M$ accepts $x$. The time is polynomial in the lengths of $M$ and $x$ regardless of how long $M$ takes to halt on $x$.
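Stated schematically (our paraphrase; the soundness constant $\tfrac{1}{2}$ is illustrative rather than quoted from the paper), the result gives a polynomial-time map from any Turing machine $M$ and input $x$ to a two-prover nonlocal game $G_{M,x}$ whose entangled value detects acceptance:

\[
\omega^*(G_{M,x}) = 1 \;\text{ if } M \text{ accepts } x, \qquad \omega^*(G_{M,x}) \le \tfrac{1}{2} \;\text{ otherwise}.
\]

So a classical polynomial-time verifier playing $G_{M,x}$ with two entangled provers can be convinced, with high probability, that $M$ accepts $x$, even though no algorithm can decide this.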
Today we applaud this work and try to convey some short way of apprehending it.
Yoking a classic undecidable problem to a polynomial-time task is not the only surprise. The proof refutes a conjecture in classical functional analysis that had apparently been widely believed. Thus this story continues the theme of surprises, and of conjectures possibly resolving the opposite way from expectations, which we also just mentioned in our previous post. The new work subsumes a major paper last year showing that $\mathsf{MIP}^*$ contains nondeterministic doubly exponential time ($\mathsf{NEEXP}$), which proves it different from its classical counterpart $\mathsf{MIP}$, which László Babai, Lance Fortnow, and Carsten Lund proved equal to $\mathsf{NEXP}$.
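In complexity-class terms the progression can be summarized as follows (the strict inclusion in the middle is just the nondeterministic time hierarchy theorem):

\[
\mathsf{MIP} \;=\; \mathsf{NEXP} \;\subsetneq\; \mathsf{NEEXP} \;\subseteq\; \mathsf{MIP}^* \;=\; \mathsf{RE}.
\]

The containment $\mathsf{NEEXP} \subseteq \mathsf{MIP}^*$ is last year's result; the final equality is the new one.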
The developments have been covered by Scott Aaronson here, Lance here, Boaz Barak here, and in a personal way by Vidick here. The new paper weighs in at 165 pages. We will give our own snap summary and try to add a little from the side.
A Halting Attempt to Explain
The refuted conjecture was made by the Fields Medalist Alain Connes in a context having no overt involvement of quantum mechanics. In these 2013 eight-lecture course notes on the conjecture, the word “quantum” appears only once, to say on page 2 of lecture 1:
Other very recent discoveries include the fact that Connes’ embedding conjecture is related to an important problem in Quantum Information Theory, the so-called Tsirelson’s problem…
The problem of Boris Tsirelson ultimately harks back to the theorem of John Bell about correlations that are physically realizable using quantum entanglement but not by any classical physical system. In the CHSH game form of Bell’s theorem, our old friends Alice and Bob can win the game about 85% of the time (the quantum optimum is $\cos^2(\pi/8) \approx 0.854$) using entanglement, but at most 75% of the time with any classical strategy. They can get this with just one pair of entangled qubits per trial. In extensions of these games to larger-size cases, what are the gaps between quantum and classical?
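As a concrete check of those two numbers (a minimal sketch of the textbook CHSH strategy, not code from any of the papers discussed here), one can compute both values directly: the best classical strategy wins 3 of the 4 question pairs, while the standard one-qubit-pair entangled strategy wins with probability $\cos^2(\pi/8) \approx 0.854$.

    import numpy as np
    from itertools import product

    # Shared entangled state |Phi+> = (|00> + |11>)/sqrt(2)
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

    def basis(theta):
        # Rank-one projectors for a measurement in the real basis rotated by theta
        v0 = np.array([np.cos(theta), np.sin(theta)])
        v1 = np.array([-np.sin(theta), np.cos(theta)])
        return [np.outer(v0, v0), np.outer(v1, v1)]

    # Textbook optimal CHSH angles: Alice uses 0 or pi/4, Bob uses pi/8 or -pi/8
    alice = {0: basis(0.0), 1: basis(np.pi / 4)}
    bob = {0: basis(np.pi / 8), 1: basis(-np.pi / 8)}

    def quantum_win_probability():
        total = 0.0
        for x, y in product([0, 1], repeat=2):        # uniformly random questions
            for a, b in product([0, 1], repeat=2):    # provers' answers
                if a ^ b == (x & y):                  # CHSH winning condition
                    m = np.kron(alice[x][a], bob[y][b])
                    total += 0.25 * (phi @ m @ phi)
        return total

    def classical_win_probability():
        # Best of the 16 deterministic strategies; shared randomness cannot do better
        best = 0.0
        for fa in product([0, 1], repeat=2):          # Alice's answers to x = 0, 1
            for fb in product([0, 1], repeat=2):      # Bob's answers to y = 0, 1
                wins = sum((fa[x] ^ fb[y]) == (x & y)
                           for x, y in product([0, 1], repeat=2))
                best = max(best, wins / 4.0)
        return best

    print(quantum_win_probability())    # about 0.8536 = cos^2(pi/8)
    print(classical_win_probability())  # 0.75

Restricting to deterministic strategies suffices for the classical value because shared randomness is just a convex combination of deterministic choices and so cannot beat the best one.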
Whether there is a gap of more than a fixed $\epsilon$ then feeds into interactive protocols. We can have parties trying to prove these gaps using their own entanglement. Where it gets tricky is when you allow Alice and Bob to use larger and larger quantum states and ask: can they achieve the gap with some large enough state? The limiting behavior of the gaps is complex. What JNVWY proved is that this becomes like a halting problem. Not just a halting problem, the Halting Problem. Yet two quantum provers, working for a given gap that is achievable, can prove this to a polynomial-time classical verifier. This is the magic of the theorem.
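In the standard formulation of Tsirelson’s problem (our notation, not the paper’s), one compares two values of a nonlocal game $G$: the supremum over finite-dimensional tensor-product strategies and the value over commuting-operator strategies,

\[
\omega^*(G) \;=\; \sup_{d<\infty}\ \sup_{\text{strategies on } \mathbb{C}^d \otimes \mathbb{C}^d} \Pr[\text{win}],
\qquad
\omega_{qc}(G) \;=\; \sup_{\text{commuting-operator strategies}} \Pr[\text{win}],
\]

with $\omega^*(G) \le \omega_{qc}(G)$ always. Connes’ embedding conjecture would force equality for every $G$; the new result implies there is a game with $\omega^*(G) < \omega_{qc}(G)$, which is why the conjecture falls.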
The reduction from halting to the problem about limits and gaps comes before the introduction of two-prover systems, as is reflected in JNVWY’s paper and also in the wonderful introduction of a 2017 paper by William Slofstra, which they reference. In advance of saying more about it, we’ll remark that the new work may provide a new dictionary for translating between (i) issues of finite/infinite precision and other continuous matters, and (ii) possible evolutions of a system of finite size in discrete steps of time and size, where both are unbounded but (in positive cases) finite.
A Flashback?
The result strikes Dick and me as shedding new light on a principle stated by David Deutsch in a 1985 paper:
Every finitely realisable physical system can be perfectly simulated by a universal model computing machine operating by finite means.
I was a student at Oxford alongside Deutsch in 1984–1985, and I remember more going on than the searchable record seems to reflect. Deutsch believed that his model of a quantum computer could solve the Halting problem in finite time. He gave at least one talk at the Oxford Mathematical Institute on that claim. As far as I know the claim stayed local to Oxford and generated intense discussion led by Robin Gandy, Roger Penrose, and (if I recall) Angus Macintyre and at least one other person who was versed in random physical processes.
My recollection is that the nub of the technical argument turned on a property of infinite random sequences that, when hashed out, made some associated predicates decidable, so that Deutsch’s functions were classically total computable after all. Thus the hypercomputation claim was withdrawn.
Now, however, I wonder whether the two-prover system constitutes the kind of “machine” that Deutsch was intuitively thinking of. As I recall, his claim was not deemed wrong from first principles but from how theorems about random sequences interacted with machine model definitions. The theory of interactive provers as computational systems was then in its infancy. Could Deutsch have had some inkling of it?
Open Problems
Again we congratulate JNVWY on this achievement of a long-term research goal. Looking at the past, does it relate to the discussion of hypercomputation stemming from the 1980s? We mean a stronger connection than treated here or in this 2018 paper. Is it much different from cases where “the mystery … vanishes when the level of precision is explicitly taken into account” (quoting this)? Looking ahead, is there any connection to the physical issues of infinity in finite time that we recently discussed here?