Local Hams in La Jolla

Gödel’s Lost Letter and P=NP 2018-08-02

A workshop on quantum computing at UCSD

AharonovGossettVidickCropped from workshop poster

Dorit Aharonov, David Gosset, and Thomas Vidick did standup for three-and-a-half days in La Jolla earlier this year. They are not listed among the many distinguished actors who have performed at the La Jolla Playhouse on the UCSD campus. Nor were they observed by any of the Hollywood talent-search agencies in town, at least not that we know. But they were applauded by numerous computer science and physics students attending the March 19–22 UCSD “Spring School on Quantum Computation” which they led.

Today we report on the meeting and discuss an important problem springing from condensed matter physics: the Local Hamiltonian (LH) problem.

Local Hams were proved to be “quantum {\mathsf{NP}}-hard” by Alexei Kitaev almost 20 years ago. Intervening years have shown even greater resonances between problems about quantum Hamiltonians and complexity classes. William Hamilton didn’t know about quantum but he did know about energy, which his Hamiltonian operators capture in both the classical and quantum worlds. He is of course the person we CS people know for Hamiltonian cycles and paths, and his quaternions have even figured into CS books like this on computer graphics. But he meant a lot more to physics.

The lectures seemed tailored toward physics students—but perhaps that is because our reporter who attended the workshop is a computer science PhD student. He is Chaowen Guan, who is supervised by Ken at Buffalo. This post introduces Chaowen as a writer for the blog, and we turn to him for the main body of this post.

Putting Energy Into NP

Given a witness predicate for a language in {\mathsf{NP}} we can make a big diagonal matrix {M} whose rows and columns correspond to potential witnesses {y}. We put a {1} in position {[y,y]} if {y} is a witness, else {0}. For any potential witness {z \in \{0,1\}^m} let {e_z} denote the corresponding binary unit vector of length {2^m}. Then we get

\displaystyle  M e_z = e_z \qquad\text{ if } z \text{ is a witness}; \qquad M e_z = 0 \qquad\text{otherwise.}

Now suppose we have {m} witness predicates {R_1,\dots,R_m}—or maybe we consider {R(x,\cdot)} for {m} different {x}-es. Let us simply add up the matrices for each one: {M = M_1 + \cdots + M_m}. Suppose {z} is a witness for {r} of them. Then we get

\displaystyle  M e_z = r\cdot e_z.

This makes {e_z} an eigenvector of {M} with eigenvalue {r}. The more witness predicates {z} satisfies, the higher {r}. In Hamilton’s terms, the system gets more “excited” by such {z} and {r} is the energy of the resonance.
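For concreteness, here is a minimal NumPy sketch of this construction. The three witness predicates on 3-bit strings and the helper name {\tt witness\_matrix} are our own toy illustrations, not anything from the workshop:

```python
import numpy as np

m = 3                              # number of witness bits
predicates = [                     # three made-up witness predicates on {0,1}^3
    lambda z: z.count("1") >= 2,   # at least two ones
    lambda z: z[0] == "1",         # first bit set
    lambda z: z == "111",          # all ones
]

def witness_matrix(pred, m):
    """Diagonal 2^m x 2^m matrix with a 1 at [y,y] iff y is a witness."""
    diag = [1 if pred(format(y, "0" + str(m) + "b")) else 0 for y in range(2**m)]
    return np.diag(diag)

M = sum(witness_matrix(p, m) for p in predicates)

z = "111"                          # satisfies all three predicates
e_z = np.zeros(2**m)
e_z[int(z, 2)] = 1.0
r = int(e_z @ M @ e_z)             # the eigenvalue of e_z under M
print(r)                           # the number of predicates z satisfies
```

Since {M} is diagonal, each binary unit vector {e_z} is an eigenvector, and its eigenvalue is exactly the number of predicates that {z} satisfies.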

Note that when we just had one witness predicate the eigenvalue was {1} for all witnesses. So we could just add up {e_z} for a bunch of witnesses {z} and the resulting vector {\mathbf{z}} still satisfies {M \mathbf{z} = 1\cdot \mathbf{z}} with the eigenvalue {1}. But with multiple possible eigenvalues we can only combine witnesses sharing the same {r}, so we have to stratify the space into eigenspaces, one for each {r}.

Abstractly, this is just rehashing basic facts from linear algebra about eigenvalues and their corresponding eigenspaces. But we gain relevance to complexity, even though {M} is exponentially big, when {M} is succinct—and when we work with “witnesses” of more-piecemeal things that are local. For instance, for each clause {C_j} in a 3CNF Boolean formula {C_1 \wedge \cdots \wedge C_\ell}, make {M_j} by putting in a {1} for each assignment {z} that satisfies {C_j}. Since only the three literals in {C_j} matter, {M_j} is the tensor product of an {8 \times 8} diagonal matrix {H_j} with the identity on the other variables. If we add up {M = \sum_{j=1}^\ell M_j} and {M \mathbf{z} = r \mathbf{z}}, then {r} tells us the number of satisfied clauses.

We can play the same game with unsatisfying assignments instead—then with {k}-CNF we put in only one {1} per clause regardless of {k}. So the matrices {H_j} are super-sparse as well as “{k}“-local. If we think of {H_j} as an operator on those {k} variables then we can sum them up without the technical crutch of saying we are really summing the matrices {M_j}. Now to do this when qubits stand for the variables we need to fix what “NP” means in the quantum world.
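The unsatisfying-assignment version can be sketched the same way. In the NumPy toy below (the 3-CNF instance and helper names are hypothetical), each penalty term puts a single {1} per clause, depends only on that clause's three variables (so it is {3}-local), and the smallest eigenvalue of the sum is the minimum number of violated clauses:

```python
import numpy as np
from itertools import product

# Clauses over variables x0..x3; a literal (i, True) means x_i, (i, False) means NOT x_i.
clauses = [[(0, True), (1, True), (2, False)],
           [(1, False), (2, True), (3, True)],
           [(0, False), (1, False), (3, False)]]
n = 4

def violates(assignment, clause):
    """True iff the assignment falsifies every literal of the clause."""
    return all(assignment[i] != want for (i, want) in clause)

def penalty(clause):
    """Diagonal term M_j with a single 1 per setting of the clause's variables."""
    return np.diag([1.0 if violates(a, clause) else 0.0
                    for a in product([False, True], repeat=n)])

H = sum(penalty(c) for c in clauses)
ground_energy = min(np.diag(H))    # H is diagonal, so its diagonal entries are its eigenvalues
print(int(ground_energy))          # minimum number of violated clauses over all assignments
```

Here the instance is satisfiable (for example {x_0 = 1}, {x_1 = x_2 = x_3 = 0}), so the ground energy is {0}.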

Quantum NP

The class most regarded as the quantum analog of {\mathsf{NP}} is named {\mathsf{QMA}}, which stands for “Quantum Merlin Arthur.” It works like a classical {\mathsf{MA}} protocol except that the witness is a quantum state and the verifier is a quantum circuit. A language {L} is in {\mathsf{QMA}} if there exists a quantum polynomial-time verifier {Q} and a polynomial {p} such that:

  • {\forall x \in L, \exists |{\phi}\rangle}, {\mathsf{Pr}}(Q accepts {(|{x}\rangle, |{\phi}\rangle)}) {\geq \frac{2}{3}};

  • {\forall x \notin L, \forall |{\phi}\rangle}, {\mathsf{Pr}}(Q accepts {(|{x}\rangle, |{\phi}\rangle)}) {\leq \frac{1}{3}}.

Here {|{\phi}\rangle} is a quantum state on at most {p(|x|)} qubits. If we restrict {|{\phi}\rangle} to be a classical witness but leave {Q} as a quantum verifier, we obtain the class Quantum Classical MA ({\mathsf{QCMA}}).

Local Hamiltonians

The matrices {H_j} are Hermitian operators, meaning that {H_j} equals its conjugate transpose {H^*_j}, and positive semi-definite. The operator norm {\| H_j \|} is the supremum of {\|H_j v\|} over all unit vectors {v}. The eigenvalues of {H_j} correspond to the allowable energies. The “local” property states that {H_j} operates on only some small constant number {k} of the {n} qubits. An operator {H} on {n} qubits is a {k}-local Hamiltonian if {H = \sum^m_{j=1} H_j} where each {H_j} acts on at most {k} qubits. Then the LH problem can be defined as the following “promise problem” for any {k}:

Given: A set {H_1, \cdots, H_m} of {k}-local Hamiltonian operators, each of operator norm at most {1} with entries specifiable by {\mathsf{poly}(n)} bits, and real numbers {a} and {b} such that {b - a > 1/ \mathrm{poly}(n)}.

Question: Is the minimum eigenvalue of the Hamiltonian {H = \sum^m_{j=1} H_j} on {n} qubits smaller than {a}, or are all eigenvalues larger than {b}?

For any {k}, this can be thought of as the quantum analog of {k}-SAT. From the above discussion and MAX-2-SAT being {\mathsf{NP}}-hard we can already see that the {k}-LH problem is {\mathsf{NP}}-hard for any {k \geq 2}: Represent the {n} variables by {n} qubits and each clause by an {H_j} acting on {k} qubits; then the vector of an assignment satisfying the clause of {H_j} has eigenvalue {0} under {H_j}, while an unsatisfying assignment gives eigenvalue {1}. Therefore, the lowest eigenvalue of the sum {H} (recall that we really mean the sum of the corresponding {M_j} matrices) equals the minimum number of violated clauses, which is complementary to the maximum number of simultaneously satisfied clauses. Getting hardness for {\mathsf{QMA}} requires some more work, however.

QMA-completeness

The following was originally proved for the constant value of {k = 5}, but {k} was brought down to 2 in this paper by Kitaev with Julia Kempe and Oded Regev.

The {k}-local Hamiltonian problem is {\mathsf{QMA}}-complete.

To prove it is in {\mathsf{QMA}}, an obvious witness for a “yes” instance is simply an eigenstate with eigenvalue smaller than {a}. For a “no” instance {H}, where every eigenvalue exceeds {b}, the amplification theorem can be used to make the acceptance gap robust.

The proof of {\mathsf{QMA}}-hardness is more complicated, so we will discuss only the main difference from the standard Cook-Levin proof. A natural witness for the history of the computation would be the sequence of states {|{x}\rangle, U_1|{x}\rangle, \cdots, U_s\cdots U_1 |{x}\rangle}, and we want to verify the consistency of consecutive states by local means as in the Cook-Levin proof. The issue is that it is possible in quantum mechanics for two vastly different states to give the same local appearance on all subsets of {k} qubits. A simple example for {n = 2} and {k=1} is that the entangled state {|{\alpha}\rangle = \frac{1}{\sqrt{2}}(|{00}\rangle + |{11}\rangle)} gives the same coin-flip on one individual qubit line as {|{\beta}\rangle = \frac{1}{2}(|{00}\rangle + |{01}\rangle + |{10}\rangle + |{11}\rangle)}, the unentangled superposition of all four basis states. In general the issue is that the scalars {\langle{\phi}| U_j |{\phi}\rangle} and {\langle{\phi'}| U_j |{\phi'}\rangle} will come out the same whenever {\phi} and {\phi'} have the same reduced density matrices on {k}-qubit sets, and yet they could be different states.

However, if the history of the computation is given in superposition, we can fight fire with fire by using entanglement to help local Hamiltonians verify the consistency. Consider the following superposition:

\displaystyle  \frac{1}{\sqrt{2}}(|{0}\rangle \otimes |{\alpha}\rangle + |{1}\rangle \otimes |{\beta}\rangle).

The resulting reduced density matrix of the first qubit will give us

\displaystyle  \frac{1}{2}(|{0}\rangle \langle{0}| + \overline{\langle{\alpha|\beta}\rangle} |{0}\rangle \langle{1}| + \langle{\alpha|\beta}\rangle |{1}\rangle \langle{0}| + |{1}\rangle \langle{1}|).

Hence, this can facilitate the comparison between {|{\alpha}\rangle} and {|{\beta}\rangle}.
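This reduced-density-matrix computation can be verified numerically. The NumPy sketch below (the states and variable names are ours) builds {\frac{1}{\sqrt{2}}(|{0}\rangle \otimes |{\alpha}\rangle + |{1}\rangle \otimes |{\beta}\rangle)}, traces out the last two qubits, and checks the formula above:

```python
import numpy as np

# |alpha> = Bell state on two qubits, |beta> = uniform superposition of all four basis states.
alpha = np.array([1, 0, 0, 1]) / np.sqrt(2)
beta  = np.ones(4) / 2.0

# |psi> = (|0>|alpha> + |1>|beta>) / sqrt(2), a state on 1 + 2 = 3 qubits.
psi = np.concatenate([alpha, beta]) / np.sqrt(2)

# Reduced density matrix of the first qubit: partial trace over the last two qubits.
rho_full = np.outer(psi, psi.conj())
rho_first = np.zeros((2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        rho_first[i, j] = np.trace(rho_full[4*i:4*i+4, 4*j:4*j+4])

overlap = alpha.conj() @ beta        # <alpha|beta>, which is 1/sqrt(2) here
expected = 0.5 * np.array([[1, np.conj(overlap)], [overlap, 1]])
print(np.allclose(rho_first, expected))   # True
```

The off-diagonal entries of the single-qubit density matrix carry {\langle{\alpha|\beta}\rangle}, which is what makes the comparison of the two states locally accessible.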

One can refer to the survey mentioned above for a detailed proof of this theorem. This paper gives a list of other known {\mathsf{QMA}}-complete problems.

To prove {\mathsf{QMA}}-completeness for {k=2,3}, a technical tool called the projection lemma was used, which enables approximating a non-local Hamiltonian by a local one. The same paper also provided a self-contained proof for {k=2} using a more advanced tool, perturbation theory, to analyze sums of Hamiltonians.

The Talks

Below is a list summarizing the talks over the meeting:

  • Day 1: The talks covered basic background on quantum mechanics and quantum computation, including the physical principles of quantum computation, the quantum circuit model, some basic quantum algorithms such as factoring, and an introduction to the LH problem.

  • Day 2:

    • Dorit presented the quantum version of the Cook-Levin theorem, proving that LH is {\mathsf{QMA}}-complete, and gave an overview of quantum error-correcting codes.

    • Thomas spoke on delegating computation on encrypted quantum data to a “quantum cloud.”

    • David spoke on the adiabatic model of quantum computation.

  • Day 3:

    • Thomas discussed quantum multiplayer games.

    • David spoke on the advantage of shallow quantum circuits over classical computation and on more restricted models of quantum computation.

    • Dorit discussed the quantum PCP conjecture and the NLTS (standing for no low-energy trivial states) conjecture. See also this paper which came out just before the workshop.

  • Day 4:

    • Thomas discussed further formulations of the quantum PCP conjecture and entangled-prover interactive proof systems.

    • David gave an overview of stoquastic Hamiltonian problems.

After lunch there was a discussion of open questions and challenges. More details and materials from the talks can be found on this page.

Open Problems

Besides the main lecture format, there were sessions of open problems. Here are some of them:

  1. Quantum unique games conjecture: The wonderful classical Unique Games Conjecture (UGC) has motivated and enabled much work on computational complexity since its advent. Is there a quantum version of UGC (related to {\mathsf{QMA}}) that can help advance our understanding of quantum computational complexity as well? (Updated 07/06/2018: This paper showed that unique games become easy when the provers share entanglement.)

  2. Classical verifier in quantum interactive proof systems: It requires exponential resources (as far as we know) to simulate a quantum mechanical system in general. Hence, an interesting and important open question is whether an entirely classical verifier is able to check the correctness of quantum evolutions or the “quantum-ness” of some given data. Some work has been done on interactive proofs whose verifier is classical apart from a few qubits. (Updated 07/06/2018: Shortly after the workshop, this work made major progress.)

  3. Fault-tolerant quantum computation with constant overhead: Currently the best constructions incur a poly-logarithmic overhead. A paper by Daniel Gottesman showed the possibility of fault-tolerant quantum computation with constant error rate and constant overhead using a family of low-density parity check (LDPC) codes. A recent paper motivated by Gottesman’s work claims that the family of quantum expander codes can meet the requirements (stated here), but it also mentions that additional analysis of syndrome errors needs to be done before these codes can be employed in the construction of Gottesman’s paper. Hence, the construction of such a family of quantum LDPC codes remains open.