Quantum Switch-Em

Gödel’s Lost Letter and P=NP 2019-09-27

A recipe for changing the objectives of problems

Composite crop of src1, src2, src3

Aram Harrow, Avinatan Hassidim, and Seth Lloyd are quantum stars who have done many other things as well. They are jointly famous for their 2009 paper, “Quantum algorithm for linear systems of equations.”

Today Ken and I discuss an aspect of their paper that speaks to a wider context.

The paper by Harrow, Hassidim, and Lloyd addresses a fundamental problem: given an {N \times N} matrix {A} and an {N}-vector {b}, solve {Ax = b}. This is called the Linear Systems Problem (LSP). They say in their abstract:

Here, we exhibit a quantum algorithm for this task that runs in {poly(log N)} time, an exponential improvement over the best classical algorithm.

What strikes us is what HHL meant by “this task” in their abstract. It is not LSP. Indeed, it can’t be LSP. They want to do it with a number {n} of qubits that is {\ll N}. But the solution {x} is a length-{N} vector, which in general has at least {N} bits of information. No matter how much you entangle and wrangle, you cannot extract more than {n} bits of information out of {n} qubits. So even if {A} is {O(n)}-sparse in some sense, and even ignoring the time it would take to output the {N} entries of {x}, you cannot solve LSP with only {n} qubits: you cannot get all of {x} out of them in the first place.

The problem they solve has been given its own name in subsequent treatments, including a 2015 paper by Andrew Childs, Robin Kothari, and Rolando Somma that greatly improved the time to achieve {\epsilon} precision:

Quantum Linear Systems Problem (QLSP): Given a succinct representation of an invertible Hermitian matrix {A}, code for a unitary operator {B} that maps {0^n} to a quantum state {|b\rangle}, and an error tolerance {\epsilon}, output a quantum state {|\xi\rangle} such that

\displaystyle  ||\;|\xi\rangle - |\frac{x}{||x||_2}\rangle\;||_2 < \epsilon, \qquad\text{where}\qquad x = A^{-1}b.

The {|\cdot\rangle} symbol is called a ket in Paul Dirac’s bra-ket notation. “Ketting” a problem to our mind means accepting not only an approximate answer but also a loss of information that might apply to inputs as well as outputs.

It’s a Changed Problem

What a solution to QLSP gives you is not {x} but a quantum state {|\xi\rangle} that approximates the unit-vector multiple of {x}. As HHL say earlier in their abstract, this changed problem is useful if what you really want is not {x} but some other value {f(x)} derived from {x}. They say for example that desired information could come from a quantum measurement of {x}—that is, of {|\xi\rangle}.
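For concreteness, here is a minimal classical sketch in Python (a toy instance of ours, not code from the paper) of the gap between the two outputs: LSP asks for the full vector {x}, while QLSP asks only for something close to the unit vector {x/||x||_2}.

```python
import numpy as np

# Toy instance (made up for illustration; real QLSP instances have
# N = 2^n with A given succinctly and b given as a quantum state).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # invertible Hermitian (here: real symmetric)
b = np.array([1.0, 2.0])

# LSP output: the full solution vector x.
x = np.linalg.solve(A, b)

# QLSP target: only the direction of x as a unit vector, i.e. a
# classical stand-in for the state |x / ||x||_2>; the norm is lost.
ket_x = x / np.linalg.norm(x)

print("LSP solution x:", x)
print("QLSP target   :", ket_x)
```

Of course the quantum algorithm never writes down {x}; the point is only what the two problems promise as output.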

This blog did have a 2012 post on “lazy problem solving” that included the quip:

The idea is that when we solve a system {Ax = b} for {x}, it is likely that we do not really want {x}. We want some information about {x}. For example, we might wish to know the norm of {x}. The question we plan to address is, are there algorithms that can do this faster than solving the entire linear system?

Ironically, the problem we suggested there, which is computing {\sum_i x_i^2} up to multiplicative error {1 \pm \epsilon}, is destroyed by normalizing {x} into a quantum state {|\xi\rangle}. Oh well. Other posts have considered changing the objectives of algorithms, but the general idea involved in the formulation of QLSP strikes us anew, and in ways different from Scott Aaronson’s caveats in a wonderful review he wrote of HHL titled “Quantum Machine Learning Algorithms: Read the Fine Print.”
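To see the ironic loss concretely, here is a two-line illustration (the values are ours): solution vectors with very different values of {\sum_i x_i^2} collapse to the identical normalized state.

```python
import numpy as np

x1 = np.array([3.0, 4.0])        # sum of squares = 25
x2 = np.array([300.0, 400.0])    # sum of squares = 250,000

# Both normalize to the same unit vector [0.6, 0.8], so the quantity
# sum_i x_i^2 = ||x||_2^2 cannot be recovered from |xi> alone.
print(x1 / np.linalg.norm(x1))
print(x2 / np.linalg.norm(x2))
```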

Going from LSP to QLSP switches the problem and enables showing that a quantum algorithm can solve the switched problem much faster than before. Very nice: HHL and its followups deserve their myriad citations. Scott’s review frames the question of whether the switch is fair from a practical perspective, including the restrictions on {A} and {b}. Perhaps the jury is still out on this: we’ve mentioned several times the nifty result by Ewin Tang that found a quick classical solution to a problem downstream of HHL that had been thought to require quantum. But switching a problem for research is fair; we do this all the time in theory. We hope that the modified problem helps us understand the original problem. Let’s look at some examples.

Ketted Problems

We think the general recipe will be clear from a couple more examples. The first one seems not to be the same as what is called “QSAT” here or here.

{\bullet} “Ketted SAT”: Given a CNF formula {\Phi} with {2^n} variables but {n^{O(1)}}-sized clauses, and {\epsilon > 0}, output a quantum state {|\xi\rangle} such that for some satisfying assignment {x}, {||\;|\xi\rangle - |\frac{x}{||x||_2}\rangle ||_2 \leq \epsilon}.

We still suspect this is known at least implicitly in the context of interactive proofs and PCPs and error-correcting codes. In that context, the idea of getting an assignment within distance {\epsilon} of a satisfying one might make sense. For other purposes it is not so useful—e.g., it might not help utilize the self-reducibility structure of SAT. It may simply be equivalent to known succinct-SAT problems after applying the coding used to obtain holographic proofs, but we don’t immediately see it.

Note that the fact of {\Phi} being in CNF with small clause size (which can even be constant) mirrors the sparseness condition on {A} that is usually assumed in cases of HHL. So the analogy to QLSP holds up there. At least “ketted SAT” is a different problem.
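To make the definition concrete, here is a brute-force sketch at toy scale (the toy formula, the candidate state, and all names are our own): it checks whether a candidate {|\xi\rangle} is within {\epsilon} of the ket of some satisfying assignment.

```python
import itertools
import numpy as np

N_VARS = 4     # 2^n variables with n = 2; real instances are exponentially larger
EPS = 0.1

# A clause is a list of (variable index, required value) pairs.
# Toy formula: (x0 or x1) and (not x2 or x3).
clauses = [[(0, 1), (1, 1)],
           [(2, 0), (3, 1)]]

def satisfies(a, clauses):
    return all(any(a[i] == v for i, v in clause) for clause in clauses)

def ket(a):
    """The 'ketted' assignment |a / ||a||_2> as a plain unit vector."""
    x = np.array(a, dtype=float)
    return x / np.linalg.norm(x)

# Candidate output state (normalized below).
xi = np.array([1.0, 1.0, 0.0, 0.05])
xi = xi / np.linalg.norm(xi)

# Is xi within EPS of the ket of SOME satisfying assignment?
# (The all-zero assignment is skipped since it has no ket.)
ok = any(satisfies(a, clauses) and np.linalg.norm(xi - ket(a)) <= EPS
         for a in itertools.product([0, 1], repeat=N_VARS) if any(a))
print("valid Ketted-SAT output:", ok)   # True: xi is close to the ket of (1,1,0,0)
```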

{\bullet} “Ketted Factoring”: Given a really large integer {M}, output a quantum state {|\pi\rangle} such that for some nontrivial (prime) factor {p} of {M}, {||\;|\pi\rangle - k(p)|| < \epsilon}, where {k(p) = \sum_{i: bin(p,i) = 1} |i\rangle}.

We can suppose that we have oracle lookup to bits of {M}, so that all the bits can be placed into quantum superposition. We could alternatively encode {M} as the quantum state {k(M) = \sum_{i: bin(M,i) = 1} |i\rangle}, but then the problem might not be well-defined because {k(M)} cannot retain all info about {M}.

This is not in the shadow of Shor’s algorithm because that needs {M} to have {O(n)} digits in order eventually to get a factor exactly. The quantumized version applies when {M} is really big. Again, it is not clear that the solutions {|\pi\rangle} would be useful for things like breaking cryptosystems. They might represent points of cryptographic weakness in some detailed sense. In any event, it’s a different problem.
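As a toy illustration of the encoding (the numbers are ours, and the definition above writes {k(p)} without a normalizing factor; we normalize here so that it is a unit vector, an assumption on our part), here is {k(p)} as a plain vector for a small factor:

```python
import numpy as np

def k(m, n_bits):
    """Stand-in for k(m) = sum over i with bin(m, i) = 1 of |i>,
    normalized here to a unit vector (our assumption)."""
    bits = np.array([(m >> i) & 1 for i in range(n_bits)], dtype=float)
    return bits / np.linalg.norm(bits)

M = 221    # toy "large" integer, M = 13 * 17
p = 13     # a nontrivial prime factor; binary 1101, so bits 0, 2, 3 are set
assert M % p == 0

ket_p = k(p, n_bits=8)
print("support of k(13):", np.nonzero(ket_p)[0].tolist())   # [0, 2, 3]
print("k(13):", np.round(ket_p, 3))
```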

More General Changed Problems

With all of these there are questions about how far the change reflects on the original problem, but at least the change generated substantial research and interesting new angles.

The Goldbach problem

Every even number greater than {2} is the sum of two primes {~\rightarrow~} Every even number greater than {2} is the sum of two almost-primes.

A number is an almost prime if it is a prime or a product of at most two primes. This is a quite useful result, but not what we really believe is true.
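A minimal sketch of the two notions side by side (our own quick check, just to ground the definitions), verifying small even numbers both ways:

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def is_almost_prime(m):
    # a prime, or a product of at most two primes (4, 6, 9, 10, 15, ...)
    if is_prime(m):
        return True
    return any(m % p == 0 and is_prime(m // p)
               for p in range(2, m) if is_prime(p))

# Check both statements for small even numbers (no surprises expected:
# Goldbach has been verified numerically far beyond this range).
for n in range(4, 31, 2):
    assert any(is_prime(a) and is_prime(n - a) for a in range(2, n - 1))
    assert any(is_almost_prime(a) and is_almost_prime(n - a) for a in range(2, n - 1))
print("both hold for even numbers 4..30")
```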

P=NP

{\mathsf{P}} not equal to {\mathsf{NP}} {~\rightarrow~} {\mathsf{P}^{A}} not equal to {\mathsf{NP}^{A}}.

This says that if we relativize to some oracle {A}, then {\mathsf{P}} and {\mathsf{NP}} are not equal. This is a famous result of Theodore Baker, John Gill, and Robert Solovay. It sheds light on the structure of algorithms, but does not rule out that {\mathsf{P}} could still equal {\mathsf{NP}}.

The Collatz Problem

The {3x+1} problem.

Let {Col(N)} be {N/2} when {N} is even, and {(3N+1)/2} when {N} is odd. The problem is whether the orbit

\displaystyle  N \rightarrow Col(N) \rightarrow Col^{2}(N) \rightarrow \dots

reaches {1} for all positive integers {N}. The change is:

All positive integers {~\rightarrow~} Almost all.

Actually, Terence Tao got a cool result this year only by making a second change. In his paper “Almost All Orbits Of The Collatz Map Attain Almost Bounded Values” he proves that {Colmin(N) < f(N)} for almost all {N} and any function {f(N)} that tends to infinity. Thus

\displaystyle  f(N) = \log \log \log \log N

is fine. Here {Colmin(N)} is the smallest integer that is reached in the orbit of {N}. It is scary how hard this is to prove.
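For reference, a minimal sketch of the map and of {Colmin} (function names are ours; note that computing {Colmin(N)} this way already presumes the orbit reaches {1}, which is exactly the open question):

```python
def col(n):
    """One step of the Collatz map as defined above."""
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def colmin(n):
    """Smallest integer reached in the orbit of n.
    The loop terminates only if the orbit hits 1 -- the open question."""
    smallest = n
    while n != 1:
        n = col(n)
        smallest = min(smallest, n)
    return smallest

print([colmin(n) for n in range(1, 13)])   # all 1's for these small n
```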

Explicit Hard Boolean Functions

Find explicit Boolean functions with non-linear Boolean complexity.

Boolean complexity with negation {~\rightarrow~} Boolean complexity without negation.

These are the seminal results on monotone boolean complexity. They came up two years ago in an abortive attempt to prove {\mathsf{P} \neq \mathsf{NP}}, which we covered here.

Quantum Supremacy

We come full circle back to quantum computing. This week there has been much hubbub over a prematurely released announcement that Google researchers have built a quantum device able to complete probabilistic searches that are not feasible by any classical computer. If one goes back to the origins of the term “quantum supremacy” in 2012, before Google’s approach was conceived in 2015, one can say the change is:

specific physical simulation or complexity-based problems {~\rightarrow~} randomized problems.

Scott has a great post with preliminary descriptions and evaluations, and we confess to adapting the following telegraphic evocation of how it works from Ryan O’Donnell and others in the comments section: Given a randomly-generated probability distribution {D} on {\{0,1\}^n}, call {x} “{D}-heavy” if {D} gives it probability at least {b/2^n} for some fixed {b > 1}. Even given {D} in white-box form via an {n}-qubit quantum circuit {C} with randomly placed gates, it is evidently hard (compare this) for a classical computer to find heavy strings with frequency {> 1/e + \delta}, concretely for {n = 53}. Google’s quantum machine, which is effectively programmed by the user presenting the random {C}, can however find heavy strings with frequency {1/e + \Delta}, with a highly significant separation between {\Delta} and {\delta}. We surmise that exhaustive tests of classical circuits attempting the search at smaller values of {n} have witnessed the smaller {\delta} value at those {n}.
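Here is a toy numerical sketch of the gap (our own illustration, assuming the output probabilities of a random circuit follow the Porter–Thomas, i.e. exponential, distribution, and taking {b = 1} as the borderline of the threshold above): guessing strings uniformly hits heavy strings with frequency about {1/e}, while sampling from {D} itself hits them with frequency about {2/e}.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # toy register size (the actual experiment used n = 53)
N = 2 ** n
b = 1.0                      # heaviness threshold b / 2^n; b = 1 gives the 1/e baseline

# Porter-Thomas-style model: output probabilities proportional to i.i.d. exponentials.
p = rng.exponential(scale=1.0, size=N)
D = p / p.sum()
heavy = D > b / N            # the D-heavy strings

# Stand-in for a classical baseline: guess strings uniformly at random.
uniform_freq = heavy.mean()          # fraction of heavy strings, about 1/e
# Stand-in for the ideal sampler: draw strings according to D itself.
ideal_freq = D[heavy].sum()          # probability mass on heavy strings, about 2/e

print(f"uniform guessing: {uniform_freq:.3f}  (1/e is about 0.368)")
print(f"sampling from D : {ideal_freq:.3f}  (2/e is about 0.736)")
```

The real classical-hardness claim is of course about circuits that try to do better than uniform guessing; this sketch only visualizes why {1/e} is the natural baseline and why sampling from {D} clears it by a wide margin.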

Again, this approach is different from demonstrating a specific computable function to separate classical and quantum, and from the instances considered in the concluding post, with “supremacy” in its title, of our eight-part series in 2012 between Gil Kalai and, yes, Aram Harrow.

Open Problems

Is our general notion of “ketted problems” useful? Ketted SAT and Factoring lack the natural continuity in (Q)LSP, but at least for SAT it can be grafted on by composing the formula with an error-correcting code. There is still the issue that only limited classical information can be extracted from the ketted approximations.

We note a theory workshop being held this Saturday at UW Seattle in honor of Paul Beame’s 60th birthday. We congratulate Paul on both.

[added Aug. 2019 CACM link for Ewin Tang’s theorem]