Turn down the quantum volume
Shtetl-Optimized 2020-03-05
Several people asked me to comment on the recent announcement by Honeywell that they’ll soon have what they call “the most powerful” quantum computer (see here for press release, here for Forbes article, here for paper).
I’m glad that Honeywell, which many people might know as an air-conditioner manufacturer, has entered the race for trapped-ion QC. I wish them success. I’ve known about what they were doing in part because Drew Potter, my friend and colleague in UT Austin’s physics department, took a one-year leave from UT to contribute to their effort.
Here I wanted to comment about one detail in Honeywell’s announcement: namely, the huge emphasis on “quantum volume” as the central metric for judging quantum computing progress, and the basis for calling their own planned device the “most powerful.” One journalist asked me to explain why quantum volume is such an important metric. I had to give her an honest answer: I don’t know if it is.
Quantum volume was invented a few years ago by a group at IBM. According to one of their papers, it can be defined roughly as 2^k, where k is the largest number such that you can run a k-qubit random quantum circuit, with depth k and with any-to-any connectivity, and have at least (say) 2/3 probability of measuring an answer that passes some statistical test. (In the paper, they use what Lijie Chen and I called Heavy Output Generation, though Google’s Linear Cross-Entropy Benchmark is similar.)
I don’t know why IBM takes the “volume” to be 2^k rather than k itself. Leaving that aside, though, the idea was to invent a single “goodness measure” for quantum computers that can’t be gamed either by building a huge number of qubits that don’t maintain nearly enough coherence (what one might call “the D-Wave approach”), or by building just one perfect qubit, or by building qubits that behave well in isolation but don’t interact easily. Note that the any-to-any connectivity requirement makes things harder for architectures with nearest-neighbor interactions only, like the 2D superconducting chips being built by Google, Rigetti, or IBM itself.
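To make the definition concrete, here’s a rough numpy sketch of the heavy-output test, under simplifying assumptions of my own (a single random circuit per size, Haar-random two-qubit gates on randomly chosen pairs, and “device noise” modeled by mixing the ideal output distribution with the uniform one). This is not IBM’s official benchmarking procedure, just the flavor of it:

```python
# Rough sketch of the heavy-output test behind quantum volume -- not IBM's
# official benchmark. Noise model: mix the ideal distribution with the uniform
# distribution at some fidelity. Each layer pairs up the qubits at random
# (any-to-any connectivity) and applies a Haar-random two-qubit gate per pair.
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary, via QR decomposition of a complex Gaussian matrix."""
    z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))          # fix column phases to get the Haar measure

def apply_two_qubit_gate(state, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits q1, q2 of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, [q1, q2], [0, 1]).reshape(4, -1)
    psi = gate @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), [0, 1], [q1, q2])
    return psi.reshape(-1)

def ideal_distribution(k, rng):
    """Ideal output distribution of a depth-k random circuit on k qubits."""
    state = np.zeros(2 ** k, dtype=complex)
    state[0] = 1.0
    for _ in range(k):                               # k layers
        pairing = rng.permutation(k)                 # any-to-any connectivity
        for i in range(0, k - 1, 2):
            gate = haar_unitary(4, rng)
            state = apply_two_qubit_gate(state, gate, pairing[i], pairing[i + 1], k)
    return np.abs(state) ** 2

def heavy_output_fraction(k, fidelity=0.9, shots=2000, seed=0):
    """Fraction of sampled outputs that are 'heavy' (above-median ideal probability)."""
    rng = np.random.default_rng(seed)
    ideal = ideal_distribution(k, rng)
    heavy = ideal > np.median(ideal)
    noisy = fidelity * ideal + (1 - fidelity) / 2 ** k   # crude noise model
    samples = rng.choice(2 ** k, size=shots, p=noisy)
    return heavy[samples].mean()

# log2(quantum volume) is then the largest k for which this stays above 2/3:
for k in range(2, 7):
    print(k, round(heavy_output_fraction(k), 3))
```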
You know the notion of a researcher’s h-index—defined as the largest h such that she’s published h papers that garnered h citations each? Quantum volume is basically an h-index for quantum computers. It’s an attempt to take several different yardsticks of experimental progress, none terribly useful in isolation, and combine them into one “consumer index.”
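In code, with made-up citation counts, the h-index is just this:

```python
# The h-index on hypothetical citation counts, just to make the analogy concrete.
def h_index(citations):
    """Largest h such that at least h of the papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with at least 4 citations each
```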
Certainly I sympathize with the goal of broadening people’s focus beyond the “but how many qubits does it have?” question—since the answer to that is meaningless without further information about what the qubits can do. From that standpoint, quantum volume seems like a clear step in the right direction.
Alas, Goodhart’s Law states that “as soon as a measure becomes a target, it ceases to be a good measure.” That happened years ago with the h-index, which now regularly pollutes academic hiring and promotion decisions, to the point where its inventor expressed regrets. Quantum volume is now looking to me like another example of Goodhart’s Law at work.
The position of Honeywell’s PR seems to be that, if they can build a device that can apply 6 layers of gates to 6 qubits, with full connectivity and good fidelity, that will then count as “the world’s most powerful quantum computer,” since it will have the largest volume. One problem here is that such a device could be simulated by maintaining a vector of only 2^6 = 64 amplitudes. This is nowhere near quantum supremacy (i.e., beating classical computers at some well-defined task), which is a necessary though not sufficient condition for doing anything useful.
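To see just how small that is, here’s the entire classical memory footprint involved (nothing here is specific to Honeywell’s planned device, just numpy bookkeeping):

```python
# Quick numerical check of the claim above: a 6-qubit pure state is a vector
# of 2**6 = 64 complex amplitudes, about a kilobyte at double precision.
import numpy as np

state = np.zeros(2 ** 6, dtype=np.complex128)             # all 64 amplitudes
state[0] = 1.0                                            # start in |000000>
print(state.size, "amplitudes,", state.nbytes, "bytes")   # 64 amplitudes, 1024 bytes
```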
Think of a university that achieves an average faculty-to-student ratio of infinity by holding one class with zero students. It gets the “best score” only by exploiting an obvious defect in the scoring system.
So what’s the alternative? The policy that I prefer is simply to tell the world all your system specs, as clearly as you can, with no attempts made to bury the lede. How many qubits do you have? With what coherence times? With what connectivity? What are the 1- and 2-qubit gate fidelities? What depth of circuit can you do? What resources do the standard classical algorithms need to simulate your system? Most importantly: what’s the main drawback of your system, the spec that’s the least good, that you most need to improve? What prevents you from having a scalable quantum computer right now? And are you going to tell me in your public announcement, or will you make me search in Appendix III.B of your paper, or worse yet, ask one of your competitors?
I confess that the answers to the above questions are harder to summarize in a single number (unless we, like, concatenated binary encodings of the answers or something). But they can be ineffably combined, to produce a progress metric that one of my postdocs suggested calling “quantum scottness,” and which roughly equals the number of expressions of wide-eyed surprise minus the number of groans.