Debate about “quantum supremacy” . . . whatever that is!

Statistical Modeling, Causal Inference, and Social Science 2025-03-14

Combinatorialist Gil Kalai points to these papers, in which he and his colleagues cast doubt on claims of “quantum supremacy” by some Google researchers:

  1. Y. Rinott, T. Shoham, and G. Kalai, Statistical Aspects of the Quantum Supremacy Demonstration, Statistical Science (2022).
  2. G. Kalai, Y. Rinott, and T. Shoham, Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion.
  3. G. Kalai, Y. Rinott, and T. Shoham, Questions and Concerns About Google’s Quantum Supremacy Claim (see this post).
  4. G. Kalai, Y. Rinott, and T. Shoham, Random circuit sampling: Fourier expansion and statistics.

Kalai adds:

An earlier paper that I wrote: G. Kalai, The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims, discusses quantum computers, my general theory regarding quantum computers, and some aspects of the Google experiment.

I know nothing about this, but it seemed like the sort of thing that might interest some of you, so I’m sharing it.

There’s also a connection to the replication crisis, as the claim about quantum supremacy turns upon the evaluation of stochastic data.

Why not just simulate the process a billion times and find out (almost) surely?

The thing I didn’t get from the above description is: Why can’t the original researchers just run their algorithm a billion times, so that any stochastic claim becomes essentially deterministic?

I asked Kalai this question and here’s his response:

Quick background: The setting of the “quantum supremacy” claims is based on a sampling task. The probability space has size M = 2^n, where n is the number of qubits.

Based on samples (usually of length 500,000) produced by the quantum computer, where each sample is a 0-1 vector of length n, the researchers concluded that the sample is “sufficiently close” to the target distribution, and that achieving this is not something accessible to a classical computer when n is large.

Now, to your question:

On the practical level: one of the early concerns was that the samples are not large enough for us to understand the empirical distribution. (For small values of n we overcame this difficulty and managed to show that the empirical distribution is quite different from any proposed specific model.)

So far, the researchers have not provided larger samples (and in a few replications the researchers provided smaller samples).

So the practical answer to the question “why can’t the original researchers just run their algorithm a billion times, so that any stochastic claim becomes essentially deterministic?” is simply that the original researchers are not willing to do it.

On the theoretical level: my theory (the general theory about quantum computers and not the work scrutinizing specific claims) is that samples coming from quantum computers are “noise sensitive.” This means that the empirical behavior is nonstationary and even inherently unpredictable. (Probably even more than experiments on humans or animals :) )

This claim is at the heart of the matter, and it contradicts the clear intuition of many experts.
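To make the sampling task Kalai describes concrete, here’s a toy numerical sketch of the “linear cross-entropy” fidelity statistic that, as far as I can tell, the closeness-to-target claim rests on, along with a crude cartoon of what a drifting noise level would do to it. This is purely an illustration of mine, not code from Kalai’s papers or from the Google team; the exponential (“Porter-Thomas”) weights and the mixture-with-uniform noise model are simplifying assumptions for the sake of the example:

    # Toy sketch: the linear cross-entropy (XEB) fidelity estimate on a
    # probability space small enough (M = 2^n) to enumerate exactly.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 12                 # number of qubits; tiny, so M = 2^n fits in memory
    M = 2 ** n
    k = 500_000            # sample size mentioned in the post

    # Stand-in for the ideal circuit distribution: exponential ("Porter-Thomas")
    # weights, a common model for random-circuit output probabilities.
    p_ideal = rng.exponential(size=M)
    p_ideal /= p_ideal.sum()

    def xeb(samples):
        """Linear XEB estimate: M * (mean ideal probability of the samples) - 1.
        With these weights it is roughly 1 for a sampler matching the target
        and roughly 0 for a uniform (pure-noise) sampler."""
        return M * p_ideal[samples].mean() - 1

    faithful = rng.choice(M, size=k, p=p_ideal)   # sampler matching the target
    uniform = rng.integers(0, M, size=k)          # uniform (pure-noise) sampler
    print("XEB, faithful sampler:", round(xeb(faithful), 3))   # ~1
    print("XEB, uniform sampler: ", round(xeb(uniform), 3))    # ~0

    # Crude cartoon of nonstationarity: if the noise level drifts from batch to
    # batch, the per-batch fidelity estimates drift too (roughly 1 - noise here)
    # instead of settling down to a single value.
    for batch, noise in enumerate([0.1, 0.4, 0.7]):
        noisy = rng.random(k) < noise
        drifting = np.where(noisy,
                            rng.integers(0, M, size=k),
                            rng.choice(M, size=k, p=p_ideal))
        print(f"batch {batch}, noise {noise:.1f}: XEB =", round(xeb(drifting), 3))

None of this bears on who is right; it just shows what kind of quantity is being estimated from those 500,000 samples, and what a “nonstationary” version of that estimate could look like.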

Interesting. This does suggest that the Google researchers don’t really believe in their result. If you believe what you have, you’ll try to replicate it as strongly as possible. Conversely, if you treat your result as a fragile flower, this suggests you don’t completely think it’s real. Or maybe you think it’s real on some sort of deep level, but you’re not convinced you have the evidence for it.

But this is just me reasoning from Kalai’s arguments. As noted above, I don’t know anything about the topic, so it would be interesting to hear from some other experts.