Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and Others – A Summary of Some Skeptical Views On Quantum Computing.

Combinatorics and more 2025-02-17

In this post, I provide links, references, and a brief discussion of the skeptical views regarding quantum computing held by Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and a few others. In the next post I will briefly describe my own skeptical view, and move on to give counterarguments by proponents of quantum computing. This pair of posts is a spinoff of a planned post that deals with a variety of quantum mysteries, many of which intersect with quantum computation and quantum information.

The debate on quantum computing

The question of whether quantum computation is possible is among the most fascinating open scientific questions of our time. I hold the view that building scalable quantum computers—and even achieving some early milestones on the path to quantum computing—is impossible. Skeptical perspectives on quantum computing have existed since the inception of the field. For instance, in the early 1990s, concerns raised by Rolf Landauer, William Unruh, and others helped motivate the development of quantum error correction and the “threshold theorem”; and in 1996, shortly after the threshold theorem was proved, Serge Haroche and Jean-Michel Raimond wrote a paper, “Quantum computing: dream or nightmare?”, that raised concerns about the feasibility of quantum error correction and fault-tolerant quantum computation.

Personally, I am very interested not only in uncovering the “ground truth” about our physical world but also in understanding the diverse perspectives on this issue. Over the years, I have had engaging discussions with numerous colleagues. In this post, I primarily give the floor to my fellow skeptics. Without further ado, let us begin with Robert Alicki.

Robert Alicki’s papers and viewpoint

The earliest skeptical paper by Alicki on quantum computing that I am aware of is from 2000.
 

a) Alicki’s papers from 2000 and the connection to the Heisenberg Energy-Time uncertainty

R. Alicki, “Can quantum computer perform better than classical?” In this paper Alicki argued (among other things) that the Heisenberg energy-time uncertainty may put some limits on quantum computation. There is a subsequent paper, “On Non Efficiency of Quantum Computer”, which further developed this connection.
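
For context, here is the relation in question (my addition; Alicki’s papers develop more refined versions of it). The Heisenberg energy-time uncertainty relation is commonly written as

$\Delta E \cdot \Delta t \gtrsim \hbar/2,$

which says, roughly speaking, that resolving an energy difference $\Delta E$ requires an evolution time of at least $\hbar/(2\Delta E)$. Constraints of this flavor are what Alicki suggested might limit the speed and precision attainable in quantum computation.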

It is interesting to note that the 2016 paper “Fast-forwarding of Hamiltonians and Exponentially Precise Measurements” by Yosi Atia and Dorit Aharonov presents a very strong violation of the energy-time uncertainty relation (referred to by Yosi and Dorit as a “misconception”) based on Shor’s factorization algorithm. 

 

b) Critique of the mathematical assumptions of the “threshold theorem”

In several papers Alicki analyzed models of decoherence and criticized the mathematical assumptions of the threshold theorem as physically unrealistic. In a 2004 paper, “Quantum error correction fails for Hamiltonian models”, Robert argued that “the existing schemes of fault-tolerant quantum computation designed for discrete-time models and based on quantum error correction fail for continuous-time Hamiltonian models even with Markovian noise.” The 2005 paper Internal Consistency of Fault-Tolerant Quantum Error Correction in Light of Rigorous Derivations of the Quantum Markovian Limit (authors: R. Alicki, D. A. Lidar, P. Zanardi) raises concerns about the physical consistency of the mathematical assumptions of the threshold theorem; it includes a special section listing possible objections to the claimed inconsistencies. (Thermodynamics plays a crucial role in Alicki, Lidar, and Zanardi’s analysis.) There is an additional one-page 2007 paper by Robert with a critique of the assumptions of two then-recent fault-tolerance papers. Robert wrote several other related papers relevant to the criticism of quantum computers, proposing models for quantum noise, and some papers (with Michal Horodecki) analyzing Kitaev’s proposed states. (I will come back to some of these papers and interesting related discussions toward the end of this post.)
 

c) A “no-go theorem” based on thermodynamics

I am aware of two additional papers by Alicki that give a thermodynamic argument for the impossibility of quantum computing. The first paper is: R. Alicki and M. Horodecki, Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems (2006). The paper asserted that thermodynamics implies that the set of effectively stabilized states of a large system always possesses a classical structure (a simplex) and hence cannot support quantum information. It was motivated by the theory of KMS states.

In the next paper, Quantum memory as a perpetuum mobile? Stability vs. reversibility of information processing (2009), Alicki acknowledged that many physicists do not find the argument convincing and that some physicists doubt the relevance of thermodynamics altogether. The paper contains an interesting discussion of the relevance of thermodynamics to quantum information processing.

d) “Critique of fault-tolerant quantum information processing”

This paper summarized some of Alicki’s earlier arguments and proposed some new points of view. It appeared in a 2012 book edited by Lidar and Brun. The introduction asserts that quantum computing “presents a bold challenge to the rather well established principles called the Strong Church-Turing Thesis and the Bohr Correspondence Principle.” The paper goes on to critically discuss the threshold theorems for fault-tolerant quantum computation and the idea of quantum memory based on topological degrees of freedom.
 

e) Alicki’s reaction to the 2019 “quantum supremacy” experiment

In September 2019, the Google quantum supremacy claims became publicly known (the paper was leaked from NASA servers a month before publication). A few days later I wrote about my early impressions of the paper and its fantastic claims, and I also asked Robert what he thought about the excitement. Robert responded (September 2019):

Generally, I am more pessimistic than you. I think that QC is a new type of postmodern science based on hype, self-promotion and lack of serious criticism.

Robert relied on his own bad experience with a 2009 Nature paper by Ansmann et al., “Violation of Bell’s inequality in Josephson phase qubits.” Robert claimed to have found serious flaws in that paper and wrote, “I could not believe that respectable scientists were able to do such things”. “For example,” Robert wrote, “they claim the 244(!!!) standard deviation violation of Bell inequality while in fact the violation was of the order of experimental error.”

On the technical issues Robert wrote:

I still believe that I have strong arguments against fault-tolerance in QC which exclude e.g. useful implementations of Shor algorithm. I cannot exclude some temporal “windows of quantum supremacy” for certain very specific tasks similarly to the windows of “analog supremacy” in the 60-ties.  However, I do not believe that this is the case.  My doubts are based on the experience with gates implemented by quantum dots. I have written a paper (PRA 2004) with Horodecki’s and some experts in quantum dots from Wroclaw showing that the error per 1-qubit gate cannot be smaller than 0.1%. I am sure that superconducting qubits cannot do better because the ultimate physical mechanisms of decoherence are based on the same electromagnetic interactions. Taking into account that multiqubit errors are more malicious I think that such accuracy (claimed to be reached by Google QC) is too low for quantum supremacy.

A subsequent paper by Alicki proposed a classical analog demonstration of “quantum supremacy”, following Google’s 2019 quantum supremacy paper.


Top left: with Michel Dyakonov (Jerusalem, 2014); top right: with Robert Alicki (Gdynia, 2014); bottom left: Oded Goldreich; bottom middle: with Michael Ben-Or, Dorit Aharonov, and Robert Alicki (Jerusalem, 2005); bottom right: Leonid Levin. (Click to enlarge.)

 

Leonid Levin’s point of view

Leonid Levin wrote a short (very dense and sarcastic) essay, Polynomial Time and Extravagant Models (Section 2 of his 2003 article The Tale of One-Way Functions).

Here is a quote from Leonid’s paper: 

The major problem is the requirement that basic quantum equations hold to multi-hundredth if not millionth decimal positions where the significant digits of the relevant quantum amplitudes reside. We have never seen a physical law valid to over a dozen decimals. Typically, every few new decimal places require major rethinking of most basic concepts. Are quantum amplitudes still complex numbers to such accuracies or do they become quaternions, colored graphs, or sick-humored gremlins? I suspect physicists would doubt even the laws of arithmetic pushed that far. In fact, we know that the most basic laws cannot all be correct to hundreds of decimals: this is where they stop being consistent with each other!

Leonid concludes this part of his paper with the following statement: “The rest of the article ignores any extravagant models and stands fully committed to the Polynomial Overhead Church–Turing Thesis: Any computation that takes t steps on an s-bit device can be simulated by a Turing Machine in $s^{O(1)}t$ steps within $s^{O(1)}$ cells.”

And here is what Leonid wrote to me:

(2013) “Your conjectures are interesting, and refreshing on the background of mindless wishful thinking of most QC community. The question of WHAT EXACTLY is the obstacle to quantum computing is very interesting (for physics, not for CS).

Yet, if your conjectures are refuted or circumvented, other obstacles would come instead.”

 
(2015) “My skepticism is not based on specific obstacles. When I read Harry Potter or Tolkien, I do not bother with finding specific obstacles for implementing the tools envisioned in these wonderful books. Simple computations show that QC ideas are far beyond the realms for which we have any evidence. So this is Sci.Fi in my view with no need of specific obstacles.

And yet it is an interesting thought experiment that may provoke deeper understanding of QM.”

See also this 1994(!) posting by Leonid.

Oded Goldreich’s point of view

Oded Goldreich holds the view (On Quantum Computing, 2005, and an earlier version from 2004) that quantum computers represent a fantastical possibility in a wondrous universe, a possibility that is unlikely to be credible.

Oded starts his essay by making the following claims: “Firstly, the Laws of QM are merely a refutable model (or assumption) about the real world, they are not the reality itself nor can they ever be proved to provide a full description of reality.” And, “Secondly, as far as I know (and here I may be wrong), QM says that certain things are not impossible, but it does not say that every thing that is not impossible is indeed possible.” 

Here is another quote of Oded:

From my perspective/understanding of TOC (theory of computing), the QC is weird model unworthy of study. It tells us nothing about what we are really interested in. (Still, at rare times, it may be of help (like result is area A may sometimes allow to solve a problem in area B, or just inspire thinking).)

Oded also writes: “I wish to stress that I am not concerned with the question of whether or not QC that factor integers of the size used in commercial applications of RSA will be built in our life-time. I am concerned of the question of whether the (simplified, in my mind) model of QC (with unit cost operations) is an adequate model of reality. Needless to say, this question seems to lie more in the domain of physics than in that of computer science.”

He also discusses the analogy between randomized computation and quantum computation: “At times, I heard the believers of QC draw an analogy between QC and randomized computation. In my opinion, this analogy is highly misleading. The key point about QC is the ability to actually maintain a huge vector of (exponentially many) “possibilities”, whereas a randomized algorithm merely maintains one possibility (indeed selected at random) in each time. The probability space of the execution of a probabilistic algorithm is a mental experiment taking place in the analysis (not in reality).”
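
To make Oded’s contrast concrete, here is a minimal sketch (my own illustration, not from his essay): a classical simulation that keeps the full quantum state of n qubits must store $2^n$ amplitudes, while a run of a randomized algorithm only ever occupies the single execution path that was actually sampled.

```python
# Illustrative only: memory needed to track a full n-qubit state vector
# versus one sampled run of a randomized algorithm.

def state_vector_bytes(n_qubits: int) -> int:
    """A full n-qubit state has 2**n complex amplitudes
    (16 bytes each as complex128)."""
    return (2 ** n_qubits) * 16

def sampled_run_bytes(n_bits: int) -> int:
    """One execution path of a randomized algorithm is just
    one n-bit string."""
    return (n_bits + 7) // 8

for n in (10, 30, 50):
    print(f"n = {n:2d}: full state ~ {state_vector_bytes(n):.2e} bytes, "
          f"one sampled path ~ {sampled_run_bytes(n)} bytes")
```

Already at n = 50 the full state vector needs about $1.8 \times 10^{16}$ bytes, while one sampled path fits in 7 bytes; this is the gap Oded is pointing at.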

Here is a post about Oded’s 60th birthday conference.

Michel Dyakonov’s point of view

Michel Dyakonov describes his point of view in his 2020 book Will We Ever Have a Quantum Computer? Dyakonov regards building quantum computers as a ridiculously hard technological task and as an absurd idea from the physics point of view. Dyakonov notes that in his view the progress so far is very limited: quantum computers cannot honestly factor 21 into 7×3. He describes his main points as follows (a small numerical illustration of the threshold mechanism in point 1 appears after the list):

  1. The hopes for scalable quantum computing are founded entirely on the “threshold theorem”: once the error per qubit per gate is below a certain value, indefinitely long computations are possible.
  2. The mathematical proof of the threshold theorem heavily relies on a number of assumptions supposed to be fulfilled exactly, as axioms.
  3. In the physical world nothing can be exact when one deals with continuous quantities. For example, it is not possible to have zero interaction between qubits; it can only be small.
  4. Since the basic assumptions cannot be fulfilled exactly, the question is: What is the required precision with which each assumption should be fulfilled?
  5. Until this crucial question is answered, the prospects of scalable quantum computing will remain very doubtful.
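
Here is a small numerical illustration of the mechanism behind point 1 (a hedged sketch of the textbook concatenated-code scaling; the threshold value of 1% below is a hypothetical number chosen for illustration, not a claim about any specific scheme): below threshold, each level of concatenation squares the rescaled error rate, so the logical error rate falls doubly exponentially; above threshold it blows up instead.

```python
# Textbook concatenated-code heuristic: after k levels of concatenation,
# p_logical ~ p_th * (p / p_th) ** (2 ** k).
# P_TH is a hypothetical threshold value, for illustration only.

P_TH = 1e-2

def logical_error_rate(p: float, levels: int) -> float:
    return P_TH * (p / P_TH) ** (2 ** levels)

for p in (1e-3, 2e-2):  # one physical rate below threshold, one above
    rates = ", ".join(f"{logical_error_rate(p, k):.2e}" for k in range(4))
    print(f"p = {p:.0e} (levels 0..3): {rates}")
```

Dyakonov’s points 2–4 concern exactly the idealizations hiding inside such a formula: the proof assumes the error model behind it holds precisely, while physical noise parameters are continuous and never exact.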

The book followed several earlier articles which had already raised these main points.

Dyakonov regards the task of building quantum computers as ridiculously difficult and compares it to building houses with one-atom-thin walls and teaching donkeys to speak English.

Wolfram, ’t Hooft, Baez, Laughlin, Leggett…

Other scientists who have expressed skepticism about quantum computing include Stephen Wolfram, Gerard ’t Hooft, Robert B. Laughlin, and John Baez. (Anthony Leggett proposed an explanation for the impossibility of related quantum states.) I will add links to their views here if they become available to me.

More recent quantum skepticism: Svozil, Gourianov, McGuinness, and Vardi

In the 2017 article Delivering on a quantum promise by Karl Svozil in Physics World, Svozil writes, “While many of the quantum manifesto’s short- and medium-term goals appear feasible, some of the long-term goals might not be achievable even in principle.” (In contrast, my own skeptical view is that some crucial short-term goals, primarily creating the necessary quantum error-correcting codes, are infeasible.)

Nikita Gourianov’s Financial Times article The Quantum Computing Bubble is discussed in this Shtetl-Optimized post.

Moshe Vardi gives a very nice description of the two sides of the debate and concludes with the declaration “Count me a quantum skeptic!” in his 2019 article Quantum Hype and Quantum Skepticism in Communications of the ACM. (My reading is that Moshe neither endorses nor dismisses the skeptical point of view that quantum computing is infeasible; he expresses skepticism about other aspects of the grand technological and commercial vision of quantum computers.)

Liam McGuinness’s skepticism is related to the Heisenberg limit in quantum metrology and related quantum physics laws. To present McGuinness’s opinion, let me offer a very brief introduction to quantum sensing: using $N$ independent measurements based on $N$ particles offers an improvement of a factor $\sqrt{N}$ in the precision. This improvement is referred to as the “standard quantum limit”. There are further claims, both theoretical and empirical, that if these $N$ particles are entangled, an improvement by a factor of $N$ compared to a single particle can be reached. This factor-$N$ improvement using entanglement is referred to as the “Heisenberg limit”.
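
As a sanity check on the $\sqrt{N}$ statement, here is a minimal Monte Carlo sketch (my own illustration, covering only the unentangled, standard-quantum-limit case): averaging $N$ independent noisy measurements shrinks the uncertainty of the estimate like $1/\sqrt{N}$.

```python
import numpy as np

# Standard quantum limit, classically illustrated: the spread of the
# average of N independent noisy measurements scales like 1/sqrt(N).

rng = np.random.default_rng(0)
TRUE_PHASE = 0.7  # arbitrary quantity being measured

def estimator_std(n: int, trials: int = 4000) -> float:
    samples = TRUE_PHASE + rng.normal(0.0, 1.0, size=(trials, n))
    return samples.mean(axis=1).std()

for n in (1, 100, 10_000):
    print(f"N = {n:6d}: empirical std {estimator_std(n):.4f}, "
          f"1/sqrt(N) = {n ** -0.5:.4f}")
```

The claimed Heisenberg limit is the stronger statement that entangling the $N$ particles improves this scaling to $1/N$; that entangled case is precisely what McGuinness disputes, and it is not simulated here.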

McGuinness argues (against the prevalent view, as well as a large body of experimental claims) that entanglement does not offer a substantial advantage for sensing and does not allow an improvement over the “standard quantum limit” (towards the “Heisenberg limit”). He also argues that quantum computation is out of reach, since (in Liam’s opinion) it would require exceeding even the Heisenberg limit, exponentially. McGuinness’s reasoning does not refer to noise; see his paper The case against entanglement improved measurement precision.

Pictures!

My 2014 post Pictures from Recent Quantum Months has some pictures with Robert Alicki, Michel Dyakonov, Michal Horodecki, Yuri Gurevich, John Sidles, Connie Sidles, and Rico Picone. Here is a picture with Scott Aaronson and Guy Kindler (2015). There are two pictures of Aharonov-Alicki-Ben-Or-Kalai, from 2005 and 2013.

The claim of intractable engineering barriers. (Possible in principle and not in practice.)

Is it possible that something works in theory but not in practice? This is a cool philosophical question. 

Immanuel Kant argued that this is not possible in his essay “Über den Gemeinspruch: Das mag in der Theorie richtig sein, taugt aber nicht für die Praxis” (“On the common saying: That may be correct in theory, but it is of no use in practice”, 1793) (English translation); Scott Aaronson made a similar argument in his post Mistake of the Week: “X works on paper, but not in the real world”. Aram Harrow offered quite the opposite view, writing in our debate:

“There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling and interacting large numbers of qubits. We may find it technically possible to do this, but too costly to justify the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer.”

Current optimism in the community (written before the Willow announcement).

My impression is that people in the quantum computing community are quite optimistic about the future of quantum computers. Here is a nice recent post from SO: Quantum Computing: Between Hope and Hype (and my comment), and here is a post that I wrote (among other things) about Dorit Aharonov’s lecture, in which she expressed optimism that skeptics (and specifically I) will have to change our minds in view of recent experimental progress.

Some technological breakthroughs from other areas that may give reasons for hope

Two recent technological breakthroughs that can serve as “role models” for quantum computing are: a) the move from CPUs to GPUs, which did wonders for neural networks and deep learning; b) the success of anti-missile missiles, and perhaps also laser-based anti-missile systems. (Apropos anti-missile missiles, see this post about Chomskian linguistics.)

Chaos and quantum mechanics

Both Alicki and Dyakonov mentioned in their writings that the tension between chaos in classical physics and the behavior of (ideal) quantum systems could be related to an explanation for the impossibility of quantum computing. Chaos is also related to my theory: noise sensitivity, which plays a role in my argument, is a mathematical theory related to chaotic phenomena in physics. (This theory is different from Lorenz’s mathematical theory of chaos and has roots in early work of Wiener on chaos; see the section on Wiener chaos and Lorenz chaos in this post.)
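
For readers who want to see noise sensitivity in action, here is a minimal Monte Carlo sketch (my own illustration, using the standard definition for Boolean functions; it is not taken from the post linked above): perturb each input bit independently with probability $\epsilon$ and estimate how often the output flips. Parity is maximally noise-sensitive, while majority is comparatively noise-stable.

```python
import numpy as np

# Noise sensitivity of a Boolean function f: estimate Pr[f(x) != f(y)],
# where x is uniform and y flips each bit of x independently with prob eps.

rng = np.random.default_rng(1)

def noise_sensitivity(f, n_bits: int, eps: float, trials: int = 50_000) -> float:
    x = rng.integers(0, 2, size=(trials, n_bits))
    y = x ^ (rng.random((trials, n_bits)) < eps)
    return float(np.mean(f(x) != f(y)))

def majority(v):  # 1 if more than half the bits are 1
    return v.sum(axis=1) * 2 > v.shape[1]

def parity(v):  # XOR of all bits
    return v.sum(axis=1) % 2

for name, f in (("majority", majority), ("parity", parity)):
    print(f"{name}: Pr[output flips] ~ {noise_sensitivity(f, 101, 0.05):.3f}")
```

With 101 bits and $\epsilon = 0.05$, parity’s output flips in nearly half the trials (it is essentially decorrelated by the noise), while majority’s output flips only rarely; this stability-versus-sensitivity contrast is the flavor of chaotic behavior referred to above.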