Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits

Jack Krupansky
153 min read · Feb 24, 2021


NISQ has been a great way to make a lot of rapid progress in quantum computing, but its limitations, particularly its noisiness and lack of robust, automatic, and transparent error correction, preclude it from being a viable path to true, dramatic, and compelling quantum advantage for compute-intensive applications which can be developed by non-elite developers and which would simply not be feasible using classical computing. Absent perfect qubits, automatic and transparent quantum error correction (QEC) is needed to achieve fault-tolerant qubits — logical qubits — to support fault-tolerant quantum computing (FTQC). This informal paper will explore many of the issues and questions involved with achieving fault-tolerant logical qubits, but will stop short of diving too deep into the very arcane technical details of quantum error correction itself.

The emphasis in this informal paper is on preliminary thoughts, with no attempt to be completely comprehensive and fully exhaustive of all areas of logical qubits, quantum error correction, and fault-tolerant quantum computing. I have endeavored to cover all of the issues and questions of concern to me, but others may have additional issues and questions.

To be clear, this informal paper won’t endeavor to do a deep dive on the technical details of quantum error correction. The focus here is the impact of QEC and logical qubits on algorithms and applications. That said, various details of QEC will invariably crop up from time to time. Great detail about QEC can be found in the technical papers listed in the References and bibliography section of this paper.

This is a very long paper (over 100 pages), but the first 15 to 20 pages or so, including the In a nutshell section, can be consulted for a brief high-level view of the topic of this paper.

Topics to be covered in this paper:

  1. The problem in a nutshell
  2. The problem
  3. NISQ has served us well, but much more is needed
  4. The basic story for fault-tolerant quantum computing
  5. Putting the magnitude of the problem in perspective
  6. My motivation
  7. I’m still not an expert on quantum error correction theory
  8. My intentions
  9. Apology — we’re just not there yet, or even close
  10. In a nutshell
  11. Quantum advantage is the real goal and logical qubits are the only way to get there
  12. Dramatic and compelling quantum advantage is needed
  13. Requirements to enable dramatic quantum advantage
  14. Quantum error correction is needed to realize the unfulfilled promise of quantum computing
  15. The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there
  16. Qubit reliability
  17. Qubit fidelity
  18. Error rate
  19. Types of errors
  20. NISQ — Noisy Intermediate-Scale Quantum devices
  21. Technically quantum computers with fewer than 50 qubits are not NISQ devices
  22. But people tend to refer to all current quantum computers as NISQ devices
  23. NSSQ is a better term for current small-scale quantum computers
  24. Fault-tolerant quantum computing and fault-tolerant qubits
  25. Fault-free vs. fault-tolerant
  26. FTQC — fault-tolerant quantum computing
  27. Logical qubit
  28. Fault-tolerant logical qubit
  29. Qubit as abstract information vs. qubit as a physical device or physical representation of that abstract information
  30. No logical qubits on NISQ devices
  31. Fault-free logical qubits
  32. Near-perfect qubits
  33. Virtually perfect qubits
  34. Fault tolerance vs. quantum error correction vs. logical qubits
  35. To my mind, progress in perfecting qubits is the best way to go
  36. Classical ECC
  37. Metaphor of ECC for classical computers
  38. Stabilized qubit
  39. Stable qubit
  40. Data qubit
  41. Stabilizer qubit
  42. Coherence extension
  43. Quantum memory
  44. Technical prerequisites for quantum error correction and logical qubits
  45. Technical requirements for quantum error correction and logical qubits
  46. Theory vs. design and architecture vs. implementation vs. idiosyncrasies for quantum error correction of each particular quantum computer
  47. I’ve changed my view of quantum error correction
  48. My own preference is for near-perfect qubits over overly-complex quantum error correction
  49. Manual error mitigation and correction just won’t cut it
  50. Quantum error mitigation vs. quantum error correction
  51. Manual, explicit “error correction” (error mitigation)
  52. Automatic quantum error correction
  53. Quantum error correction is inherently automatic, implied, and hidden (transparent) while error mitigation is inherently manual, explicit, and visible
  54. Noise-resilient and noise-aware techniques
  55. Quantum error correction is still a very active research area — not even yet a laboratory curiosity
  56. Twin progressions — research on quantum error correction and improvements to physical qubits
  57. Quantum threshold theorem
  58. Focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available to achieve dramatic quantum advantage
  59. Still need to advance algorithms to 30–40 qubits using ideal simulators
  60. NISQ vs. fault-tolerant and near-perfect, small-scale, and large-scale
  61. NSSQ — Noisy Small-Scale Quantum devices
  62. NISQ — Noisy Intermediate-Scale Quantum devices
  63. NLSQ — Noisy Large-Scale Quantum devices
  64. NPSSQ — Near-Perfect Small-Scale Quantum devices
  65. NPISQ — Near-Perfect Intermediate-Scale Quantum devices
  66. NPLSQ — Near-Perfect Large-Scale Quantum devices
  67. FTSSQ — Fault-Tolerant Small-Scale Quantum devices
  68. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices
  69. FTLSQ — Fault-Tolerant Large-Scale Quantum devices
  70. What is post-NISQ?
  71. When will post-NISQ begin?
  72. Post-noisy is a more accurate term than post-NISQ
  73. But for most uses post-NISQ will refer to post-noisy
  74. Vendors need to publish roadmaps for quantum error correction
  75. Vendors need to publish roadmaps for near-perfect qubits
  76. Likely that 32-qubit machines can achieve near-perfect qubits for relatively short algorithms within a couple of years
  77. Unlikely to achieve 32 logical qubits for at least five years
  78. Levels of qubit quality
  79. Possible need for co-design to achieve optimal hardware design for quantum error correction
  80. Top 10 questions
  81. Additional important questions
  82. Kinds of questions beyond the scope or depth of this paper
  83. Top question #1: When will quantum error correction and logical qubits be practical?
  84. Top question #2: How much will hardware have to advance before quantum error correction becomes practical?
  85. Top question #3: Will quantum error correction be truly 100% transparent to quantum algorithms and applications?
  86. Top question #4: How many physical qubits will be needed for each logical qubit?
  87. Citations for various numbers of physical qubits per logical qubit
  88. Formulas from IBM paper for physical qubits per logical qubit
  89. For now, 65 physical qubits per logical qubit is as good an estimate as any
  90. Top question #5: Does quantum error correction guarantee absolute 100% perfect qubits?
  91. Top question #6: Does quantum error correction guarantee infinite coherence?
  92. Top question #7: Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?
  93. Top question #8: Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?
  94. Top question #9: What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?
  95. Top question #10: How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?
  96. Do we really need quantum error correction if we can achieve near-perfect qubits?
  97. Will qubits eventually become good enough that they don’t necessarily need quantum error correction?
  98. Which will win the race, quantum error correction or near-perfect qubits?
  99. When will logical qubits be ready to move beyond the laboratory curiosity stage of development?
  100. How close to perfect is a near-perfect qubit?
  101. How close to perfect must near-perfect qubits be to enable logical qubits?
  102. How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?
  103. When can we expect near-perfect qubits?
  104. Are perfect qubits possible?
  105. How close to perfect will logical qubits really be?
  106. But doesn’t IonQ claim to have perfect qubits?
  107. When can we expect logical qubits of various capacities?
  108. When can we expect even a single logical qubit?
  109. When can we expect 32 logical qubits?
  110. What is quantum error correction?
  111. What is a quantum error correcting code?
  112. Is NISQ a distraction and causing more harm than good?
  113. NISQ as a stepping stone to quantum error correction and logical qubits
  114. What is Rigetti doing about quantum error correction?
  115. Is it likely that large-scale logical qubits can be implemented using current technology?
  116. Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?
  117. What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?
  118. What do the wave functions of logical qubits look like?
  119. Are all of the physical qubits of a single logical qubit entangled together?
  120. How many wave functions are there for a single logical qubit?
  121. For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?
  122. What is the performance cost of quantum error correction?
  123. What is the performance of logical qubit gates and measurements relative to NISQ?
  124. How is a logical qubit initialized, to 0?
  125. What happens to connectivity under quantum error correction?
  126. How useful are logical qubits if still only weak connectivity?
  127. Are SWAP networks still needed under quantum error correction?
  128. How does a SWAP network work under quantum error correction?
  129. How efficient are SWAP networks for logical qubits?
  130. What are the technical risks for achieving logical qubits?
  131. How perfectly can a logical qubit match the probability amplitudes for a physical qubit?
  132. Can probability amplitude probabilities of logical qubits ever be exactly 0.0 or 1.0 or is there some tiny, Planck-level epsilon?
  133. What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?
  134. Does the stability of a logical qubit imply greater precision or granularity of quantum state?
  135. Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?
  136. Do trapped-ion qubits need quantum error correction?
  137. Can simulation of even an ideal quantum computer be the same as an absolutely perfect classical quantum simulator since there may be some residual epsilon uncertainty down at the Planck level for even a perfect qubit?
  138. How small must single-qubit error (physical or logical) be before nobody will notice?
  139. What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?
  140. What is the impact of quantum error correction on granularity of phase and probability amplitude?
  141. What are the effects of quantum error correction on phase precision?
  142. What are the effects of quantum error correction on probability amplitude precision?
  143. What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?
  144. How are multi-qubit product states realized under quantum error correction?
  145. What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?
  146. At which stage(s) of the IBM quantum roadmap will logical qubits be operational?
  147. Does the Bloch sphere have any meaning or utility under quantum error correction?
  148. Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?
  149. Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?
  150. Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?
  151. Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?
  152. How are gates executed for all data qubits of a single logical qubit?
  153. How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?
  154. Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?
  155. How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?
  156. How exactly does quantum error correction actually address measurement errors?
  157. Does quantum error correction really protect against gate errors or even measurement errors?
  158. Will quantum error correction approaches vary based on the physical qubit technology?
  159. Is the quantum volume metric still valid for quantum error correction and logical qubits?
  160. Is the quantum volume metric relevant to perfect logical qubits?
  161. What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?
  162. Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?
  163. Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?
  164. How is quantum error correction not a violation of the no-cloning theorem?
  165. Is quantum error correction too much like magic?
  166. Who’s closest to real quantum error correction?
  167. Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?
  168. Are logical qubits guaranteed to have infinite coherence?
  169. What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?
  170. Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?
  171. Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?
  172. Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?
  173. Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?
  174. Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?
  175. What are the potential tradeoffs for quantum error correction and logical qubits?
  176. How severely does quantum error correction impact gate execution performance?
  177. How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?
  178. Are there other approaches to logical qubits than strict quantum error correction?
  179. How many logical qubits are needed to achieve quantum advantage for practical applications?
  180. Is it any accident that IBM’s latest machine has 65 qubits?
  181. What is a surface code?
  182. Background on surface codes
  183. What is the Steane code?
  184. How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?
  185. What is magic state distillation?
  186. Depth d is the square root of physical qubits per logical qubit in a surface code
  187. What are typical values of d for a surface code?
  188. Is d = 5 really optimal for surface codes?
  189. What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?
  190. Prospects for logical qubits
  191. Google and IBM have factored quantum error correction into the designs of their recent machines
  192. NISQ simulators vs. post-NISQ simulators
  193. Need for a paper showing how logical qubit gates work on physical qubits
  194. Need detailed elaboration of basic logical qubit logic gate execution
  195. Need animation of what happens between the physical qubits during correction
  196. Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits
  197. Near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications
  198. Likely need logical qubits to achieve the FORTRAN moment
  199. Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction
  200. Readers should suggest dates for various hardware and application milestones
  201. Call for applications to plant stakes at various logical qubit milestones
  202. Reasonable postures to take on quantum error correction and logical qubits
  203. Hardware fabrication challenges are the critical near-term driver, not algorithms
  204. Need to prioritize basic research in algorithm design
  205. Need for algorithms to be scalable
  206. Need for algorithms which are provably scalable
  207. How scalable is your quantum algorithm?
  208. Classical simulation is not possible for post-NISQ algorithms and applications
  209. Quantum error correction does not eliminate the probabilistic nature of quantum computing
  210. Shot count (circuit repetitions) is still needed even with error-free logical qubits — to develop probabilistic expectation values
  211. Use shot count (circuit repetitions) for mission-critical applications on the off chance of once in a blue moon errors
  212. We need nicknames for logical qubit and physical qubit
  213. Competing approaches to quantum error correction will continue to evolve even after initial implementations become available
  214. I care about the effects and any side effects or collateral effects that may be visible in algorithm results or visible to applications
  215. Need for a much higher-level programming model
  216. What Caltech Prof. John Preskill has to say about quantum error correction
  217. Getting beyond the hype
  218. I know I’m way ahead of the game, but that’s what I do, and what interests me
  219. Conclusions
  220. What’s next?
  221. Glossary
  222. References and bibliography
  223. Some interesting notes

The problem in a nutshell

In short, quantum computation is plagued by three problems:

  1. Quantum state of qubits dissipates rapidly — circuit depth is very limited.
  2. Operations on qubits (quantum logic gates) are imperfect.
  3. Measurement of qubits is imperfect.

Without addressing these problems, quantum computing will not be able to achieve dramatic and compelling quantum advantage, which is the only reason to pursue quantum computing over classical computing.

The problem

Computing, of any type, classical or quantum, requires reliability and some degree of determinism or predictability.

Unfortunately, qubits are notoriously finicky, especially for NISQ devices, which are by definition noisy — that is, prone to frequent errors.

The noisiness of NISQ devices results in lots of errors and not getting valid or consistent results often enough.

Various techniques can be used to compensate for errors, such as shots or circuit repetitions, where the same exact quantum circuit is run repeatedly and the results analyzed statistically to determine the most likely result.
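To make the shot-count idea concrete, here is a minimal illustrative sketch in Python (the measurement outcomes are made up, standing in for noisy results from repeated runs of a single circuit; this is not any vendor’s API):

```python
from collections import Counter

# Toy illustration of "shots": run the same circuit many times and take the
# statistically most likely measurement outcome. These bitstrings are invented,
# standing in for noisy measurement results.
shots = ["101", "101", "111", "101", "001", "101", "100", "101", "101", "101"]

histogram = Counter(shots)
most_likely, count = histogram.most_common(1)[0]
print(f"most likely result: {most_likely} ({count} of {len(shots)} shots)")
```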

And the error rate gradually drops as quantum hardware continues to be refined.

Eventually, the error rate of quantum hardware may be low enough to dramatically reduce invalid and inconsistent results, but results will continue to be problematic for many applications.

There are two broad areas of errors:

  1. Errors which occur within individual qubits, even when completely idle.
  2. Errors which occur when operations are performed on qubits. Qubits in action.

There are many types of errors, the most common being:

  1. Decoherence. Gradual decay of values over time.
  2. Gate errors. Each operation on a qubit introduces another degree of error.
  3. Measurement errors. Simply measuring a qubit has some chance of failure.

There are many sources of errors, the most common being:

  1. Environmental interference. Even despite the best available shielding.
  2. Crosstalk between qubits. Also between signal lines. Absolute isolation is not assured.
  3. Spectator errors. Synonymous with crosstalk.
  4. Noise in control circuitry. Noise in the classical digital and analog circuitry which controls execution of gates on qubits.
  5. Imperfections in the manufacture of qubits.

As long as these errors continue to plague quantum computing, it will not be possible to achieve true, dramatic, and compelling quantum advantage, which is the only reason for pursuing quantum computing over classical computing.

Until these errors can be adequately addressed, quantum computing will remain a mere laboratory curiosity, suitable only for experimentation, demonstrations, and use cases too small to come even close to production-scale applications capable of solving practical real-world problems in a dramatic and compelling manner which will deliver substantial real-world value.

Advances in hardware design and engineering will incrementally reduce errors, but there is no credible prospect for the complete and total elimination of all errors.

Circuit repetitions and other manual forms of error mitigation are viable stopgap measures for quantum circuits of relatively modest size, but are not viable for large circuits of significant complexity.

The whole point of the “IS” in NISQ — intermediate scale — is to indicate that large and complex circuits are beyond the scope of NISQ devices.

Exactly where the threshold lies for intermediate-scale devices is unclear and will vary based on the needs of the application, the quality of the qubit hardware, and the complexity of the quantum circuit. Regardless, there will ultimately be a threshold separating algorithms which can be computed successfully on NISQ devices from those which cannot.

What’s the solution? Logical qubits, which combine near-perfect qubits with a significant degree of clever quantum error correction logic which can be executed at the firmware level in an automatic and transparent manner so that error correction is completely hidden from algorithm designers and application developers.

In practice, both higher-quality qubits and quantum error correction are needed. Higher-quality qubits reduce the burden on quantum error correction, and then quantum error correction fills in the quality gaps. Heavy quantum error correction may be needed in the medium term, but as qubit quality improves, lighter quantum error correction will be sufficient.

NISQ has served us well, but much more is needed

NISQ devices have served us well to experiment with quantum computing on a small scale and demonstrate its essential capabilities, but much higher-quality qubits are needed to make the leap to production-scale quantum applications which actually do deliver substantial real-world value far beyond that which is achievable using classical computing, eventually getting us to true, dramatic, and compelling quantum advantage over even the best classical computing.

Quantum error correction and fault-tolerant logical qubits are not yet here or anytime soon, but it’s worth getting the ball rolling and contemplating the potential, implications, consequences, and limitations of fault-tolerant quantum computing.

The basic story for fault-tolerant quantum computing

There is a lot of excitement over the potential for quantum computing, and a lot of activity with algorithms, even on real hardware, but true quantum advantage is nowhere to be found, and unlikely to ever be found on NISQ devices.

More qubits are needed, but higher-quality qubits are needed as well.

Ideal, perfect physical qubits are unlikely to ever be achieved.

Incremental advances in the quality of physical qubits have been the norm and are likely to continue indefinitely.

The question or opportunity is whether clever error correction techniques can be used in conjunction with better physical qubits to effectively achieve perfect or at least near-perfect logical qubits.

The basic idea being that if you aggregate enough reasonably high-quality physical qubits with the right algorithmic techniques, the resulting logical qubits will be of sufficient quality that average algorithm designers and application developers can code and utilize quantum algorithms using logical qubits as if they were using perfect qubits.

Further, without near-perfect qubits, algorithms won’t be able to exploit a sufficient number of qubits and a sufficient depth of quantum circuits to actually achieve true quantum advantage over classical computing.

Putting the magnitude of the problem in perspective

Framing the magnitude of the problem, contrasting the limited potential of NISQ computing with the potential of logical qubits, I would put it this way: people are struggling to build the quantum equivalent of mud huts when they want and need to be building the quantum equivalent of skyscrapers. They’re struggling to fashion primitive dugout canoes when they need advanced nuclear-powered aircraft carriers. Seriously, that succinctly expresses the magnitude of the problem.

A dramatic level of effort, including much research and much engineering, is required, not a mere modest or meager level of effort.

My motivation

That’s the basic story for quantum error correction and fault-tolerant quantum computing. My problem is that there seem to be lots of open issues and questions — at least in my own mind.

My focus in this paper is to highlight the uncertainties, open issues, and questions as I see them.

I’m still not an expert in quantum error correction, fault-tolerant quantum computing, or even quantum computing overall, and the more I read in this area, the more questions and issues I accumulate, outpacing the answers I get to my previous questions and issues.

I’m still not an expert on quantum error correction theory

Not only am I still not an expert on the theory of quantum error correction, I may never be, and I’m okay with that. That said, I am anxious to learn more about the theory, to understand its limits and issues.

My intentions

My interest is mostly at the application level, but I am also concerned about how or whether quantum error correction really works and any possible negative application-level consequences.

I seek to highlight:

  1. What the technology can do.
  2. What the technology can’t do or might not do.
  3. What limitations the technology might have.
  4. Where the technology needs more work.
  5. The potential timing of when the technology might become available.
  6. What actions algorithm designers and application developers might consider taking to exploit this new technology.
  7. Any questions I have.
  8. Any issues that I have identified.

Apology — we’re just not there yet, or even close

In truth, at present, my interest in quantum error correction and logical qubits is much more aspirational than imminent.

I wish I could give definitive details on the technology, but we’re just not there yet.

Major players may be getting tantalizingly close, but still not close enough. Hardware still needs at least a few more iterations. Okay, more than a few iterations are needed.

Some of the major players pursuing quantum error correction and/or near-perfect qubits:

  1. IBM
  2. Google
  3. IonQ
  4. Honeywell
  5. Rigetti

But are they actually getting close, or is that really just an illusion and wishful thinking? Hard to say in any definitive manner!

But none of that deters my interest in the potential and prospects for the technology.

In a nutshell

The key points of this informal paper:

  1. NISQ is better than nothing, but…
  2. NISQ is not enough.
  3. Noisy qubit algorithms won’t be scalable to significant circuit depth.
  4. Perfect qubits would be best, but…
  5. If we can’t have perfect qubits, logical qubits would be good enough.
  6. Quantum error correction is the critical wave of the future.
  7. Quantum error correction is needed to unlock and realize the unfulfilled promise and full potential of quantum computing.
  8. But no time soon — it could be 2–7 years before we see even a modest number of logical qubits.
  9. Focus on logical qubits where algorithms and applications see only logical qubits — error correction is automatic, implicit, and hidden, rather than algorithm or application-driven error mitigation which is manual, explicit, and visible.
  10. The FORTRAN moment for quantum computing may not be possible without quantum error correction and logical qubits.
  11. But near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications, at least for some of the more elite users, even though not for most average users.
  12. Quantum advantage may not be possible without quantum error correction.
  13. The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there.
  14. Achieving near-perfect qubits would obviate some not insignificant fraction of the need for full quantum error correction. More-perfect qubits are a win-win — better for NISQ and result in more efficient quantum error correction.
  15. It’s a real race — quantum error correction vs. near-perfect qubits — the outcome is unclear.
  16. Do we really need quantum error correction if we can achieve near-perfect qubits?
  17. It’s not clear if quantum error correction means logical qubits are absolutely perfect and without any errors, or just a lot better even if not perfect. Maybe there are levels of quantum error correction. Or stages of evolution towards full, perfect quantum error correction.
  18. Quantum error correction must be automatic and transparent — anything less is really just error mitigation, not true error correction, and won’t fully enable widespread development of production-scale applications achieving true quantum advantage.
  19. It may be five years or even seven years before we see widespread availability of quantum error correction and logical qubits. We might see partial implementations in 2–3 years.
  20. It’s not yet clear what number of physical qubits will be needed for each logical qubit. Maybe 7, 9, 13, 25, 57, 65, hundreds, thousands, or even millions.
  21. There may be degrees of fault tolerance or levels of logical qubit quality rather than one size fits all.
  22. Need to focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available, to achieve dramatic quantum advantage.
  23. Definition of logical qubit.
  24. Plenty of open questions and issues.
  25. Still a very active area of research.
  26. Still a laboratory curiosity — or not even yet a laboratory curiosity as many proposals are still only on paper, nowhere near close to being ready for production-scale use for practical real-world applications.
  27. Great progress is indeed being made — IBM and Google. Lots of research papers.
  28. Quantum error correction does not eliminate the probabilistic nature of quantum computing — shot count (circuit repetitions) is still needed to collect enough probabilistic results to provide an accurate probability distribution.
  29. Even with quantum error correction, we still need to see the development of a rich portfolio of algorithms which exploit 40 to 60 qubits of logical qubits to achieve true quantum advantage for practical real-world problems.
  30. It’s unclear whether we will need classical quantum simulators beyond 40–50 qubits once we have true quantum error correction and logical qubits. After all, we don’t simulate most of what we do on a regular basis on modern classical computers.
  31. Should quantum applications even need to be aware of qubits, even logical qubits, or should higher level abstractions (ala classical data types) be much more appropriate for an application-level quantum programming model?

Quantum advantage is the real goal and logical qubits are the only way to get there

Quantum advantage — a dramatic performance benefit over classical computing — is the only reason anyone should be interested in quantum computing. And logical qubits appear to be required to achieve widespread and dramatic quantum advantage. Noisy qubits just won’t cut it.

There may be some niche applications which can be accomplished without transparent and automatic quantum error correction, but most algorithms and applications will need logical qubits to achieve a dramatic and compelling level of quantum advantage.

Explicit, manual error mitigation and error correction may be feasible for some applications and some elite staff, but that’s too great a burden to place on average, non-elite algorithm designers and application developers. Implicit, automatic, and transparent quantum error correction is what most algorithm designers and application developers will need.

For more discussion of quantum advantage and quantum supremacy, see my informal paper:

Dramatic and compelling quantum advantage is needed

Quantum advantage doesn’t mean just a marginal advantage over classical computing, but a dramatic and compelling advantage.

A mere marginal advantage could still be considered quantum advantage, but I’d classify that as a mere basic quantum advantage. Nothing special to write home about.

To be worth the extra effort and cost, quantum advantage needs to really impress people — to be dramatic and compelling.

The real idea is that a true, dramatic, and compelling quantum advantage must be more than just better — it needs to unlock new business opportunities, opportunities that simply wouldn’t be accessible with only classical computing.

Requirements to enable dramatic quantum advantage

In order to achieve dramatic quantum advantage, a quantum computer and quantum algorithm must satisfy two requirements:

  1. More than 50 qubits in a single quantum computation. That is, a single Hadamard transform yielding 2⁵⁰ quantum states to be operated on in parallel.
  2. More than shallow circuit depth.

Anything less can probably be accomplished on a classical computer.

Achieving 50 logical qubits will be a major quantum hardware accomplishment. It may require over 3,000 physical qubits.
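As a rough back-of-the-envelope sketch of those two numbers, assuming the roughly 65 physical qubits per logical qubit discussed later in this paper (an assumption, not a settled figure):

```python
# Back-of-the-envelope arithmetic for the requirements above. The 65:1 overhead
# is an assumption (see the physical-qubits-per-logical-qubit discussion later
# in this paper); real overhead depends on the error correcting code used and
# the physical error rate.
LOGICAL_QUBITS = 50
PHYSICAL_PER_LOGICAL = 65   # assumed overhead

simultaneous_states = 2 ** LOGICAL_QUBITS
physical_qubits = LOGICAL_QUBITS * PHYSICAL_PER_LOGICAL

print(f"{LOGICAL_QUBITS} logical qubits -> {simultaneous_states:,} simultaneous quantum states")
print(f"at {PHYSICAL_PER_LOGICAL} physical per logical -> {physical_qubits:,} physical qubits")
```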

Quantum error correction is needed to realize the unfulfilled promise of quantum computing

I just don’t see any way out of it — we need more accurate qubits to achieve true and dramatic quantum advantage, and quantum error correction, giving us virtually perfect qubits, is the only way to go.

Technically, perfect physical qubits would do the trick, but nobody is suggesting that perfect physical qubits are achievable. So, absent perfect physical qubits, we need quantum error correction to implement virtually perfect qubits, logical qubits.

The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there

The concepts of quantum phase estimation (QPE) and quantum Fourier transform (QFT) have been around for over twenty years now, but they aren’t practical at scale for noisy NISQ devices.

To be clear, quantum phase estimation and quantum Fourier transform are definitely needed to achieve dramatic quantum advantage.

To be super-clear, quantum phase estimation and quantum Fourier transform are effectively useless, mere toys, without quantum error correction and logical qubits.

Qubit reliability

The reliability of a qubit is characterized as the percentage of error-free operations.

The degree of perfection of a qubit can be measured using so-called nines — 9’s, which is the reliability of a qubit measured as a percentage of error-free operation, such as:

  1. One nine. Such as 90%, 95%, 97%, or even 98%. One error in 10, 20, 33, or 50 operations.
  2. Two nines. Such as 99%, 99.5%, or even 99.8%. One error in 100 operations.
  3. Three nines. Such as 99.9%, 99.95%, or even 99.98%. One error in 1,000 operations.
  4. Four nines. Such as 99.99%, 99.995%, or even 99.998%. One error in 10,000 operations.
  5. Five nines. Such as 99.999%, 99.9995%, or even 99.9998%. One error in 100,000 operations.
  6. Six nines. Such as 99.9999%, 99.99995%, or even 99.99998%. One error in one million operations.
  7. Seven nines. Such as 99.99999%, 99.999995%, or even 99.999998%. One error in ten million operations.
  8. And so on. As many nines as you wish.

Whether more than seven nines can be achieved or how much further than seven nines can be achieved is unknown at this time.

Qubit fidelity

Qubit fidelity and qubit reliability are close synonyms.

Error rate

The error rate for qubit operations is expressed either as a fraction of 1.0 or as a percentage and represents the fraction of operations which fail.

This is the qubit reliability subtracted from 100%.

For example,

  1. 1.0 or 100% — all operations fail. 0% qubit reliability.
  2. 0.50 or 50% — half of operations fail. 50% qubit reliability.
  3. 0.10 or 10% — one in ten operations fail. 90% qubit reliability.
  4. 0.05 or 5% — one in twenty operations fail. 95% qubit reliability.
  5. 0.02 or 2% — one in fifty operations fail. 98% qubit reliability.
  6. 0.01 or 10⁻² or 1% — one in a hundred operations fail. 99% qubit reliability.
  7. 0.001 or 10⁻³ or 0.1% — one in a thousand operations fail. 99.9% (3 9’s) qubit reliability.
  8. 0.0001 or 10⁻⁴ or 0.01% — one in ten thousand operations fail. 99.99% (4 9’s) qubit reliability.
  9. 0.00001 or 10⁻⁵ or 0.001% — one in a hundred thousand operations fail. 99.999% (5 9’s) qubit reliability.
  10. 0.000001 or 10⁻⁶ or 0.0001% — one in a million operations fail. 99.9999% (6 9’s) qubit reliability.
  11. 0.0000001 or 10⁻⁷ or 0.00001% — one in ten million operations fail. 99.99999% (7 9’s) qubit reliability.
  12. 0.00000001 or 10⁻⁸ or 0.000001% — one in a hundred million operations fail. 99.999999% (8 9’s) qubit reliability.
  13. 0.000000001 or 10⁻⁹ or 0.0000001% — one in a billion operations fail. 99.9999999% (9 9’s) qubit reliability.
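A small illustrative sketch of the arithmetic tying an error rate to reliability and to a count of nines (the helper functions are mine, purely for illustration):

```python
import math

def reliability(error_rate):
    """Reliability as a percentage, e.g. 0.001 -> 99.9."""
    return 100.0 * (1.0 - error_rate)

def nines(error_rate):
    """Count of leading nines, e.g. 0.001 -> 3 (99.9%). The tiny epsilon guards
    against floating-point rounding just below a whole number."""
    return int(math.floor(-math.log10(error_rate) + 1e-12))

for rate in (0.05, 0.01, 0.001, 1e-6, 1e-9):
    print(f"error rate {rate:g} -> {reliability(rate):.7f}% reliability ({nines(rate)} 9's)")
```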

Types of errors

There are two broad areas of errors:

  1. Errors which occur within individual qubits, even when completely idle.
  2. Errors which occur when operations are performed on qubits. Qubits in action.

There are many types of errors, the most common being:

  1. Decoherence. Gradual decay of values over time. Even when idle.
  2. Gate errors. Each operation on a qubit introduces another potential degree of error.
  3. Measurement errors. Simply measuring a qubit has some chance of failure.

There are many sources of errors, the most common being:

  1. Environmental interference. Even despite the best available shielding.
  2. Crosstalk between devices. Absolute isolation is not assured.
  3. Noise in control circuitry. Noise in the classical digital and analog circuitry which controls execution of gates on qubits.
  4. Imperfections in the manufacture of qubits.

NISQ — Noisy Intermediate-Scale Quantum devices

The acronym NISQ was coined by Caltech Professor John Preskill in 2018. In his words:

  • This stands for Noisy Intermediate-Scale Quantum. Here “intermediate scale” refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. 50 qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers. “Noisy” emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term.
  • Quantum Computing in the NISQ era and beyond
  • John Preskill
  • https://arxiv.org/abs/1801.00862

Technically quantum computers with fewer than 50 qubits are not NISQ devices

As can be seen from Preskill’s own definition above, 50 qubits is the starting point for NISQ. The term intermediate-scale does not appear to be intended to refer to small-scale devices.

Technically, at the time of the writing of this paper (February 2021) there are only three general-purpose quantum computers which qualify as NISQ devices:

  • Google — 53 qubits.
  • IBM — 53 qubits.
  • IBM — 65 qubits.

I exclude D-Wave systems since their machines are not universal, gate-based, general-purpose quantum computers.

But people tend to refer to all current quantum computers as NISQ devices

Despite Preskill’s definition, it is common for people to refer to all current quantum computers as NISQ devices. Technically, it’s not proper, but that’s what people do.

I accept this state of affairs, even if it is not technically correct usage.

And I myself have tended to misuse the term as well — even in this paper. And I’ll probably continue to do so, unfortunately!

NSSQ is a better term for current small-scale quantum computers

If it were up to me, I would start using the term (contrived by me) NSSQ for Noisy Small-Scale Quantum device for quantum computers with fewer than 50 noisy qubits, and reserve NISQ for machines with 50 to a few hundred qubits.

I’d also use the term NLSQ for Noisy Large-Scale Quantum device for quantum computers with more than a few hundred noisy qubits.

Fault-tolerant quantum computing and fault-tolerant qubits

The overall goal is fault-tolerant quantum computing, which means that algorithms and applications will no longer have to be concerned in the slightest with errors in qubits and gates.

The essential requirement for fault-tolerant quantum computing is fault-tolerant qubits, which essentially means that qubits no longer suffer from decoherence, gate errors, and measurement errors, that qubits reliably and consistently maintain their state for an indefinite period of time, and that logic operations, including measurement, are reliably and consistently performed on qubits without errors.

Fault-free vs. fault-tolerant

Technically, if we had perfect, ideal qubits, they would not need to be fault-tolerant per se. Absolute perfection is an impossible goal, but ultimately quantum computing may be able to achieve what classical computing already has — that even though no hardware is absolutely 100% perfect, classical computing hardware has been able to achieve close enough to perfection that the vast majority of uses never even notice occasional errors.

Put simply, we will no longer need fault-tolerant qubits once ideal, perfect qubits or at least near-perfect qubits are readily available.

All the algorithms and applications really need is fault-free qubits, which practically means either near-perfect qubits or fault-tolerant qubits — if perfect qubits are available, use them, but if they are not available, resort to quantum error correction.

Once near-perfect qubits are readily available, most algorithms and applications won’t actually need true fault-tolerant qubits since there won’t be any faults to tolerate.

This will be similar to the situation with classical hardware, where classical ECC error correction code hardware is available for classical high-end servers and workstations, but isn’t needed for most commodity applications or on consumer devices.

This paper could have been rewritten to use fault-free quantum computing and fault-free qubits everywhere that fault-tolerant quantum computing and fault-tolerant qubits are referenced, but most people are much more familiar with fault tolerance than fault-free.

FTQC — fault-tolerant quantum computing

FTQC is the initialism (abbreviation) for either fault-tolerant quantum computing or fault-tolerant quantum computation, which are roughly synonyms.

Logical qubit

There are two distinct angles to approach logical qubits:

  1. What the algorithm designer or application developer sees in the programming model.
  2. What the hardware implements.

With NISQ, the only kind of qubits are physical qubits, which are implemented directly in the hardware.

You could call NISQ qubits logical qubits since they are indeed what algorithm designers and application developers see in the programming model, but since logical qubits and physical qubits would be absolutely identical, there would be no useful distinction.

But once we move beyond NISQ to the realm of fault-tolerant qubits using quantum error correction, then it makes sense to distinguish logical qubits from physical qubits — the latter not being tolerant of errors while the former would be tolerant of errors.

Technically, once we achieve near-perfect qubits there may no longer be a need for fault-tolerance per se (except for very high-end applications), so the logical qubit vs. physical qubit distinction would once again be meaningless. But as long as we are still in a world with both near-perfect and fault-tolerant qubits, it would be helpful to consistently tell algorithm designers and application developers that they are working with logical qubits, even if in some situations the logical qubits may indeed be physical qubits.

Back to a definition:

  • logical qubit. The fault-free qubits which are directly referenced by algorithm designers and application developers, usually to distinguish them from the physical qubits and quantum error correction algorithms which may be needed to implement each logical qubit in order to provide them with fault-tolerance.

Fault-tolerant logical qubit

Fault-tolerant logical qubit is a perfectly reasonable term, but technically it’s redundant since a logical qubit is by definition fault-tolerant. Still, it is a useful rhetorical device since it emphasizes the fault-tolerance and how it is achieved.

It’s a wordy term, so it shouldn’t be used too often, but it is helpful when the concept of a logical qubit is being introduced.

Qubit as abstract information vs. qubit as a physical device or physical representation of that abstract information

There is a significant difference in nomenclature between classical and quantum computing when it comes to bits and qubits:

  1. A classical bit is a unit of abstract information that can be represented on a storage medium or in an electronic device, such as a flip flop, memory cell, or logic gate. A 0 or 1, distinct from how a 0 or 1 is represented physically.
  2. A quantum bit or qubit is a hardware device which can store a representation of quantum state. Quantum state being two basis states, |0> and |1>, two probability amplitudes, and a phase difference. But that abstract notion of quantum state is distinct from the hardware device representing that abstract state information.

So, in classical computing a bit is abstract information while in quantum computing a qubit is a hardware device, analogous to a classical register, memory cell, or flip flop or other classical logic gate.

Quantum state, in the form of two basis states, |0> and |1>, two probability amplitudes, and a phase difference, is the more proper analog of the abstract information of a classical bit.

That said, when it comes to logical qubits, the emphasis shifts back to the abstract information of the quantum state of the logical qubit, as distinct from the physical state of each physical qubit.

Each physical qubit will have its own quantum state, or actually the shared quantum state of each subset of physical qubits which are entangled to represent the overall logical qubit. But ultimately the algorithm designer and application developer are concerned with the abstract quantum state of the logical qubit rather than the quantum state of any individual physical qubit or collection of physical qubits.
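As a concrete illustration of that abstract quantum state, two basis states with two probability amplitudes and a phase difference, here is a minimal numerical sketch (illustrative only, not tied to any particular hardware or SDK):

```python
import numpy as np

# The abstract quantum state of a single qubit: an amplitude for |0>, an
# amplitude for |1>, and a relative phase. The numbers are arbitrary examples.
alpha = 1 / np.sqrt(2)                      # amplitude of |0>
beta = np.exp(1j * np.pi / 4) / np.sqrt(2)  # amplitude of |1>, carrying a relative phase

state = np.array([alpha, beta])
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)  # amplitudes must be normalized

print("P(|0>) =", round(abs(alpha) ** 2, 3), " P(|1>) =", round(abs(beta) ** 2, 3))
print("relative phase (radians) =", round(float(np.angle(beta) - np.angle(alpha)), 3))
```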

No logical qubits on NISQ devices

Just to be clear, this paper does not advocate the use of the term logical qubit for NISQ devices — except to the degree that quantum error correction may be implemented on the NISQ device in a robust, automatic, and transparent manner.

Fault-free logical qubits

Technically, the term is redundant, but fault-free logical qubit does reinforce the notion that by definition a logical qubit must be fault-free.

The shorter term should ultimately be used once we actually get to the world in which logical qubits are common and the norm, but until then, it is helpful to reinforce the fault-free nature of the logical qubits of the future which do not exist today.

Near-perfect qubits

I’ll use the term near-perfect qubits (or nearly perfect qubits) to refer to the best quality of physical qubits which the engineers are able to produce, and which satisfy one of the following two criteria:

  1. Good enough to enable quantum error correction to produce logical qubits.
  2. Good enough for some interesting range of quantum applications so that they are able to produce acceptable results even without quantum error correction.

The real goal is to enable quantum error correction, but a side effect of achieving that goal will likely be physical qubits which are actually high-enough quality for some interesting applications.

Virtually perfect qubits

I’ll use the term virtually perfect qubits to refer to logical qubits on occasion, to emphasize that logical qubits are as close to perfect as is practically possible.

Technically, we may eventually reach a stage where physical qubits really are close to being perfect, not actually perfect, but close enough. We could refer to those nearly-perfect physical qubits as virtually perfect qubits as well, and maybe even use them directly as logical qubits without any need for full-blown quantum error correction, but we’re not anywhere near that stage and nobody is suggesting that we are likely to be there in the coming years or even the next decade. Still, it is a theoretical possibility, and a remote engineering possibility.

Absent such nearly-perfect physical qubits, I’ll consider virtually perfect qubits to simply be a synonym for logical qubits — imperfect physical qubits coupled with quantum error correction.

But, I’ll do so with the caveat that nearly-perfect qubits might also be considered virtually perfect qubits should they ever come into existence.

And all of this begs the question of how tiny or large an epsilon short of perfection is to be tolerated as being close enough to perfection. Who exactly would notice the difference? Some might, but for many or most algorithms and applications the difference might not even be noticeable — especially since most quantum algorithms and applications will produce probabilistic results anyway.

Fault tolerance vs. quantum error correction vs. logical qubits

Just to clarify and contrast these three distinct terms:

  1. Fault tolerance means that the computer is capable of successfully completing a computation even in the presence of errors. Typically via quantum error correction.
  2. Quantum error correction is a method for restoring qubits to their correct quantum state should they be corrupted somehow. A method for achieving fault tolerance.
  3. Logical qubits are qubits which are fault tolerant. Typically via quantum error correction.

For all three cases, the algorithm designer or application developer need not worry about errors.

To my mind, progress in perfecting qubits is the best way to go

Rather than insisting that full quantum error correction is the only acceptable end state, I would argue that pushing for progress in perfecting qubits is the better focus to maintain as the highest priority.

By the time quantum error correction is actually here, actually living up to expectations of perfection, and available in sufficient capacity for production-scale applications, commodity qubits may already be near-perfect, close enough to perfection that true quantum error correction doesn’t add that much additional value — and adds a lot of cost and reduces qubit capacity.

Besides, initial quantum error correction might have too many negative tradeoffs so that it either doesn’t fully live up to its promise, or negative side effects (performance, limited capacity) leave it less than fully desirable.

Every increment of reduction in error rates will unlock another wave of potential applications, even if full quantum error correction has not yet been achieved.

Besides, closer-to-perfect qubits mean a lower physical error rate relative to the error threshold, which simplifies quantum error correction — fewer physical qubits are needed for each logical qubit.

Ultimately we do want both — perfect logical qubits and near-perfect physical qubits, but it may be many years before pure quantum error correction is the hands-down preference for all applications.

Granted, the pace of improvements of raw physical qubits can be painfully slow, but the future of quantum error correction is predicated on near-perfect physical qubits anyway. And sometimes, occasionally, a quantum leap of progress occurs.

So, it’s a win-win to keep pushing towards more-perfect (near-perfect) qubits.

Classical ECC

The earliest classical computers used a simple form of error detection but not correction — a so-called parity bit. If a single bit of a word (12 to 36 bits) was flipped, from a 0 to a 1 or from a 1 to a 0, the hardware would immediately halt and report the error. That was great for detecting errors so that computational results would not be silently corrupted, but automatic correction was not attempted, so that the user (operator) would simply rerun the application (job.)

That worked fine for most applications where rerunning the application (job) was easy, but wasn’t acceptable for mission-critical real-time systems which could not simply be rerun at will.

So-called Hamming codes were developed to actually correct single-bit errors. By the late 1950s, ECC memory (error-correcting code) was developed for commercial computers, which both automatically corrected single-bit errors and detected and reported two-bit errors.

Early computer memories were very prone to errors, much as all quantum computers are today, but gradually the quality of memory production improved to the point where many applications no longer needed the automatic correction since errors occurred so infrequently that it was easier to simply rerun the application on those rare occasions. But high-end mission-critical systems, such as real-time financial transactions, didn’t have the luxury of rerunning the application, so ECC memory remains essential for those high-end mission-critical applications.

It is also worth noting that classical computers have used ECC memory primarily for main memory, not for internal CPU registers, which are much less prone to errors. Quantum computers, on the other hand, don’t have anything comparable to large memories; their dozens or hundreds of qubits are more comparable to the internal CPU registers of a classical computer. Still, the concept is comparable.
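To make the parity and Hamming-code ideas concrete, here is a small illustrative sketch (the function names and layout are mine; this is the textbook Hamming(7,4) code, not any particular machine’s implementation):

```python
def parity_bit(bits):
    """Even-parity bit: the XOR of all data bits (detects any single flipped bit)."""
    p = 0
    for b in bits:
        p ^= b
    return p

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] as 7 bits: p1, p2, d1, p3, d2, d3, d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over bit positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over bit positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over bit positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    """Recompute the parity checks; the syndrome points at the single flipped bit."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error, else the 1-based bit position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = [1, 0, 1, 1, 0]
assert parity_bit(word + [parity_bit(word)]) == 0   # stored parity makes total parity even

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                                   # flip one bit (a "memory error")
assert hamming74_correct(corrupted) == codeword     # single-bit error corrected
```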

Metaphor of ECC for classical computers

We also have the metaphor of ECC for classical computers — error-correcting code (ECC) memory is certainly preferred for very high-end classical computing systems, but raw, un-corrected memory in cheap commodity systems is plenty good enough for the vast majority of common applications.

I suspect that the same may become true for quantum computing, eventually.

But even if many or most quantum applications can compute successfully with near-perfect qubits, some applications may in fact need the higher certainty of quantum error correction.

Stabilized qubit

A stabilized qubit is really just an alternative term for logical qubit, but emphasizes its operational characteristic — to stabilize the quantum state of a qubit.

Stabilization implies that the qubit is inherently unstable, but can be stabilized using quantum error correction.

Not to be confused with a stabilizer qubit — see below.

Stable qubit

A qubit could be stable either due to quantum error correction — it has been stabilized — or because it is a near-perfect qubit and is inherently reasonably stable. So it is either a stabilized qubit or a near-perfect qubit.

Generally, this is just an alternative term for stabilized qubit, comparable to logical qubit, but emphasizing its operational characteristic — to stabilize the quantum state of a qubit.

Not to be confused with a stabilizer qubit, see below.

Data qubit

The term data qubit isn’t typically used or even relevant for NISQ devices — all qubits are essentially data qubits, so the term adds no real value for NISQ, but for quantum error correction and logical qubits, there will be some number of data qubits — physical qubits — for each logical qubit, each of which maintains a copy of the quantum state of the logical qubit.

The logical qubit may have other physical qubits as well, such as stabilizer qubits and flag qubits, designed to stabilize the quantum state of the data qubits, and to enable measurement of the quantum state of the logical qubit.

Algorithm designers and application developers will generally have no reason to be concerned with the data qubits which underlie the logical qubits used by their algorithms.

Stabilizer qubit

A stabilizer qubit is an extra qubit which is used in quantum error correction to assure that the quantum state of a logical qubit is maintained (stabilized.) There will generally be a stabilizer qubit for each of the data qubits which comprise a logical qubit.

At least in the surface code approach to quantum error correction, a logical qubit is a combination of a number of data qubits and stabilizer qubits which collectively assure that the logical qubit maintains its quantum state.

Not to be confused with a stabilized qubit or stable qubit — see above.
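To make the split between data qubits and stabilizer qubits a bit more concrete, here is a toy classical analogy (in Python) of the three-qubit bit-flip repetition code, assuming only bit-flip errors. In a real code the two parity checks would be measured via ancilla (stabilizer) qubits without learning the encoded value; here they are simply computed directly:

```python
# Toy analogy only: three "data" bits hold one logical bit; two parity checks
# (the analog of Z1*Z2 and Z2*Z3 stabilizer measurements) point to the flipped bit.

def encode(logical_bit):
    return [logical_bit] * 3          # the three data "qubits"

def syndrome(data):
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data):
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(data)]
    if flipped is not None:
        data[flipped] ^= 1
    return data

data = encode(1)
data[2] ^= 1                          # a single bit-flip error
assert syndrome(data) == (0, 1)       # the checks locate the error...
assert correct(data) == [1, 1, 1]     # ...without ever asking the logical value
```

Real quantum error correction must also handle phase flips and must extract these parities without collapsing superpositions, which is where the extra stabilizer and flag qubits come in.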

Coherence extension

Coherence extension is a term which I contrived here in this paper to refer to any efforts to extend the coherence of qubits. Each physical qubit has its own coherence — the time that can elapse before the qubit may lose some aspect of its quantum state. Coherence extension would be any method for increasing the coherence of the qubit, particularly logical qubits, so that the coherence time of a logical qubit would be substantially longer than the coherence of any one of the underlying physical qubits, if not infinite or indefinite (as long as the machine is powered up.)

Coherence extension could be expressed in one of four forms:

  1. A factor or ratio relative to the coherence time of a physical qubit. 1.25 for a 25% increase, 2.00 for a doubling or 100% increase, 10 for a tenfold increase, 100 for a hundredfold increase, 1,000,000 for a millionfold increase.
  2. A percentage of the coherence time of a physical qubit. 125% for a 25% increase, 200% for a doubling or 100% increase, 1,000% for a tenfold increase, 10,000% for a hundredfold increase, 100,000,000% for a millionfold increase.
  3. Indefinite. As long as the machine has power.
  4. Infinite or persistent. Persists even if the machine is powered off.
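As a tiny illustration of the first two forms, here is a hypothetical helper; the 100 microsecond and 10,000 microsecond figures are made-up example values, not measurements:

```python
def coherence_extension(physical_t2_us, logical_t2_us):
    """Return the extension as a factor and as a percentage of the physical time."""
    factor = logical_t2_us / physical_t2_us
    return factor, factor * 100.0

# e.g. a physical coherence time of 100 microseconds stretched to 10,000:
factor, percent = coherence_extension(100.0, 10_000.0)
print(factor, percent)   # 100.0 and 10000.0, a hundredfold increase, i.e. 10,000%
```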

Quantum memory

Quantum computers don’t have any memory per se that is analogous to the main memory or mass storage of a classical computer. All they have are qubits, which are analogous to registers or flip flops of classical hardware. In particular, they don’t have any persistence beyond the immediate calculation. Decoherence or very limited coherence time assures that current qubits cannot function as a form of long-term memory.

But once we achieve coherence extension using quantum error correction and logical qubits, the possibility of quantum memory becomes technically feasible.

A simple definition:

  • quantum memory. One or more qubits which are capable of maintaining their quantum state for an indefinite if not infinite period of time. Indefinite meaning as long as the machine has power, and infinite or persistent meaning even if the machine no longer has power. The former is analogous to the main memory of a classical computer, and the latter to the mass storage of a classical computer.

Currently, the concept of quantum memory has no relevance to quantum computing as currently envisioned — a coprocessor to perform a single computation. As such, the concept of quantum memory is technically beyond the scope of this paper. Nonetheless, it is an intriguing and promising possibility for future research, and a prospect for a future vision of quantum computing.

Technical prerequisites for quantum error correction and logical qubits

There are only four technical prerequisites to enable the implementation of quantum error correction and logical qubits:

  1. Complete theoretical basis. All of the details of the theory behind both quantum error correction and logical qubits need to be worked out in excruciating detail before the hardware can be designed and implemented.
  2. Lower-error qubits. Better, higher-quality hardware for qubits. The exact required error rate is unknown.
  3. Many physical qubits. The exact number of physical qubits per logical qubit is unknown.
  4. Fast qubit control to handle many physical qubits. Execution of a single quantum logic gate will affect not simply one or two qubits, but many dozens, hundreds, or even thousands of physical qubits.

Technical requirements for quantum error correction and logical qubits

In a paper posted in 2015 and published in 2017, IBM suggested three technical requirements for implementing quantum error correction and logical qubits, or more generally, requirements to build a usable quantum computer.

From the IBM paper, “… to build a quantum computer we require:

  • A physical qubit that is well isolated from the environment and is capable of being addressed and coupled to more than one extra qubit in a controllable manner,
  • A fault-tolerant architecture supporting reliable logical qubits, and
  • Universal gates, initialization, and measurement of logical qubits.”

Some quotes from IBM’s 2015/2017 paper on logical qubits

The IBM paper, from the preceding section:

  • Building logical qubits in a superconducting quantum computing system
  • Jay M. Gambetta, Jerry M. Chow, Matthias Steffen
  • https://arxiv.org/abs/1510.04375
  • https://www.nature.com/articles/s41534-016-0004-0

Some insightful quotes:

  1. Scalable fault-tolerant quantum computers — “Overall, the progress in this exciting field has been astounding, but we are at an important turning point where it will be critical to incorporate engineering solutions with quantum architectural considerations, laying the foundation towards scalable fault-tolerant quantum computers in the near future.”
  2. Quantum conflict — “balancing just enough control and coupling, while preserving quantum coherence.”
  3. Logical qubits — “The essential idea in quantum error correction (QEC) is to encode information in subsystems of a larger physical space that are immune to noise.”
  4. Fault-tolerant logical qubits — “QEC can be used to define fault-tolerant logical qubits, through employing a subtle redundancy in superpositions of entangled states and non-local measurements to extract entropy from the system without learning the state of the individual physical qubits.”
  5. Surface code — “There are many approaches to achieving quantum fault-tolerance, one of the most promising is the two-dimensional (2D) surface code.”
  6. Quantum memory, fault-tolerant quantum memory — “near term progress towards the monumental task of fully fault-tolerant universal quantum computing will hinge upon using QEC for demonstrating a quantum memory: a logical qubit that is sufficiently stable against local errors and ultimately allows essentially error-free storage.”
  7. Fault-tolerant error correction architecture — “The particular arrangement of physical qubits is governed by selection of a fault-tolerant error correction architecture.”

And some quotes from a 2020 IBM blog post:

  1. “Surface code and the Bacon-Shor code, both of which are famous and widely studied examples in the quantum error correction community.”
  2. “Co-design of quantum hardware and error-correcting codes”
  3. “The tension between ideal requirements and physical constraints couples the abstract and the practical.”
  4. “… as we move closer as a community to experimentally demonstrating fault-tolerant quantum error correction.”

Theory vs. design and architecture vs. implementation vs. idiosyncrasies for quantum error correction of each particular quantum computer

It might be nice if quantum error correction and logical qubits were designed and implemented identically on each quantum computer, but there are probably a host of theoretical and practical reasons for differences on different quantum computers, such as:

  1. Theory. There may be multiple theoretical approaches or methods.
  2. Design and architecture. There could be multiple approaches to designing implementations of a given theory, plus multiple theories.
  3. Implementation. Even given a particular design and architecture, there may be practical reasons, challenges, or opportunities for implementing a particular design and architecture differently on a particular machine, such as resource constraints, technology constraints, and tradeoffs for balancing competing constraints, or even application-specific requirements.
  4. Idiosyncrasies. Every machine has its own quirks which can interfere with or enhance sophisticated algorithms such as quantum error correction.

It should be a goal to clearly document the details of all of these aspects of quantum error correction and logical qubits for each particular model of quantum computer. Such documentation should have two distinct parts:

  1. Technical details of design and implementation.
  2. The subset of details which algorithm designers and application developers need to be aware of to fully exploit the capabilities of the machine. Anything which can affect the function, performance, or limitations of algorithms and applications, but excludes under-the-hood details which have no impact on the design of algorithms and applications.

I’ve changed my view of quantum error correction

When I first got more than toe-deep in quantum computing (in 2018), I thought it very strange that there was so much research interest in quantum error correction. I mean, sure, maybe it was needed in the near-term since hardware was very unreliable, but surely, as with classical computing hardware, it shouldn’t be needed as quantum computing hardware advanced and matured.

Oddly, the situation is 180 degrees reversed from my thinking — quantum error correction wasn't even feasible when hardware needed it the most, and only when hardware has matured much further will quantum error correction even become feasible. Strange, but true.

I came to two conclusions (back then):

  1. Quantum error correction proposals were way too complicated to be implemented any time soon, likely more than five years and maybe not even for ten years — or longer.
  2. Based on progress being reported by hardware vendors such as IBM and Rigetti, quantum hardware reliability was improving quite rapidly, so rapidly that quantum hardware was likely to be much closer to 100% reliability well before quantum error correction was even feasible. As a result, many or most quantum algorithm designers and application developers would likely be able to get by, if not thrive, with the improved hardware within a few years, without any real and pressing need for the fabled and promised but distant quantum error correction.

Now I have some revised conclusions:

  1. Quite a few algorithms and applications really will need the greater precision of absolute quantum error correction, even if many algorithms and applications do not. Experiments and prototypes may not need quantum error correction, but production-scale applications are likely to need full-blown quantum error correction.
  2. Algorithms and applications are likely to need quantum error correction to achieve dramatic quantum advantage.
  3. Dramatic quantum advantage is likely to require advanced algorithmic methods such as quantum phase estimation (QPE) and quantum Fourier transforms (QFT), which will in turn require the greater precision of quantum error correction.
  4. To be usable by average, non-elite staff, quantum error correction needs to be full, automatic, and transparent with true logical qubits, as opposed to manual and explicit error mitigation.
  5. To be clear, a quantum computer with 50 or more noisy qubits without quantum error correction is simply not going to be able to support applications capable of achieving dramatic quantum advantage.
  6. The intermediate hardware improvements (qubits with much lower error rates) I envisioned in my original second conclusion are actually also needed as the foundation of quantum error correction, so that achieving that better hardware would also enable quantum error correction to come to fruition forthwith.
  7. Hardware vendors, including IBM and Google, have already been designing aspects of their newer hardware to be much closer to what is required to support quantum error correction. Although quantum error correction is not imminent, it is a lot closer than I originally imagined. Maybe two to five years rather than five to ten years.

In short, there is a clear synergism between quantum error correction and improved NISQ — the more that NISQ is improved, the closer we come to quantum error correction.

NISQ qubits are still too scarce and too noisy for anything remotely resembling a practical application, but that is changing reasonably rapidly.

Within two to three years we should be in the realm of hundreds of (physical) qubits, and much more reliable qubits as well.

At that stage, preliminary implementations of quantum error correction, with at least 5, 8, or 12 logical qubits will be feasible.

It will take another two or three years to step up to 16 to 32 or maybe even 48 logical qubits.

That will take us to the five-year stage, where 64, 72, 80, or even 92 logical qubits become feasible.

Then, finally, true, dramatic quantum advantage — and even true quantum supremacy — will become feasible, and even common, for practical applications.

Granted, 64 to 92 qubits, even perfect qubits, may still not be sufficient for the data requirements for production-scale real-world practical applications, but at least we’ll be on a good path to that goal. Without those perfect or near-perfect qubits, that goal will remain forever out of reach.

My own preference is for near-perfect qubits over overly-complex quantum error correction

Despite my current belief as expressed in the preceding section that full, automatic, and transparent quantum error correction is the only way to go in the long run and is coming sooner than many expected, I still prefer near-perfect qubits, both for the near and medium term before full quantum error correction becomes widely available, and even in the long run for very high-end applications which can significantly benefit from working with raw physical qubits.

Most average applications will indeed require true logical qubits and full quantum error correction, but my own personal interest is geared more towards high-end applications.

Capacity will be a significant limiting factor in transitioning to logical qubits. Initial systems supporting logical qubits will have very limited capacities — lots of physical qubits, but supporting only a few logical qubits. It may take a significant number of iterations before systems have a sufficient logical qubit capacity to enable full quantum advantage. But until then, it may be possible for a significant range of applications to achieve quantum advantage using near-perfect physical qubits.

Manual error mitigation and correction just won’t cut it

There may be some odd niche applications which can be accomplished without transparent and automatic quantum error correction and logical qubits, but most algorithms and applications will need logical qubits to achieve a compelling level of quantum advantage.

Explicit, manual error mitigation and error correction may be feasible for some applications, but that's too great a burden to place on average algorithm designers and application developers. It's called cognitive overload — being asked to juggle more balls (qubits) than is within one's intellectual abilities. Implicit, automatic, and transparent quantum error correction is what most algorithm designers and application developers will need.

It’s hard enough to design and develop quantum algorithms and applications as it is without placing the extreme burden of explicitly and manually correcting for any and all errors which can occur in a quantum computer.

And explicit and manual error mitigation and correction is error-prone — it requires very careful attention to detail. Designers and developers are likely to be overwhelmed with cognitive overload.

Designers and developers need help, a lot of help, and the implicit, automatic, and transparent quantum error correction of logical qubits is just what the doctor ordered.

Quantum error mitigation vs. quantum error correction

Quantum error mitigation and quantum error correction are sometimes treated as exact synonyms. I consider this an improper equivalence and source of confusion, but that doesn’t seem to stop or bother some people. But I do recognize that it happens, we have to be aware of it, and we have to worry about compensating for the resulting confusion.

Generally, quantum error mitigation is explicit and manual. Algorithm designers and application developers must add code to accomplish quantum error mitigation.

Generally, quantum error correction (QEC) is implicit and automatic. And fully transparent. Algorithm designers and application developers don’t need to add any special code to accomplish quantum error correction using logical qubits. The hardware and firmware do all of the heavy lifting.

Manual, explicit “error correction” (error mitigation)

It’s beyond the scope of this paper, but the IBM Qiskit Textbook does discuss manual error mitigation:

Automatic quantum error correction

Just to reemphasize that manual, explicit, hand-coded error mitigation developed by the algorithm designer or application developer is a poor substitute for fully-automated quantum error correction.

I see the term automatic quantum error correction as redundant — if quantum error correction wasn’t automatic then it would be referred to as quantum error mitigation.

Quantum error correction is inherently automatic, implied, and hidden (transparent) while error mitigation is inherently manual, explicit, and visible

Just to reiterate one more time the essential distinctions between quantum error correction and quantum error mitigation. The distinctions:

  1. Automatic vs. manual.
  2. Implied vs. explicit.
  3. Hidden (transparent) vs. visible.

Noise-resilient and noise-aware techniques

The terms noise-resilience, noise-resilient, noise-awareness, and noise-aware are general synonyms. They are somewhat ambiguous, though not used very widely. They variously refer to:

  1. Hardware which handles noise well.
  2. Application techniques for coping with errors caused by noise. In other words, manual error mitigation.

The bottom line is to approximate error-free (fault-free) logical qubits or near-perfect physical qubits.

But this is not the same as full-blown automatic, transparent quantum error correction or logical qubits.

The concept implies explicit application knowledge (awareness) and actions. Application-explicit noise mitigation is really only for elite developers and unlikely to be usable by average application developers. There may be a 1 to 100 ratio of elite developers to average developers. Average developers really need logical qubits with their automatic and transparent quantum error correction.

Whether noise-resilient techniques can be used under the hood to approximate quantum error correction is beyond the scope of this informal paper.

Quantum error correction is still a very active research area — not even yet a laboratory curiosity

Most research in quantum error correction is still on paper. Recent machines from Google and IBM are claimed to implement aspects of designs for quantum error correction, but they fall short of actual support for automatic and transparent quantum error correction and true logical qubits.

As such, not only is quantum error correction (and hence logical qubits) not ready to move out of the lab and into the marketplace, it isn't even close to being an actual laboratory curiosity which can be demonstrated in a controlled laboratory environment.

Twin progressions — research on quantum error correction and improvements to physical qubits

Research into quantum error correction and logical qubits is really on two separate but parallel tracks:

  1. Discover and develop newer and better technologies for individual qubits, with the essential goals of dramatically reducing their error rate and increasing the physical qubit capacity, since even a single logical qubit requires a lot of physical qubits.
  2. Discover and develop newer and better schemes for encoding a logical qubit as a collection of physical qubits, as well as schemes for controlling and performing operations on one or more logical qubits as collections of physical qubits.

Both tracks can and should progress in parallel.

Note that:

  1. Better qubits are of value even without quantum error correction. A double benefit.
  2. Efficient and cost-effective quantum error correction requires better qubits. Qubits of lower quality (higher error rate) increase the cost and lower the capacity of logical qubits. Lower quality qubits mean more physical qubits are required per logical qubit, reducing the number of logical qubits which can be implemented for a given hardware implementation since physical qubits remain a relatively scarce commodity.

Focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available to achieve dramatic quantum advantage

I fear that far too much effort is being focused on trying to design algorithms that run on current NISQ hardware. My fear is that a lot of these algorithms won’t scale well to achieve quantum advantage. And a lot of hybrid algorithms simply won’t ever achieve quantum advantage since they aren’t focused on scaling to that level.

So, what I would like to see is that effort should be focused on scalable algorithms and that actual development and testing should occur on classical quantum simulators that more closely mimic logical qubits. There really isn’t any substantial benefit from running small algorithms on noisy qubits if the algorithms won’t scale to the regime of quantum advantage — 40 to 50 to 60 qubits.

Granted, classical quantum simulators won’t be able to simulate algorithms for 50 to 60 or more qubits, but that’s why there needs to be emphasis on scalability, so algorithms can be automatically tested — and mathematically proven — for scaling from 10 to 20 to 30 to 40 qubits, and then there can be a reasonable expectation that such algorithms will correctly scale to 50 to 60 and more qubits.

And, research and engineering should also be focused on pushing the limits of simulators as far as possible to 45, 50, and maybe even 55 qubits, so larger, near-production-scale algorithms can in fact be designed, developed, and tested well before actual quantum hardware becomes available.
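As a rough illustration of why classical quantum simulators top out in the mid-40s to around 50 qubits, here is a back-of-the-envelope sketch of ideal state vector memory, assuming 16 bytes per complex amplitude (an assumption; real simulators vary in representation and can distribute memory across nodes):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """An ideal state vector holds 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 45, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:,.0f} GiB")
# 30 qubits:         16 GiB
# 40 qubits:     16,384 GiB (16 TiB)
# 45 qubits:    524,288 GiB (512 TiB)
# 50 qubits: 16,777,216 GiB (16 PiB)
```

Pushing simulators to 45 or 50 qubits therefore means petabytes of memory or clever tricks that trade memory for time.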

Still need to advance algorithms to 30–40 qubits using ideal simulators

We have two scales of algorithms at present:

  1. Small-scale algorithms for 5 to 24 qubits. They can run on real machines, current hardware.
  2. Large-scale algorithms designed for hundreds to thousands of qubits. Academic papers. Purely theoretical. Nothing that runs on current hardware.

The critical need which is missing is algorithms in the 30 to 40-qubit range which could plausibly run on near-term hardware and on classical quantum simulators as well.

Such algorithms are closer to representing practical real-world applications.

Even if they can’t quite run on current hardware due to decoherence and errors, at least they can be simulated.

Much beyond 40 qubits becomes problematic for both current hardware and for simulation. Algorithms beyond roughly 50 qubits cannot be simulated at all.

So, 30 to 40 qubits represents a sweet spot for developing scalable algorithms. It’s enough to do real processing and to demonstrate how algorithms can be scaled. An algorithm could be developed using 16 to 20 qubits and then scaled to 30 and then 40 qubits — and still be testable using simulation, with the hope that if it really is scalable then testing at 40 qubits should “prove” that the algorithm could be scaled to 50, 60, 72, 80, 92, and 100 and more qubits.

But none of this is possible until researchers put some emphasis on scalable algorithms which can run on 30 to 40 qubits rather than the current focus on 10 to 24 qubits.

Quantum threshold theorem

Although the details are beyond the scope of this paper, the quantum threshold theorem says that the error rate for a single qubit or single gate must be below some specified threshold in order for a quantum error correction scheme to suppress logical errors to arbitrarily low levels by adding enough redundancy (more physical qubits per logical qubit).

That’s simply a formalized way of saying that physical qubits have to have a reasonably low error rate, such as being below 1% in order for logical qubits to be supported.

Whether 1% is even close to a viable and reasonable threshold is unknown at this time.
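To give a feel for why being well below the threshold matters, here is a minimal sketch using a commonly quoted surface code heuristic, where the logical error rate scales roughly as A * (p / p_th)^((d + 1) / 2) for code distance d. The threshold (1%), prefactor, and distance used here are illustrative assumptions, not measured values:

```python
def logical_error_rate(p, p_th=0.01, d=5, A=0.1):
    """Heuristic scaling of the logical error rate for a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

for p in (0.005, 0.001, 0.0001):
    print(f"physical error rate {p:g} -> logical error rate ~ {logical_error_rate(p):.2e}")
# 0.005  -> ~1.25e-02  (just below threshold: little benefit)
# 0.001  -> ~1.00e-04  (10x below threshold: real suppression)
# 0.0001 -> ~1.00e-07  (100x below threshold: dramatic suppression)
```

The point is simply that each factor of ten reduction in the physical error rate below the threshold buys several orders of magnitude in the logical error rate.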

NISQ vs. fault-tolerant and near-perfect, small-scale, and large-scale

NISQ quantum computers are by definition noisy and intermediate scale. That raises the question of what to call a quantum computer which is not noisy or not intermediate scale. I'll stay out of proper naming for now, and simply follow the lead of the naming of NISQ.

First, what are the alternatives to noisy?

  1. Noisy — N. All current and near-term quantum computers.
  2. Near-perfect — NP. Any current, near-term, and longer-term quantum computers with more than a couple of 9’s in their qubit reliability, like 99.9%, 99.99%, 99.999%, and 99.9999% — using only raw physical qubits, no error correction or logical qubits. Close enough to perfection that quite a few applications can get respectable results without the need for quantum error correction and logical qubits.
  3. Fault-tolerant — FT. Quantum error correction and logical qubits with 100% reliability of qubits.

Second, what are the alternatives to intermediate scale?

  1. Small scale — SS. Under 50 qubits.
  2. Intermediate scale — IS. 50 to a few hundred qubits.
  3. Large scale — LS. More than a few hundred qubits.

Written as a regular expression, the combinations are (N|NP|FT)[SIL]SQ.

Three times three is nine, so here are the nine combinations:

  1. NSSQ — Noisy Small-Scale Quantum devices. Most of today’s quantum computers. Under 50 or so qubits.
  2. NISQ — Noisy Intermediate-Scale Quantum devices. 50 to a few hundred or so noisy qubits.
  3. NLSQ — Noisy Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of noisy qubits.
  4. NPSSQ — Near-Perfect Small-Scale Quantum devices. Less than 50 or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  5. NPISQ — Near-Perfect Intermediate-Scale Quantum devices. 50 to a few hundred or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  6. NPLSQ — Near-Perfect Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  7. FTSSQ — Fault-Tolerant Small-Scale Quantum devices. Under 50 or so logical qubits. Perfect computation, but insufficient for quantum advantage.
  8. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices. Start of quantum advantage. Good place to start post-NISQ devices. 50 to a few hundred or so logical qubits.
  9. FTLSQ — Fault-Tolerant Large-Scale Quantum devices. Production-scale quantum advantage. More than a few hundred or so to thousands or even millions of logical qubits.
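Purely as an illustration, here is a trivial sketch that expands that naming pattern mechanically, just to show that the nine names fall out of two independent dimensions:

```python
from itertools import product

noise = {"N": "Noisy", "NP": "Near-Perfect", "FT": "Fault-Tolerant"}
scale = {"SS": "Small-Scale", "IS": "Intermediate-Scale", "LS": "Large-Scale"}

for (n, n_name), (s, s_name) in product(noise.items(), scale.items()):
    print(f"{n}{s}Q: {n_name} {s_name} Quantum devices")
# NSSQ, NISQ, NLSQ, NPSSQ, NPISQ, NPLSQ, FTSSQ, FTISQ, FTLSQ
```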

NSSQ — Noisy Small-Scale Quantum devices

Noisy Small-Scale Quantum device, abbreviated NSSQ, is a term I contrived to represent quantum computers with fewer than 50 or so qubits. That covers most of today’s quantum computers.

NISQ — Noisy Intermediate-Scale Quantum devices

Noisy Intermediate-Scale Quantum device, abbreviated NISQ, is an industry-standard term for a quantum computer with 50 to a few hundred or so noisy qubits. Despite its proper definition, it is commonly used to refer to all of today’s quantum computers (all with noisy qubits) regardless of the number of qubits.

NLSQ — Noisy Large-Scale Quantum devices

Noisy Large-Scale Quantum device, abbreviated NLSQ, is a term I contrived to represent quantum computers with more than a few hundred or so to thousands or even millions of noisy qubits.

NPSSQ — Near-Perfect Small-Scale Quantum devices

Near-Perfect Small-Scale Quantum device, abbreviated NPSSQ, is a term I contrived to represent quantum computers with less than 50 or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.

NPISQ — Near-Perfect Intermediate-Scale Quantum devices

Near-Perfect Intermediate-Scale Quantum device, abbreviated NPISQ, is a term I contrived to represent quantum computers with 50 to a few hundred or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.

NPLSQ — Near-Perfect Large-Scale Quantum devices

Near-Perfect Large-Scale Quantum device, abbreviated NPLSQ, is a term I contrived to represent quantum computers with more than a few hundred or so to thousands or even millions of near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.

FTSSQ — Fault-Tolerant Small-Scale Quantum devices

Fault-Tolerant Small-Scale Quantum device, abbreviated FTSSQ, is a term I contrived to represent quantum computers with fewer than 50 or so logical qubits with quantum error correction. Perfect computation, but insufficient capacity for quantum advantage.

FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices

Fault-Tolerant Intermediate-Scale Quantum device, abbreviated FTISQ, is a term I contrived to represent quantum computers with 50 to a few hundred or so logical qubits with quantum error correction. Start of quantum advantage. Good place to start post-NISQ devices.

FTLSQ — Fault-Tolerant Large-Scale Quantum devices

Fault-Tolerant Large-Scale Quantum device, abbreviated FTLSQ, is a term I contrived to represent quantum computers with more than a few hundred or so to thousands or even millions of logical qubits with quantum error correction. Production-scale quantum advantage.

What is post-NISQ?

There are two hurdles to clear to get beyond NISQ devices — to post-NISQ:

  1. Achieving fault tolerance, or at least near-perfect qubits.
  2. Getting beyond a few hundred fault-tolerant or near-perfect qubits.

Technically, that second criterion should be achieved to claim post-NISQ, but I'm willing to relax that part of the criteria — intermediate scale is sufficient provided that fault tolerance or near-perfect qubits are achieved.

So, I would say that four categories would qualify as post-NISQ devices:

  1. NPISQ — Near-Perfect Intermediate-Scale Quantum devices.
  2. NPLSQ — Near-Perfect Large-Scale Quantum devices.
  3. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices.
  4. FTLSQ — Fault-Tolerant Large-Scale Quantum devices.

Whether quantum advantage can be achieved with only near-perfect qubits is an interesting and open question.

Quantum advantage can only be achieved as a slam dunk with fault-tolerant logical qubits:

  1. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices — where quantum advantage starts.
  2. FTLSQ — Fault-Tolerant Large-Scale Quantum devices — where production-scale quantum advantage flourishes.

It’s an interesting question whether fault-tolerant qubits (logical qubits) or near-perfect qubits alone of any capacity, including FTSSQ — Fault-Tolerant Small-Scale Quantum devices and NPSSQ — Near-Perfect Small-Scale Quantum devices, should mark the beginning of post-NISQ. Ideally, probably not, especially since the capacity would be insufficient to enable quantum advantage. And quantum advantage — dramatic quantum advantage — is the real goal.

When will post-NISQ begin?

When will we get beyond NISQ, to post-NISQ? I have no idea.

Even if smaller configurations of logical qubits (8 to 48) are available within a few to five years, intermediate scale, even at a mere 50 logical qubits, could take somewhat longer.

And if you want to use production-scale as the hurdle, five to seven years might be a better bet.

Post-noisy is a more accurate term than post-NISQ

As we have seen in the discussion in the prior two sections, post-NISQ is still a somewhat vague and ambiguous term. For most uses, the term post-noisy would probably be more accurate than post-NISQ since it explicitly refers to simply getting past noisy qubits, to fault-tolerant and near-perfect qubits.

So post-noisy clearly refers to:

  1. NPSSQ — Near-Perfect Small-Scale Quantum devices. Less than 50 or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  2. NPISQ — Near-Perfect Intermediate-Scale Quantum devices. 50 to a few hundred or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  3. NPLSQ — Near-Perfect Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  4. FTSSQ — Fault-Tolerant Small-Scale Quantum devices. Under 50 or so logical qubits. Perfect computation, but insufficient for quantum advantage.
  5. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices. Start of quantum advantage. Good place to start post-NISQ devices. 50 to a few hundred or so logical qubits.
  6. FTLSQ — Fault-Tolerant Large-Scale Quantum devices. Production-scale quantum advantage. More than a few hundred or so to thousands or even millions of logical qubits.

But for most uses post-NISQ will refer to post-noisy

Generally I prefer to use the most accurate terminology, but sometimes that can get tedious and confusing. So, for now, I’ll personally accept the usage of post-NISQ as being equivalent to post-noisy.

As always, context will be the deciding factor as to interpretation. The three main contextual meanings being:

  1. Getting past noisy qubits. To either near-perfect or fault tolerant qubits.
  2. True fault tolerance. With quantum error correction and logical qubits.
  3. Near-perfect is good enough. True fault tolerance is not needed.

Vendors need to publish roadmaps for quantum error correction

At present, no vendor of quantum computers has published a roadmap or timeline for how they expect to progress to achieving full, automatic, and transparent production-scale quantum error correction and logical qubits.

Seeing the timeline laid out, with clearly delineated stages, would help to focus attention on when and where greater research spending is needed.

Such a timeline would also focus organizations interested in using quantum computers in their own planning for when and where to invest in ramping up their own efforts to plan for, develop, test, and deploy quantum computing solutions.

Vendors need to publish roadmaps for near-perfect qubits

Near-perfect qubits are both necessary for production-scale quantum error correction and useful in their own right. But at present, no vendor of quantum computers has published a roadmap or timeline for how they expect to progress to achieving near-perfect qubits.

Seeing the timeline laid out, with clearly delineated stages, would help to focus attention on when and where greater research spending is needed.

Such a timeline would also focus organizations interested in using quantum computers in their own planning for when and where to invest in ramping up their own efforts to plan for, develop, test, and deploy quantum computing solutions.

Likely that 32-qubit machines can achieve near-perfect qubits for relatively short algorithms within a couple of years

Although raw qubit counts are rising, the critical issue is the error rate. But given the interest in quantum error correction, I predict that increasing attention will be focused on driving down raw physical error rates to enable efficient quantum error correction. This will have the desirable side effect that more-capable algorithms can be developed.

I predict that within a couple of years, two to three or maybe four, we will see machines with at least 32 qubits which have fairly low error rates — what I call near-perfect qubits. Still not logical qubits, but maybe close enough.

A machine with 32 near-perfect qubits will support much-more-capable algorithms than current so-called NISQ machines.

And 32 is just the start. Machines with 48 to 64 to 96 to 128 near-perfect qubits will quickly become widely available. Within another year or two after that (three to five years.)

Unlikely to achieve 32 logical qubits for at least five years

Once we have machines with 32 to 128 near-perfect physical qubits, then the big question becomes when we will see fault-tolerant quantum computers with 32 to 128 logical qubits.

Using a physical-to-logical qubit ratio of 65 (from an IBM paper), that would imply:

  1. 32 logical qubits require 32 * 65 = 2,080 physical qubits.
  2. 48 logical qubits require 48 * 65 = 3,120 physical qubits.
  3. 64 logical qubits require 64 * 65 = 4,160 physical qubits.
  4. 96 logical qubits require 96 * 65 = 6,240 physical qubits.
  5. 128 logical qubits require 128 * 65 = 8,320 physical qubits.
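Here is the trivial arithmetic behind that list, parameterized on the physical-to-logical ratio since that ratio is the big unknown (65 here, from the IBM paper; other estimates discussed later range from about 13 to many thousands):

```python
def physical_qubits_needed(logical_qubits, physical_per_logical=65):
    return logical_qubits * physical_per_logical

for n in (32, 48, 64, 96, 128):
    print(f"{n:4d} logical qubits -> {physical_qubits_needed(n):6,d} physical qubits")
```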

That’s a lot of physical qubits.

When could we see that many physical qubits? Five years would be a very optimistic aggressive forecast. Seven years might be more plausible.

In the meantime, many applications can probably make do with 32 to 128 near-perfect qubits.

But to be clear, 32 or even 48 qubits will not be sufficient to achieve true and dramatic quantum advantage. Even a 64-qubit machine will not guarantee quantum advantage — it depends on how particular algorithms utilize those qubits.

Levels of qubit quality

These are simply some informal categories so that we have some common terminology or language to talk about rough scenarios for error rates:

  1. Extremely noisy. Not usable. But possible during the earliest stages of developing a new qubit technology. May be partially usable for testing and development, but not generally usable.
  2. Very noisy. Not very reliable. Need significant shot count to develop a statistical average for results. Barely usable.
  3. Moderately noisy. Okay for experimentation and okay if rerunning multiple times is needed. Not preferred, but workable.
  4. Modestly noisy. Frequently computes correctly. Occasionally needs to be rerun. Reasonably usable for NISQ prototyping, but not for production-scale real-world applications.
  5. Slightly noisy. Usually gives correct results. Very occasionally needs to be rerun.
  6. Near-perfect qubit. Just short of perfect qubit. Rare failures, but enough to spoil perfection.
  7. Perfect qubit. No detectable errors, or so infrequent to be unnoticeable by the vast majority of applications. Comparable to current classical computing, including ECC memory.
  8. Corrected logical qubit. Correct result at the logical level even though physical qubits are somewhat noisy. What level of quality is needed for physical qubits? Slightly or only modestly noisy is best. May or may not be possible with moderately noisy physical qubits.

Possible need for co-design to achieve optimal hardware design for quantum error correction

It may well be that quantum error correction will never be effective and efficient if it is an afterthought of designing physical qubits. Co-design may be necessary. Co-design means that the needs of the application are used to bias the hardware design to be more efficient for the application.

In fact, co-design is mentioned by IBM in their 2020 blog post (quoted earlier):

  • Co-design of quantum hardware and error-correcting codes

It may be that a specialized hardware design is needed that might not even be usable at the raw physical qubit level since it is designed to be optimal for error-corrected logical qubits.

What might a co-design for quantum error correction and logical qubits look like? We could speculate, but the real point is that research is needed. Lots of research. Lots of money. A major commitment.

Maybe early versions of quantum error correction and logical qubits will continue to rely on NISQ architectures, and only after getting actual experience with such logical qubits, including benchmarking, can a research plan be designed to pursue co-design and/or a specialized design of quantum hardware built explicitly for logical qubits.

Top 10 questions

These are the essential, primary questions. Cutting directly to the chase, here are the Top 10 questions confronting fault-tolerant quantum computing, quantum error correction, and logical qubits. See discussion of these questions after the full lists of the questions.

Warning: Alas, they don’t have great definitive answers at this juncture — maybe in a couple of years.

  1. When will quantum error correction and logical qubits be practical?
  2. How much will hardware have to advance before quantum error correction becomes practical?
  3. Will quantum error correction be truly 100% transparent to quantum algorithms and applications?
  4. How many physical qubits will be needed for each logical qubit?
  5. Does quantum error correction guarantee absolute 100% perfect qubits?
  6. Does quantum error correction guarantee infinite coherence?
  7. Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?
  8. Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?
  9. What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?
  10. How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?

Of course, that’s just the very tip of the iceberg. There are many more questions…

Additional important questions

Beyond the Top 10 questions listed in the preceding section, there are many more questions that I have. Each question will be discussed in a separate section.

  1. Do we really need quantum error correction if we can achieve near-perfect qubits?
  2. Will qubits eventually become good enough that they don’t necessarily need quantum error correction?
  3. Which will win the race, quantum error correction or near-perfect qubits?
  4. When will logical qubits be ready to move beyond the laboratory curiosity stage of development?
  5. How close to perfect is a near-perfect qubit?
  6. How close to perfect must near-perfect qubits be to enable logical qubits?
  7. How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?
  8. When can we expect near-perfect qubits?
  9. Are perfect qubits possible?
  10. How close to perfect will logical qubits really be?
  11. But doesn’t IonQ claim to have perfect qubits?
  12. When can we expect logical qubits of various capacities?
  13. When can we expect even a single logical qubit?
  14. When can we expect 32 logical qubits?
  15. What is quantum error correction?
  16. What is a quantum error correcting code?
  17. Is NISQ a distraction and causing more harm than good?
  18. NISQ as a stepping stone to quantum error correction and logical qubits
  19. What is Rigetti doing about quantum error correction?
  20. Is it likely that large-scale logical qubits can be implemented using current technology?
  21. Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?
  22. What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?
  23. What do the wave functions of logical qubits look like?
  24. Are all of the physical qubits of a single logical qubit entangled together?
  25. How many wave functions are there for a single logical qubit?
  26. For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?
  27. What is the performance cost of quantum error correction?
  28. What is the performance of logical qubit gates and measurements relative to NISQ?
  29. How is a logical qubit initialized, to 0?
  30. What happens to connectivity under quantum error correction?
  31. How useful are logical qubits if still only weak connectivity?
  32. Are SWAP networks still needed under quantum error correction?
  33. How does a SWAP network work under quantum error correction?
  34. How efficient are SWAP networks for logical qubits?
  35. What are the technical risks for achieving logical qubits?
  36. How scalable is your quantum algorithm?
  37. How perfectly can a logical qubit match the probability amplitudes for a physical qubit?
  38. Can probability amplitude probabilities of logical qubits ever be exactly 0.0 or 1.0 or is there some tiny, Planck-level epsilon?
  39. What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?
  40. Does the stability of a logical qubit imply greater precision or granularity of quantum state?
  41. Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?
  42. Do trapped-ion qubits need quantum error correction?
  43. Can simulation of even an ideal quantum computer be the same as an absolutely perfect classical quantum simulator since there may be some residual epsilon uncertainty down at the Planck level for even a perfect qubit?
  44. How small must single-qubit error (physical or logical) be before nobody will notice?
  45. What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?
  46. What is the impact of quantum error correction on granularity of phase and probability amplitude?
  47. What are the effects of quantum error correction on phase precision?
  48. What are the effects of quantum error correction on probability amplitude precision?
  49. What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?
  50. How are multi-qubit product states realized under quantum error correction?
  51. What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?
  52. At which stage(s) of the IBM quantum roadmap will logical qubits be operational?
  53. Does the Bloch sphere have any meaning or utility under quantum error correction?
  54. Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?
  55. Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?
  56. Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?
  57. Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?
  58. How are gates executed for all data qubits of a single logical qubit?
  59. How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?
  60. Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?
  61. How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?
  62. How exactly does quantum error correction actually address measurement errors?
  63. Does quantum error correction really protect against gate errors or even measurement errors?
  64. Will quantum error correction approaches vary based on the physical qubit technology?
  65. Is the quantum volume metric still valid for quantum error correction and logical qubits?
  66. Is the quantum volume metric relevant to perfect logical qubits?
  67. What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?
  68. Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?
  69. Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?
  70. How is quantum error correction not a violation of the no-cloning theorem?
  71. Is quantum error correction too much like magic?
  72. Who’s closest to real quantum error correction?
  73. Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?
  74. Are logical qubits guaranteed to have infinite coherence?
  75. What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?
  76. Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?
  77. Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?
  78. Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?
  79. Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?
  80. Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?
  81. What are the potential tradeoffs for quantum error correction and logical qubits?
  82. How severely does quantum error correction impact gate execution performance?
  83. How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?
  84. Are there other approaches to logical qubits than strict quantum error correction?
  85. How many logical qubits are needed to achieve quantum advantage for practical applications?
  86. Is it any accident that IBM’s latest machine has 65 qubits?

Kinds of questions and issues beyond the scope or depth of this paper

These are all great questions and very relevant issues, but beyond the scope of this informal paper. Some may actually have short discussion sections later in this paper.

  1. Specific quantum error correction proposals.
  2. What is a surface code?
  3. Background on surface codes
  4. What is the Steane code?
  5. How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?
  6. What is magic state distillation?
  7. What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?
  8. What are typical values of d for a surface code?
  9. Is d = 5 really optimal for surface codes?
  10. What is a stabilizer qubit?
  11. What is a data qubit?
  12. What is a flag qubit?
  13. What is entanglement distillation?
  14. What is virtual distillation?
  15. What is topological quantum error correction?
  16. What is the Shor code?
  17. What is the Bacon-Shor code?
  18. What is the Reed-Muller code?
  19. What is quantum error mitigation?
  20. What is a gate error?
  21. What is a bit flip error?
  22. What is a phase flip error?
  23. What is a measurement error?

Top question #1: When will quantum error correction and logical qubits be practical?

In truth, I can’t even realistically speculate since there are so many open questions.

But if you press me, I’d suggest maybe five years or even seven years out, with partial implementations in 2–3 years, such as a relatively modest number of qubits — 5, 8, 12, 16, 20, 24, 28, 32.

Full intermediate scale (ala NISQ with 50 to a few hundred logical qubits) will take another two to four years after smaller configurations are available. That’s really just a guess on my part.

And if you really want to get to post-NISQ with more than a few hundred logical qubits, that could be more than seven years.

But systems with a smaller number of logical qubits could be available much sooner.

Top question #2: How much will hardware have to advance before quantum error correction becomes practical?

One of the great unanswered questions. The superficial answer: A lot.

What level of qubit quality is needed to guarantee the full effectiveness of quantum error correction?

Ultimately, error rates will have to shrink by a factor somewhere in the range of ten to a hundred to a thousand to a million times or even more.

With only ten times fewer errors, a very large number of physical qubits would be needed for each logical qubit. Exact number unknown.

With a hundred times fewer errors, a large number of physical qubits would still be needed for each logical qubit. Again, exact number unknown.

Maybe with a thousand times fewer errors a more moderate and almost practical number of physical qubits would be needed for each logical qubit.

One would hope that a million times fewer errors would require only a modest number of physical qubits for each logical qubit.

And ultimately it may turn out that entirely new qubit technologies might be needed to fully and properly support quantum error correction and logical qubits.

Top question #3: Will quantum error correction be truly 100% transparent to quantum algorithms and applications?

That’s the expectation.

Manual error mitigation requires non-transparent effort by the algorithm designer or application developer, although there might be compilers or libraries which make the job easier.

True, transparent, and automatic quantum error correction will indeed be… transparent to algorithm designers and application developers alike.

Whether and when true, transparent, and automatic — and efficient — quantum error correction becomes widely available is a separate question.

Top question #4: How many physical qubits will be needed for each logical qubit?

The simple answer is that we just don’t know at this time how many physical qubits will be needed to implement logical qubits.

I’ve seen numbers all over the map, from under a dozen, to dozens, to hundreds, to thousands, to tens of thousands, to a million, and even to millions of physical qubits for a single logical qubit.

The other simple answer is that it all depends on many factors, such as the degree of perfection desired for logical qubits, the error rate for physical qubits, and how many logical qubits you need for a particular application.

The next section will provide some citations.

But for now, 65 physical qubits per logical qubit seems to be as good an estimate as any.

Citations for various numbers of physical qubits per logical qubit

The ranges that I have seen for the number of physical qubits per logical qubit include:

  1. Less than a dozen.
  2. A dozen or so.
  3. Dozens.
  4. Hundreds.
  5. Several thousand.
  6. Tens of thousands.
  7. A million.
  8. Millions.

Millions:

  • It is unclear if anyone is seriously suggesting millions of physical qubits per single logical qubit. I vaguely recall references that seemed to suggest millions of physical qubits per logical qubit, but upon reflection, I strongly suspect that most of them meant millions of physical qubits for the entire algorithm, which would more likely mean simply thousands of physical qubits per logical qubit.
  • A fully fault-tolerant quantum computer based on the surface code assuming realistic error rates is predicted to require millions of physical qubits.
    https://www.ncbi.nlm.nih.gov/books/NBK538709/
  • How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits
    https://arxiv.org/abs/1905.09749
    20 million physical qubits = 6,176 logical qubits with 3,238 physical qubits per logical qubit.

A million:

Thousands:

  • 3,238
    How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits
    https://arxiv.org/abs/1905.09749
    Total 20 million physical qubits = 6,176 logical qubits with 3,238 physical qubits per logical qubit.
    Logical qubits = 2048 * 3 + 0.002 * 2048 * ln(2048) = 6175.23043937 ≈ 6,176 logical qubits.
    Physical qubits per logical qubit = 20,000,000 / 6,176 = 3238.34196891 ≈ 3,238 physical qubits per logical qubit. (See the quick arithmetic check just below these citations.)
  • 20,000
    Researchers think they can sidestep that problem if they can initialize all the qubits in their computer in particular “magic states” that, more or less, do half the work of the problematic gates. Unfortunately, still more qubits may be needed to produce those magic states. “If you want to perform something like Shor’s algorithm, probably 90% of the qubits would have to be dedicated to preparing these magic states,” Roffe says. So a full-fledged quantum computer, with 1000 logical qubits, might end up containing many millions of physical qubits.
    https://www.sciencemag.org/news/2020/07/biggest-flipping-challenge-quantum-computing
    Okay, that’s actually thousands rather than millions. Say, 20 million physical qubits and 1,000 logical qubits would require 20,000 physical qubits per logical qubit.
  • 14,500
    The number of physical qubits needed to define a logical qubit is strongly dependent on the error rate in the physical qubits. Error rates just below the threshold require larger numbers of physical qubits per logical qubit, while error rates substantially smaller than the threshold allow smaller numbers of physical qubits. Here we assume an error rate approximately one-tenth the threshold rate, which implies that we need about 14,500 physical qubits per logical qubit to give a sufficiently low logical error rate to successfully execute the algorithm.
    https://arxiv.org/abs/1208.0928
  • 1,000 to 10,000
    It takes a minimum of thirteen physical qubits to implement a single logical qubit. A reasonably fault-tolerant logical qubit that can be used effectively in a surface code takes of order 10³ to 10⁴ physical qubits.
    We find that nq increases rapidly as p approaches the threshold pth, so that a good target for the gate fidelity is above about 99.9% (p <~ 1/10³). In this case, a logical qubit will need to contain 10³ − 10⁴ physical qubits in order to achieve logical error rates below 1/10¹⁴ − 1/10¹⁵
    https://arxiv.org/abs/1208.0928
  • 1,000
    Although that may really mean “thousands” rather than only roughly 1,000.
    Day 1 opening keynote by Hartmut Neven (Google Quantum Summer Symposium 2020)
    https://www.youtube.com/watch?v=TJ6vBNEQReU&t=1231
    “A logical qubit is a collection of a thousand physical qubits.” The exact 1,000 number was stated by Eric Lucero, Lead Quantum Mechanic of the Google Quantum AI team, on April 14, 2022.
    Google Quantum AI Update 2022
    https://youtu.be/ZQpVRIhKusY?t=1418
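
As a quick sanity check on the arithmetic in the Gidney and Ekerå citation above, using only the numbers quoted there:

```python
import math

# Reproduces the arithmetic quoted in the Gidney & Ekerå citation above
# (arXiv:1905.09749): logical qubits = 3n + 0.002 * n * ln(n) for n = 2048,
# against a total of 20 million physical qubits.
n = 2048
logical = 3 * n + 0.002 * n * math.log(n)
print(round(logical, 2))                       # 6175.23 -> round up to 6,176 logical qubits
print(round(20_000_000 / math.ceil(logical)))  # ~3,238 physical qubits per logical qubit
```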

Hundreds:

  • None. I haven’t seen any references using hundreds of physical qubits for a single logical qubit.

Dozens:

A dozen or so:

Less than a dozen:

Formulas from IBM paper for physical qubits per logical qubit

IBM published a paper in 2019/2020 which contains some formulas for calculating physical qubits per logical qubit for a couple of approaches to quantum error correction. This is where the 57 and 65 numbers came from.

The 2019/2020 paper:

The IBM researchers evaluated two approaches:

  1. Heavy hexagon code. 57 physical qubits per logical qubit.
  2. Heavy square code. 65 physical qubits per logical qubit.

My goal here is not to delve into the details of the formula, but simply to show how the two numbers were derived.

Heavy hexagon code

The basic formula:

  • (5 * d² - 2 * d - 1) / 2

So for d = 5:

  • (5 * 5² - 2 * 5 - 1) / 2
  • = (125 - 10 - 1) / 2
  • = 114 / 2
  • = 57 physical qubits.

From another perspective:

  • There are d² data qubits
  • So for d = 5, 5² = 25

And

  • There are (d + 1) / 2 * (d - 1) syndrome measurement qubits.
  • So for d = 5, that’s 6 / 2 * 4 = 3 * 4 = 12 syndrome measurement qubits.

And

  • There are d * (d - 1) flag qubits.
  • So for d = 5, that’s 5 * 4 = 20 flag qubits.

For a total of:

  • Data qubits plus syndrome measurement qubits plus flag qubits.
  • 25 + 12 + 20
  • = 57 physical qubits.

And I verified those numbers by counting the respective qubits in Figure 2 of the IBM paper.

Heavy square code

The basic formula:

  • d² data qubits.
  • 2 * d * (d - 1) flag and syndrome measurement qubits.

Or combined as:

  • Total: 3 * d² - 2 * d physical qubits per logical qubit.

So for d = 5:

  • 5² = 25 data qubits.
  • 2 * 5 * (5 - 1) = 10 * 4 = 40 flag and syndrome measurement qubits.
  • 25 + 40 = 65 total physical qubits per logical qubit.

Or calculate it as:

  • 3 * 5² - 2 * 5
  • = 3 * 25 - 10
  • = 75 - 10
  • = 65 total physical qubits per logical qubit.

And I verified those numbers by counting the respective qubits in the diagram in Figure 6 of the IBM paper:

  1. Data = 5 * 5 = 25
  2. Syndrome (dark) = 5 * 4 = 20
  3. Flag (white) = 4 * 5 = 20
  4. Total 25 + 20 + 20 = 65 total physical qubits per logical qubit.
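
For convenience, here is a small Python rendering of the two formulas above, confirming the 57 and 65 counts for d = 5 and showing how quickly the counts grow with code distance:

```python
def heavy_hexagon_qubits(d: int) -> int:
    # (5*d^2 - 2*d - 1) / 2 = d^2 data + (d+1)/2*(d-1) syndrome + d*(d-1) flag
    return (5 * d * d - 2 * d - 1) // 2

def heavy_square_qubits(d: int) -> int:
    # 3*d^2 - 2*d = d^2 data + 2*d*(d-1) flag and syndrome
    return 3 * d * d - 2 * d

for d in (3, 5, 7, 9):
    print(d, heavy_hexagon_qubits(d), heavy_square_qubits(d))
# d = 5 gives 57 and 65, matching the counts above; d = 9 already needs 193 and 225
```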

For now, 65 physical qubits per logical qubit is as good an estimate as any

There really is far too much uncertainty to come up with a solid estimate for the number of physical qubits needed to construct a logical qubit, but based on my reading, I would venture that 65 physical qubits per logical qubit is as good a provisional estimate as any.

Please don’t engrave that in stone or accept it blindly as eternal gospel, but I do think it’s a good working rule of thumb — for now, until research and benchmarking efforts play out.

Top question #5: Does quantum error correction guarantee absolute 100% perfect qubits?

Is perfection — absolutely zero errors — promised and guaranteed, or just modestly greater quality, or… exactly what?

It’s unclear at this stage whether quantum error correction will actually guarantee absolute 100% perfect qubits, or just a dramatic improvement, but short of true perfection.

Even classical computing does not guarantee absolute perfection, but errors are rare enough that few people ever notice. Most users don’t even use ECC (Error-Correcting Code) memory, which is generally found only in very high-end equipment, since standard commodity memory chips are already so close to perfection. I’m not sure what the error rate is for typical commodity classical hardware.

What threshold defines “close enough to perfection”? I have no idea. It’s unclear how many nines of reliability will be supported — 99.9% (3 9’s), 99.99% (4 9’s), 99.999% (5 9’s), 99.9999% (6 9’s), 99.99999% (7 9’s), or what?

Top question #6: Does quantum error correction guarantee infinite coherence?

Exactly how much improvement to decoherence is quantum error correction promising — infinite coherence or just a moderate improvement in coherence? I simply haven’t seen any clear, explicit statement in the literature detailing or even characterizing how much coherence is being promised or guaranteed.

Top question #7: Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?

Exactly how much improvement to gate errors is quantum error correction promising — perfection or just a moderate reduction in gate errors? I simply haven’t seen any clear, explicit statement in the literature detailing or even characterizing how much improvement is being promised or guaranteed.

Are there aspects of gate errors which are outside of the scope of quantum error correction, such as variability in the classical analog control circuitry?

Top question #8: Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?

Exactly how much improvement to measurement errors is quantum error correction promising — perfection or just a moderate reduction in measurement errors? I simply haven’t seen any clear, explicit statement in the literature detailing or even characterizing how much improvement is being promised or guaranteed.

Are there aspects of measurement errors which are outside of the scope of quantum error correction, such as variability in the classical analog control and readout circuitry?

Top question #9: What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?

Exactly how much improvement to mitigating external interference is quantum error correction promising — perfect isolation from external, environmental interference or just a moderate improvement? I simply haven’t seen any clear, explicit statement in the literature detailing or even characterizing how much improvement is being promised or guaranteed.

Or is much better environmental shielding a prerequisite for quantum error correction? Can quantum error correction really compensate for 100% of all environmental interference?

Top question #10: How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?

Even presuming that quantum error correction works perfectly for individual, isolated qubits, how exactly does quantum error correction work for multiple, entangled qubits, the so-called multi-qubit product states?

Exactly how much improvement is it promising? I simply haven’t seen any clear, explicit statement in the literature detailing or even characterizing how much improvement is being promised or guaranteed.

Are there any limits? Will quantum error correction work flawlessly for thousands of entangled qubits in a single product state? For example, Bell states (2 logical qubits), and GHZ and W states for 3 or more logical qubits, up through dozens, hundreds, or even thousands of entangled logical qubits.
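
For concreteness, here are those states written out at the logical level as plain amplitude vectors (a dictionary of basis states to amplitudes is just a convenient toy representation); this says nothing about how they would be encoded into physical qubits under quantum error correction, which is exactly the open question:

```python
import math

# Logical-level view only: the entangled states mentioned above, as amplitudes
# over computational basis states, ignoring any physical-qubit encoding.
bell = {"00": 1 / math.sqrt(2), "11": 1 / math.sqrt(2)}        # (|00> + |11>)/sqrt(2)
ghz3 = {"000": 1 / math.sqrt(2), "111": 1 / math.sqrt(2)}      # (|000> + |111>)/sqrt(2)
w3   = {b: 1 / math.sqrt(3) for b in ("001", "010", "100")}    # (|001> + |010> + |100>)/sqrt(3)

def n_qubit_ghz(n: int) -> dict[str, float]:
    """GHZ generalizes directly to n qubits: (|0...0> + |1...1>)/sqrt(2)."""
    return {"0" * n: 1 / math.sqrt(2), "1" * n: 1 / math.sqrt(2)}

print(n_qubit_ghz(5))   # still only two nonzero amplitudes, however many qubits
```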

Do we really need quantum error correction if we can achieve near-perfect qubits?

It will be debatable whether we need quantum error correction if we can engineer and mass produce near-perfect qubits.

Near-perfect may indeed be good enough for some algorithms and applications, but not good enough for others.

It may hinge on how near to perfect near-perfect really is. What, precisely, is the delta from a perfect qubit?

How many nines does your algorithm or application need? 99.9% reliability? 99.99%? 99.999%? 99.99999%? Who’s to say?

A more operational answer: build it and they will come. Some customers will accept your qubit accuracy even as others might reject it.

In any case, we’re not there yet and nobody expects near-perfect qubits in the reasonably near future, so we have to maintain the focus on pursuing quantum error correction.

Hopefully, one day, we will have machines that both support full quantum error correction and provide near-perfect qubits. Then the individual application developer can decide which they need for their particular circumstances.

Will qubits eventually become good enough that they don’t necessarily need quantum error correction?

Basically the same as the preceding question (Do we really need quantum error correction if we can achieve near-perfect qubits?), but here we can focus on asking and answering this question at each stage of improvement of qubits. The answer will remain “no” (qubits are not good enough, so quantum error correction is still needed) for the indefinite future, but maybe someday, before we actually achieve quantum error correction and logical qubits, the answer might become “yes” (qubits are close enough to near-perfect to get by without error correction), at least for some applications.

Whether the answer ever becomes “yes” for all applications is a more difficult question.

Whether the answer ever becomes “yes” for 90%, 75%, or even 50% of all applications is still a very difficult question, but not as difficult as for 100% of all applications.

Which will win the race, quantum error correction or near-perfect qubits?

We really do need true logical qubits with full quantum error correction, but since that outcome is still far beyond the distant horizon, it’s reasonable to pin some degree of hope on near-perfect qubits which might in fact be good enough to serve most needs of many quantum applications — or at least that’s the conjecture. So, given that possibility, which is likely to come first, full quantum error correction or near-perfect qubits?

The answer: Unclear.

Sure, qubit quality will improve as time passes. Eventually we will even get to the stage where the error rate is almost low enough for advanced algorithms and applications to almost run properly with just raw physical qubits with no error correction. But, for many applications almost won’t be good enough. Near near-perfect just won’t cut it until near-near is actually near enough.

It is also possible that the error rate for the near-perfect qubits needed for quantum error correction will still be too high for typical algorithms and applications. In which case, near-perfect arrived at the finish line first, but couldn’t do the job without full quantum error correction.

It may well be that near-perfect gets to the finish line first and then has to sit and wait until hardware capacity and capabilities finally advance to the level where high-capacity logical qubits are actually supported. The number of physical qubits may simply be too high to achieve a sufficient number of logical qubits to achieve quantum advantage for production-scale applications for quite some time even after small numbers of near-perfect qubits and small numbers of logical qubits become generally available.

Personally, I suspect that near-perfect qubits will be sufficient for a small number of the most sophisticated, elite algorithm designers and application developers, using manual error mitigation techniques on near-perfect physical qubits, to reach the ENIAC Moment for quantum computing, where quantum advantage for a production-scale real-world application can be demonstrated. Quantum error correction may have also reached the finish line for small numbers of logical qubits, but the vast numbers of physical qubits needed to support moderate numbers of logical qubits may simply not yet be available.

A milestone will have been reached with the ENIAC Moment for quantum computing, but only a very limited number of applications and organizations will be able to exploit such elite use of the technology. Most organizations and most designers and developers will have to wait for the FORTRAN moment of quantum computing, when sufficient near-perfect physical qubits are available to enable enough logical qubits with full quantum error correction to support production-scale real-world applications. At that stage, more-average, non-elite organizations and staff will rely on much higher-level programming models, tools, and languages rather than having to be comfortable working directly with less-than-perfect physical qubits.

When will logical qubits be ready to move beyond the laboratory curiosity stage of development?

As mentioned earlier, quantum error correction itself is not yet even at the stage of being an actual laboratory curiosity, so logical qubits cannot yet be a laboratory curiosity either.

Logical qubits remain a theoretical concept, on paper only.

Sure, recent machines from Google and IBM have implemented aspects of designs needed for surface codes, but they are still well short of actually implementing surface codes, quantum error correction, or complete and fully-functional logical qubits.

How close to perfect is a near-perfect qubit?

There are two distinct purposes for near-perfect qubits:

  1. To enable quantum error correction for logical qubits.
  2. To enable applications using raw physical qubits on NISQ devices.

Not every application will need the same number of nines of qubit reliability.

The degree of perfection needed for an application on a NISQ device will vary greatly from application to application:

  1. Shallow depth circuits will require fewer nines.
  2. Deeper circuits will require more nines.

Granted, generalization is risky, but generally, I would say that near-perfect qubit reliability will lie between three and five nines — 99.9% to 99.99% to 99.999%. Greater reliability would be highly desirable, but much harder to achieve.
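
As a crude illustration of why circuit depth drives the number of nines needed: if we assume independent gate errors and no error mitigation, the chance that a circuit runs without any error is roughly the per-gate fidelity raised to the power of the gate count. That assumption is a simplification of mine, but it shows the shape of the problem:

```python
# Crude illustration only: assumes independent gate errors and no error
# mitigation, so the chance of an error-free run is roughly fidelity ** gates.
def circuit_success(fidelity: float, gate_count: int) -> float:
    return fidelity ** gate_count

for nines in (3, 4, 5):              # 99.9%, 99.99%, 99.999% per-gate fidelity
    f = 1 - 10 ** -nines
    for gates in (100, 1_000, 10_000):
        print(f"{nines} nines, {gates:>6} gates: "
              f"~{circuit_success(f, gates):.0%} chance of an error-free run")
```

At three nines a 1,000-gate circuit succeeds only about a third of the time, while five nines keeps even a 10,000-gate circuit around 90%, which is the sense in which deeper circuits require more nines.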

How close to perfect must near-perfect qubits be to enable logical qubits?

The degree of perfection needed to enable logical qubits will depend on the specific details of the specific quantum error correction scheme being used. There are various schemes, each with its own requirements. The details are beyond the scope of this paper. For the purposes of this paper, the answer is unknown — it’s one of the open questions that need to be answered before we can expect logical qubits to become available.

Technically there is a critical error rate, called the error threshold, which must be satisfied for a given quantum error correction scheme to achieve workable logical qubits.

To some extent it is a tradeoff between degree of perfection and number of physical qubits needed to implement each logical qubit:

  1. Physical qubits which are closer to perfection (more nines) mean that fewer are required for each logical qubit.
  2. Physical qubits which are further from perfection (fewer nines) mean that more are required for each logical qubit.

How many nines will become the gold standard for near-perfect qubits to enable logical qubits remains to be seen.

How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?

Single qubit gates are relatively easy to implement compared to 2-qubit gates. There is much more room for error when two qubits are involved.

To me it is an open question as to what level of perfection is needed to enable error-free operation of quantum logic gates for a pair of logical qubits.

Another way of looking at the issue is that 2-qubit gates need the same level of perfection as single-qubit gates, but since that degree of perfection is much harder to achieve for 2-qubit gates, single-qubit gates will end up being that much more error-free as a free side effect of achieving near-perfection for 2-qubit gates.

When can we expect near-perfect qubits?

When we can expect near-perfect qubits will depend to some degree on how close to perfect we need near-perfect to be.

For the purposes of this paper, the answer is unknown — it’s one of the open questions that need to be answered before we can expect logical qubits to become available.

Are perfect qubits possible?

Near-perfect qubits are something I can relate to, but is a perfect qubit even theoretically possible? Maybe, but I’m not aware of anybody arguing in favor of such a concept.

Sure, we can quibble about how many nines of reliability are close enough to perfect 100% reliability to not matter anymore, but I’d argue that we should just accept near-perfect as the goal and then work to achieve it.

Whether three or four or five nines is sufficient, or whether eight or nine or ten nines are needed is beyond my level of interest, at least at this stage.

After all, that’s the whole point of logical qubits — it is the logical qubit which gets you to 100% reliability, not individual physical qubits.

For me, near-perfect qubits are good enough.

Obsessing over perfection is not needed, in my view.

How close to perfect will logical qubits really be?

Will logical qubits ever be absolutely 100% perfect, with absolutely zero errors, or will there be some tiny residual error, much as we have with classical computer hardware? As far as I can tell, the answer is indeterminate at this time.

Who knows, maybe perfection will be achieved.

But I do think that it is more likely that there will be some minor residual error rate.

Whether the residual reliability is four 9’s (99.99%), five 9’s (99.999%), or even eight 9’s (99.999999%) or better, it is likely to be close enough to perfect for most applications.

But doesn’t IonQ claim to have perfect qubits?

I was a little surprised by a quote from IonQ in the news a couple of months ago (October 2020) where they explicitly mentioned “perfect qubits.” I was stunned. I wondered what exactly they might be talking about.

Here’s the quote, from their own press release:

  • The system smashes all previous records with 32 perfect qubits with gate errors low enough to feature a quantum volume of at least 4,000,000. Getting down to the technology brass tacks: the hardware features perfect atomic clock qubits and random access all-to-all gate operations, allowing for efficient software compilation of a variety of applications.
  • https://ionq.com/posts/october-01-2020-most-powerful-quantum-computer

Are their qubits truly perfect?

Or are their qubits really simply near-perfect by my definition, maybe two to four nines, and they’re simply exaggerating a little? It’s so hard to say! It is indeed very unlikely that IonQ actually has perfect qubits, but those are their own words.

But, when it comes to marketing, anything goes. Or so it seems.

If all 32 of IonQ’s qubits were indeed perfect, with full any-to-any connectivity, the quantum volume would be about four billion (2³²), not four million (which would be 2²², not 2³²).
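
The arithmetic, for anyone who wants to check it, assuming IBM’s definition of quantum volume as 2^n for the largest n-by-n square circuit that passes the benchmark:

```python
import math

# Quantum volume is 2^n for the largest n-by-n square circuit that passes
# the benchmark (IBM's definition), so:
print(2 ** 32)                  # 4,294,967,296 -- "about four billion"
print(2 ** 22)                  # 4,194,304     -- "about four million"
print(math.log2(4_000_000))     # ~21.93, i.e. a quantum volume of 4,000,000 corresponds to ~22 qubits
```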

Being generous, I could surmise that maybe they simply mean that their qubits are good enough to enable efficient quantum error correction. That would be reasonable, but… they didn’t say that.

I’ll go with that generous presumption, for now, but I do find this kind of wildly-extravagant hype to be really annoying, and I’m being generous there.

When can we expect logical qubits of various capacities?

I honestly don’t know when we can expect logical qubits of any given capacity. But these are the interesting milestones to be met (a rough translation into physical qubit counts follows the list):

  1. 5 logical qubits — basic demonstration of logical qubits
  2. 8
  3. 12
  4. 16
  5. 20
  6. 24 — demonstrate some realistic algorithms
  7. 28
  8. 32 — where it starts to get interesting
  9. 40 — where I think we need to get to as a major milestone
  10. 48 — where algorithms will start to impress people
  11. 54 — the edge of quantum advantage
  12. 64 — quantum advantage for sure
  13. 72 — starting to impress people on quantum advantage
  14. 80
  15. 92
  16. 100 — beginning of really impressive results
  17. 256 — maybe the gold standard for results to establish quantum supremacy for real-world applications
  18. 512
  19. 1024
  20. 2K
  21. 4K — potential for Shor’s algorithm for 1K public encryption keys
  22. 8K — potential for Shor’s algorithm for 2K public encryption keys
  23. 16K — potential for Shor’s algorithm for 4K public encryption keys
  24. 32K
  25. 64K
  26. 256K
  27. 1M — unclear what people might actually use 1M qubits for
  28. 10M
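
Here is a rough translation of a few of those milestones into raw physical qubit counts, using two per-logical-qubit estimates that appear earlier in this paper (the 65 physical qubits per logical qubit from the IBM heavy square code at d = 5, and the 1,000 figure from the Google citation):

```python
# Rough translation of a few milestones into raw physical qubit counts, using
# two per-logical-qubit estimates that appear earlier in this paper:
#   65 (IBM heavy square code at d = 5) and 1,000 (the Google figure cited above).
milestones = [5, 32, 54, 64, 100, 256, 1024, 4096]

for logical in milestones:
    print(f"{logical:>5} logical qubits -> {logical * 65:>9,} physical (at 65 per logical), "
          f"{logical * 1_000:>10,} physical (at 1,000 per logical)")
```

Even the 32-logical-qubit milestone implies a machine with over 2,000 high-quality physical qubits at the optimistic 65-per-logical-qubit estimate.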

When can we expect even a single logical qubit?

Nobody knows for sure, but maybe sometime within the next two or three years we could see experimental machines in the laboratory which implement quantum error correction for a single logical qubit. And that would be predicated on an error correction code that required no more than a few dozen or so physical qubits. Any approach which required hundreds or thousands of physical qubits for a single logical qubit would still be more than three years away.

Although it doesn’t address the question per se, IBM obliquely touched on the topic in their recent (September 2020) quantum computing roadmap.

Condor is the code name for their 1,121-qubit machine planned for 2023, three years from when I write this in December 2020. Whether they mean a single logical qubit or possibly 5, 8, 12, 16, 20, or more logical qubits is unclear. Whether they mean that their 127-qubit and 433-qubit machines would have no logical qubit support is unclear as well. They could have meant that only with over 1,000 qubits would they have enough physical qubits to support enough logical qubits, say 12 to 20, to run interesting quantum algorithms, but that’s mere speculation on my part. IBM mentioned neither the number of logical qubits nor the number of physical qubits per logical qubit.

Personally I expect that we’ll see some initial implementation of at least a single logical qubit within the next one to two years. Implementing 5 to 8 logical qubits could take another year or two beyond that.

When can we expect 32 logical qubits?

To me, 32 logical qubits might represent a system which is on the verge of being usable for production-scale applications. Although not quite at the capacity needed to achieve true quantum advantage, it would be close enough that successful algorithms and applications can easily be envisioned to achieve quantum advantage with just a few increments of improvement, to 40, 48, 54, and 64 qubits.

In truth, even today, few practical applications use even 20 qubits, but a large part of that is because larger algorithms simply won’t function properly on today’s NISQ machines.

It could be that 40, 48, 54, or 64 qubits would represent better targets for production-scale applications, but for now, I feel comfortable targeting only 32. Besides, that’s only the initial target, with 36, 40, 44, 48, 54, 64, and 72 to follow.

In short, with 32 logical qubits we can start making real, substantial progress on real, practical applications. Anything less than 32 (even 20 or 24 or 28) and we are just experimenting and still just a mere laboratory curiosity.

But when can we expect 32 logical qubits? That’s completely unknown at this time. I would say:

  1. Not in the next two years.
  2. Possibly within three to five years.
  3. Certainly within seven to ten years.

What is quantum error correction?

Quantum error correction is a method for assuring that the correct values of qubits can be recovered even in the face of errors — the errors can be corrected. Although it is possible to use explicit, manual techniques to recover from errors, the primary focus is the use of a quantum error correcting code.

The technical details of quantum error correction are beyond the scope of this informal paper.

For more information, consult the Quantum error correction Wikipedia article. Or many of the papers listed in the References and bibliography section of this paper.

What is a quantum error correcting code?

A quantum error correcting code is a method for encoding the value of a logical qubit among a collection of physical qubits, such that errors within any of the physical qubits can be corrected so that the value of the logical qubit can be maintained even if physical qubits encounter errors.

A surface code is an example of a quantum error correcting code.

The technical details of quantum error correcting codes are beyond the scope of this informal paper.
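
That said, the core intuition is simple enough to sketch. This toy Python example shows only the classical-style intuition behind the three-qubit bit-flip repetition code (encode redundantly, then correct by majority vote); a real quantum code such as the surface code measures syndromes without reading out the encoded data, but the flavor is the same:

```python
import random

# Toy illustration only: the classical-style intuition behind the three-qubit
# bit-flip repetition code. A real quantum code (e.g., the surface code)
# measures syndromes without reading out the encoded data, but the core idea
# is the same: encode redundantly, detect, correct.

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]                  # one logical bit -> three physical bits

def noisy_channel(bits: list[int], p_flip: float) -> list[int]:
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)              # majority vote corrects any single flip

random.seed(1)
trials, p = 100_000, 0.01
raw_errors = sum(random.random() < p for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
print(raw_errors / trials)      # ~0.01, the unprotected error rate
print(coded_errors / trials)    # ~0.0003, roughly 3 * p**2 -- errors only when 2+ bits flip
```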

Is NISQ a distraction and causing more harm than good?

People are getting a lot of mileage out of NISQ devices to experiment with quantum computing and publish academic papers, but are they actually getting any closer to the really important goal of quantum advantage? I don’t think they are, and probably never will be. If quantum advantage cannot be achieved with NISQ devices across a broad variety of application categories, then NISQ will have turned out to be a dead end, a technological cul-de-sac.

I know it’s a provocative statement, but I feel compelled to ask whether NISQ is creating a lot of bad habits and unproductive work which is not destined to achieve quantum advantage or quantum supremacy.

Sure, there may indeed be some niche applications which can actually achieve quantum advantage, but if 90% or 95% of developers are going to fail to achieve quantum advantage, that’s not good. Not good at all. And would lead to a quantum winter, where progress slows to a crawl and good people leave the sector in droves.

On the bright side, even if NISQ devices don’t directly lead to quantum advantage, they may be a needed stepping stone from a hardware evolution perspective. NISQ devices may eventually evolve into the kind of near-perfect physical qubits which are needed as the foundation for automated quantum error correction and logical qubits. But that’s not the path that most users of NISQ devices are on.

I fear that NISQ devices may have diverted far too much research away from development of advanced algorithms which are more likely to exploit logical qubits.

Energy could have been spent more productively using classical quantum simulators and pushing harder for even larger simulators, as well as on the scalability of algorithms, including how to test and prove scalability so that work on algorithms on sub-50-qubit simulators can be extrapolated beyond 50 qubits. Then we’d have a library of algorithms which are much more ideally suited to exploit logical qubits as they become available.

From the get-go, NISQ should have been couched as a hardware development path towards the near-perfect qubits needed for cheap and efficient logical qubits, rather than a platform for directly implementing applications.

The only notable algorithm category for NISQ should have been algorithms to implement quantum error correction and logical qubits.

All other quantum algorithms should have been couched either in terms of purely logical qubits or near-perfect physical qubits, rather than noisy qubits, as is the case today.

Noisy qubits should never have been seen as viable or as a stepping stone for application-level algorithms, since quantum advantage is the only acceptable end state for quantum algorithms; we already have classical computers for sub-advantage algorithms.

So, where do we go from here? Continue NISQ hardware development, but make it clear that the real goal is near-perfect qubits to enable automatic quantum error correction to enable logical qubits. And focus application-level algorithm design on classical quantum simulators, not constantly trying to shoehorn and otherwise distort algorithms to misguidedly fit into inappropriate NISQ devices.

NISQ as a stepping stone to quantum error correction and logical qubits

Just to reemphasize the point made in the previous section: NISQ could be recast as a progression of stepping stones on the path to quantum error correction and logical qubits, rather than as a platform on which people are expected to develop applications on bare NISQ devices without full automatic quantum error correction.

What is Rigetti doing about quantum error correction?

I was curious what Rigetti was up to, so I was looking at their job descriptions and found a reference to quantum error correction. It’s just a single position, but that’s a start.

The job posting is here:

It lists key Responsibilities as:

  1. Demonstrate practical improvements in quantum processor performance through error mitigation, error detection and error correction.
  2. Develop programming tools and compiler based optimization frameworks for mitigating quantum hardware errors on near-term machines.
  3. Establish hardware and software requirements for practical fault-tolerant codes to enable robust intermediate scale machines.
  4. Implement tests of small logical qubits and fault-tolerant codes.
  5. Collaborate with quantum hardware engineers on system benchmarking and error analysis.
  6. Collaborate with application scientists on quantum circuit construction and error analysis.
  7. Organize and lead directed research programs with external partners in academia, industry, and national labs.

It says they are looking for:

  1. Experience in one or more of the following: digital quantum error mitigation, quantum error correction, quantum fault-tolerance, classical or quantum channel coding.
  2. Experience in implementing error correction on hardware or collaborating with experimentalists or electrical engineers.

I’m glad they’re attempting to do something, but it seems a bit halfhearted, if you ask me.

On the flip side, maybe they actually recognize that implementing full quantum error correction is years away, so as a commercial firm they are not yet ready to fully and deeply commit to a technology which is still a research project.

That said, I appreciate their keen interest in wanting to “Organize and lead directed research programs with external partners in academia, industry, and national labs.” I much prefer that quantum error correction be focused on academic research at this stage, rather than have it get hopelessly distorted by a too-ambitious and premature commercial productization effort.

Is it likely that large-scale logical qubits can be implemented using current technology?

If I look at the kinds of systems that are being produced and proposed using current qubit technologies, including superconducting transmon qubits and trapped-ion qubits, they simply don’t appear to be scalable to the kind of very large numbers of near-perfect qubits needed to implement logical qubits in large numbers, such as hundreds of logical qubits, or even just 100 or 64. To me, there is a crying need for much more sophisticated qubit technologies.

What types of criteria would need to be met to achieve large-scale logical qubits? Unknown at this time, but possibly:

  1. Much simpler qubits. So they can be much smaller. So that many more can be placed on a single chip.
  2. Modular design. So a significant number of chips can be daisy-chained or arranged in a grid, or stacked, or some other form of modular qubit interconnection.
  3. Dramatically improved connectivity. Maybe true, full any-to-any connectivity is too much to ask for, but some solution other than tedious, slow, and error-prone SWAP networks.
  4. Other. Who knows what other criteria. Beyond the scope of this informal paper.

Maybe some modest innovations to current qubit technologies could do the trick, but I’m not so sure.

I’d be in favor of dramatically expanded funding for research in basic qubit technology.

Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?

I don’t have any answer here, but it just seems to me that there might be parameters or settings which an algorithm designer or application developer might wish to tune to impact the results.

Generally speaking, I don’t find one-size-fits-all solutions to be optimal for anyone.

What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?

As with the preceding question, I don’t have an answer, but it seems to me that there might be parameters or settings which an algorithm designer or application developer might wish to tune to impact the results.

Maybe the developer might wish to indicate the degree of perfection or approximation that would be acceptable in the results, on the theory that less perfection and more granular approximation could lead to greater performance or maybe greater capacity of logical qubits.

Choosing whether or when to drop down into raw physical qubits might be another option. Some portions of some computations might simply not need greater perfection than the near-perfect physical qubits.

I have no idea what other settings might be appropriate. This section and question is really just a placeholder, for now.

What do the wave functions of logical qubits look like?

I am curious what the wave functions of entangled logical qubits look like. I don’t recall seeing this topic discussed in any of the academic papers I looked at.

Technically, an application developer shouldn’t need to care, but… who knows.

On the flip side, algorithm designers should care about wave functions at the logical level, but I haven’t seen any discussion about the relationship between the logical wave function and the underlying physical wave functions.

In fact, I haven’t even seen any mention of the concept of a logical wave function. I imagine that the concept must or should exist, but I simply haven’t seen it.

In particular, I’m curious what the physical wave functions for physical qubits would look like for the Bell states of two entangled logical qubits.

I’m also curious about the physical wave functions for physical qubits for logical qubits enabled in GHZ and W states. Start with three logical qubits. How do the physical wave functions evolve as more and more qubits are added to the entanglement, far beyond three qubits — 8, 16, 32, 128, hundreds, or even thousands of qubits. Are there any practical limits to how entangled logical qubits can get?

Are all of the physical qubits of a single logical qubit entangled together?

I simply don’t know the answer for sure, but it seems as if they would have to be.

How many wave functions are there for a single logical qubit?

One? Or more than one?

One if all of the physical qubits must be entangled together.

I just don’t know what theory should apply here. Should there be two levels of wave functions — one at the physical level and one at the logical level?

For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?

Product states for a large number of qubits in a Hadamard transform are complicated enough, but what happens to the complexity of the product states when each logical qubit consists of k physical qubits? Is it simply 2^n times 2^k, or is it more complicated than that?

What is the performance cost of quantum error correction?

Quantum error correction doesn’t come for free, but the actual net cost is unclear and unknown at this time. There are two main components to the performance cost:

  1. Execution speed for quantum logic gates. Each logic gate on a logical qubit must be expanded to a large number of gate operations on a large number of physical qubits. Details unknown at this time.
  2. Number of physical qubits needed to implement each logical qubit. This impacts system capacity for a given amount of chip real estate, which reduces the capacity of qubits available to solve application problems.

What is the performance of logical qubit gates and measurements relative to NISQ?

I imagine that there must be some overhead to support logical qubits for execution of quantum logic gates and for measurements.

How exactly would the performance for logical qubits compare to physical qubits for the same operations? Is it a relatively minor overhead, a moderate overhead, or a very heavy overhead — 2–5%, 10–25%, 100%, 200%, 1,000% (10X), or even higher?

Is it a direct function of the number of physical qubits per logical qubit? Is it a linear relationship? Exponential? Or what?

How is a logical qubit initialized, to 0?

Initializing a single physical qubit to 0 is rather simple, but there is the possibility that decoherence or interference could cause the 0 to flip to a 1. This suggests to me that a logical qubit must have some rather sophisticated entanglement to assure that the 0 state will be maintained against decoherence and environmental interference.

Again, an application developer shouldn’t have to worry about such details, but I am curious. I suspect that the details could give me some sense of confidence in how logical qubits work overall.

What sort of performance hit occurs? How much of the initialization of physical qubits can occur in parallel and how much must be sequential?

What happens to connectivity under quantum error correction?

I have a lot of uncertainty and questions about what happens to qubit connectivity under quantum error correction and logical qubits.

Overall, will connectivity be the same, a little better, a little worse, much better, or much worse than working with raw, uncorrected physical qubits?

Some subsidiary questions:

  1. How does connectivity scale under quantum error correction?
  2. How do SWAP networks scale under quantum error correction?
  3. What is the performance of SWAP networks under quantum error correction? This may be the general question of the impact of quantum error correction on gate performance, as well as any additional issues peculiar to SWAP.
  4. How well does connectivity scale for more than a million logical qubits? Or scaling in general for the use of SWAP networks to achieve connectivity.
  5. Will quantum error correction provide 100% error-free full any-to-any connectivity, even if it does still require SWAP networks?
  6. Can swap networks be automatically implemented down in the firmware so that algorithms and applications can presume full any to any connectivity — with no downside or excessive cost?

How useful are logical qubits if there is still only weak connectivity?

Presuming that there is no native any-to-any connectivity (such as is available on a trapped-ion device), exactly how useful will logical qubits be? Granted, algorithms with only weak connectivity requirements will be fine, but more complex algorithms requiring more extensive connectivity may still be problematic.

And what would weak connectivity mean for 2-qubit gates on logical qubits?

I suppose it depends on whether SWAP networks, used to overcome weak connectivity, are efficient — and as fully error-free as the logical qubits. But then that raises the question of how efficient SWAP networks are for logical qubits.

Are SWAP networks still needed under quantum error correction?

SWAP networks are sequences of SWAP quantum logic gates used to move the quantum states of two qubits closer together so that they can be used for a two-qubit quantum logic gate on quantum computers which have limited connectivity — they don’t support full any-to-any connectivity for non-adjacent pairs of qubits. SWAP networks are a fact of life on NISQ quantum computers, but are they still needed when quantum error correction is used to implement logical qubits?

I fear that the answer is yes, SWAP networks will still be needed for logical qubits, but I don’t know with certainty. As far as I can tell, there is nothing in the definition of quantum error correction which overcomes limited, adjacent-only connectivity.

How does a SWAP network work under quantum error correction?

Quantum error correction compensates for errors, but it doesn’t help with the other major obstacle to developing complex quantum algorithms — very limited connectivity (for superconducting transmon qubit devices, not trapped-ion devices), which requires that algorithm designers resort to SWAP networks to shuffle qubits around so that two qubits are adjacent or close enough that they can be directly connected for a two-qubit quantum logic gate.

The mechanics of a SWAP network are fairly straightforward, at least for two single physical qubits, but what exactly has to transpire to connect two logical qubits which are not physically adjacent?
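
For reference, here is a sketch of the physical-qubit case only (the logical-qubit case is precisely the open question raised here): routing a qubit across a linear chain of nearest neighbors, where each SWAP is typically compiled into three CNOT gates.

```python
# Sketch of the physical-qubit case only -- the logical-qubit case is exactly
# the open question here. On a device with nearest-neighbor-only connectivity,
# a 2-qubit gate between distant qubits is preceded by a chain of SWAPs, and
# each SWAP is typically compiled into three CNOT gates.

def swap_chain(src: int, dst: int) -> list[tuple[int, int]]:
    """SWAPs that walk the state of qubit `src` until it sits next to `dst` on a line."""
    step = 1 if dst > src else -1
    return [(q, q + step) for q in range(src, dst - step, step)]

swaps = swap_chain(0, 7)    # route qubit 0's state next to qubit 7
print(swaps)                # [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
print(len(swaps), "SWAPs ~", 3 * len(swaps), "CNOTs before the 2-qubit gate can even run")
```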

Technically, algorithm designers and application developers won’t need to worry about all of the mechanics which will automatically happen under the hood to implement a SWAP network, but still, it would be nice to know, especially if there is a severe impact on performance, or other notable implications which might impact an application in some way.

From my perspective as a technologist, at a minimum I have an idle curiosity, but my main interest is impact on performance. If it’s free, cheap, and fast, that’s great, but I’m suspicious and skeptical — it just seems to me that there must be an incredible amount of work needed to perform a SWAP network for two logical qubits, especially over a distance of dozens, hundreds, or thousands of logical qubits.

I haven’t seen a paper mentioning this topic — implementation of SWAP networks for logical qubits under quantum error correction.

And then there are questions about the performance of SWAP networks for logical qubits.

How efficient are SWAP networks for logical qubits?

I would really like to understand any performance impact of SWAP networks for logical qubits, especially when a relatively large number of physical qubits are needed for a single logical qubit.

And I’d like to see a direct comparison of performance of a SWAP network for logical qubits compared to physical NISQ qubits.

Also, I’d like to see confirmation of my expectation that a SWAP network for logical qubits would be guaranteed to be error-free. It should be, but I haven’t seen such a guarantee — in writing.

What are the technical risks for achieving logical qubits?

I honestly don’t know all of the technical risks, but these are some that I expect are relevant:

  1. Basic theory. Is it really sound? How can we know?
  2. Evolution of basic theory. Always newer and better ideas appearing and evolving. Risks for sticking with current approach vs. switching to a newer, unproven approach.
  3. Achieving near-perfect qubits with sufficient 9’s.
  4. Firmware for gate operations. Particularly attempting many operations in parallel or sequencing rapidly enough.
  5. Performance.
  6. Granularity maintained. For probability amplitude and phase.

The hope and even expectation is that research programs and commercial projects will identify and characterize these and other risks as they consider and write about their efforts.

How perfectly can a logical qubit match the probability amplitudes for a physical qubit?

I can imagine three possibilities for the probability amplitudes of a logical qubit:

  1. They very closely match the probability amplitudes for a physical qubit.
  2. They are somewhat less accurate than the probability amplitudes for a physical qubit.
  3. They are somewhat more accurate than the probability amplitudes for a physical qubit.

Ultimately the question is whether there may be a loss of granularity or precision of probability amplitudes for logical qubits compared to physical qubits.

Can probability amplitude probabilities of logical qubits ever be exactly 0.0 or 1.0 or is there some tiny, Planck-level epsilon?

The essence of the question is whether there is some inherent quantum mechanical uncertainty in probability amplitudes, some epsilon, so that the probability can never be exactly 0.0 or 1.0, and that any probability p will actually be a range of p plus or minus epsilon. So that an estimated probability of 0.0 would actually be in the range 0.0 to epsilon, and an estimated probability of 1.0 would actually be in the range of 1.0 minus epsilon to 1.0.

This question arises for physical qubits as well, but given the complexity of logical qubits the question may be more significant.

It could also be the case that there is a different epsilon for physical and logical qubits. But I just don’t know and haven’t seen a discussion of it in the literature.

What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?

How do the precision or granularity of probability amplitudes and phase of physical qubits and logical qubits compare to those of product states of entangled logical qubits?

  1. Do they increase, reduce, or stay exactly the same?
  2. Does stability of values of entangled qubits improve for logical qubits at the cost of the precision and granularity of probability amplitudes and phase, or is stability free with no impact on the precision and granularity of probability amplitudes and phase?

Does the stability of a logical qubit imply greater precision or granularity of quantum state?

Quantum error correction may indeed enable logical qubits to have greater stability (fewer errors, or even no errors), but that says nothing about the impact on the precision or granularity of the quantum state of the qubit in terms of phase and probability amplitude.

The impact is absolutely unclear. I have seen no affirmative statements or promises in any of the papers which I have perused.

There are three possibilities:

  1. Precision and granularity of probability amplitudes and phase is unchanged.
  2. It is better — greater precision and finer granularity.
  3. It is worse — less precision and coarser granularity.

In any case, it is not a slam-dunk that stability of the quantum state of a logical qubit will result in increased precision and granularity, such as is needed for quantum phase estimation, amplitude estimation, and quantum Fourier transforms.

Researchers for logical qubits need to come clean about this, and manufacturers of quantum computers need to clearly document the actual impact.

Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?

I just don’t know the answer here. A lot of trapped-ion work is too new and of too limited a capacity to begin thinking about quantum error correction.

On the flip side, trapped-ion qubits are generally viewed as more stable and more reliable, with significantly longer coherence times, resulting in something much closer to the near-perfect qubits required for quantum error correction anyway.

That said, I don’t get the impression that trapped-ion devices are close enough to near-perfect qubits to satisfy the demand for the perfection of logical qubits.

On the other hand, the much higher quality of trapped-ion qubits may be of sufficient quality for some fraction of applications.

Even so, that may not be enough to satisfy the requirements to achieve dramatic quantum advantage — 50 to 65 qubits with a significant circuit depth.

Do trapped-ion qubits need quantum error correction?

Trapped-ion qubits, such as from IonQ and Honeywell, are said to be more perfect, with far fewer errors, much longer coherence, and any to any connectivity, so… How much quantum error correction do they still really need? I honestly don’t know the answer.

If trapped-ion qubits are still not close enough to near-perfect for most applications, is there at least a simpler and less expensive form of quantum error correction for trapped-ion qubits? How much simpler and how much less expensive?

Or is it simply that trapped-ion qubits are further along the progress curve for lower error rates for individual physical qubits than superconducting transmon qubits, so that they could begin using quantum error correction sooner and that it would be the same method for error correction as for transmon qubits?

To date, neither IonQ nor Honeywell has published any roadmap or details with regards to quantum error correction or logical qubits. Nor have they even offered any hints.

I haven’t done an academic literature search on the topic for trapped-ion qubits, so I don’t know the state of research for quantum error correction on trapped-ion qubits.

Can simulation of even an ideal quantum computer be the same as an absolutely perfect classical quantum simulator, given that there may be some residual epsilon of uncertainty down at the Planck level for even a perfect qubit?

I don’t know the answer with certainty here, but I do suspect that there is some sort of inherent quantum mechanical uncertainty for both probability amplitudes and phase down at the Planck level.

As mentioned earlier, absolute 0.0 and 1.0 for probability amplitude probability may not be physically possible for a real qubit if there is some minimum epsilon of error.

The practical consequences may simply be that classical quantum simulators must use that epsilon, whatever it may be, possibly with some statistical distribution as a form of noise. Whether it’s an absolute constant or dependent on the specific implementation of the qubit hardware is unknown, to me.

How small must single-qubit error (physical or logical) be before nobody will notice?

Absolutely perfect, absolutely error-free may be (or obviously is) too high a bar to achieve, so the question is how small the single-qubit error rate needs to be before the vast majority of potential users of quantum computers would ever really notice.

After all, nobody (or very few) notices hardware errors anymore for classical computers, even though they aren’t absolutely zero.

Does anybody know what the error rate is for a typical classical computer?

Should that be the same standard for quantum computers?

Or do quantum computers deserve an even smaller error threshold?

Or maybe quantum computers can get by with a significantly higher error threshold since they are probabilistic by nature and most advanced algorithms will still use circuit repetitions to develop a probability distribution for results. I suspect this is true, but it should be studied to confirm.
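
One way to frame that study: for an outcome estimated from N circuit repetitions (shots), the statistical error alone is roughly sqrt(p(1-p)/N), so a residual hardware error rate can hide below the shot noise. A hedged illustration, assuming simple binomial statistics:

```python
import math

# Illustration only, assuming simple binomial statistics: an outcome with true
# probability p estimated from N shots has standard error sqrt(p * (1 - p) / N).
p = 0.5
for shots in (100, 1_000, 10_000, 100_000):
    std_err = math.sqrt(p * (1 - p) / shots)
    print(f"{shots:>7} shots: statistical error ~{std_err:.4f}")
# A residual hardware error rate of, say, 0.001 stays below the shot noise
# until the shot count reaches a few hundred thousand.
```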

What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?

The general claim is that quantum phase estimation (QPE) and quantum Fourier transform (QFT) simply aren’t feasible on NISQ devices due to noise and errors, coherence, and circuit depth.

The advent of quantum error correction and logical qubits should enable the use of algorithms based on quantum phase estimation (QPE) and quantum Fourier transform (QFT).

At least this is the theory, the promise. I’d like to see more explicit discussion of the topic, at least as logical qubits begin to become a reality, although I’d like to see discussion much sooner, so we know the impact on design of algorithms and applications.

What is the impact of quantum error correction on granularity of phase and probability amplitude?

Although quantum phase estimation (QPE) and quantum Fourier transform (QFT) will be enabled by quantum error correction, I have some question or concern about the reliance of QPE and QFT on very fine granularity of phase and probability amplitudes.

Note that gradations, granularity, and precision are roughly equivalent terms.

So, the following are equivalent terms:

  • Gradations of phase
  • Phase gradations
  • Granularity of phase
  • Phase granularity
  • Precision of phase
  • Phase precision

And the following are also equivalent terms:

  • Gradations of probability amplitude
  • Probability amplitude gradations
  • Granularity of probability amplitude
  • Probability amplitude granularity
  • Precision of probability amplitude
  • Probability amplitude precision

Phase and probability amplitudes may or may not be in exactly the same boat. I have no reason to believe that they will have different granularity, gradations or precision, but I don’t know either way for a fact.

They might be in reasonable shape, or not. It’s difficult to tell in advance until we can test the actual implementation of logical qubits and quantum error correction.

Technically, there is one difference between them: phase is a single real number, while a probability amplitude is a complex number, which can be expressed as a magnitude and a phase.

The great unknown question is whether quantum error correction increases absolute granularity, or diminishes it in favor of stability of logical qubits.

I have concerns about fine granularity of phase and probability amplitudes, which I discussed in this informal paper:

Remarks from that paper:

  1. Will quantum error correction (QEC) have any impact on phase granularity or limits on phase in general? Hard to say.
  2. Will a logical qubit support more gradations of phase, or fewer gradations, or the same?
  3. Will there be a dramatic increase in phase precision, possibly exponential, based on the number of physical qubits per logical qubit?
  4. Or will it be more of a least common denominator for all of the physical qubits which comprise a logical qubit?
  5. The theoreticians owe us an answer.
  6. And then it’s up to the engineers to build and deliver hardware which fulfills the promises of the theoreticians.

Comments or questions related to phase likely apply to probability amplitude as well. Or so I surmise. Confirmation is needed.

The bottom line is that we could presume that everything will be fine and wonderful once quantum error correction and logical qubits arrive, but it would be much better for the theoreticians and engineers to give us definitive answers and commitments well in advance so that algorithm design and application design can take such factors into account.

What are the effects of quantum error correction on phase precision?

See the preceding section — What is the impact of quantum error correction on granularity of phase and probability amplitude?

What are the effects of quantum error correction on probability amplitude precision?

See the preceding section — What is the impact of quantum error correction on granularity of phase and probability amplitude?

What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?

What exactly is the probability amplitude of an entangled computational basis state for a logical qubit since it is the physical qubits which are entangled?

I’m not sure if there is a distinction between a multi-qubit product state and a tensor product. Should this question be about product states or tensor products? Does it matter?

Critical question:

  • Doesn’t the redundancy and correction need to be across all qubits of the multi-qubit computational basis state, not simply within a single logical qubit?

Bell states are a good two-qubit example.

W and GHZ states are good examples for three or more qubits.
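
For concreteness, here is a small numpy sketch (my own illustration, written purely at the abstract statevector level) of the probability amplitudes of those states. Presumably, whatever the physical qubits are doing under quantum error correction, the logical-level amplitudes should still look exactly like this:

import numpy as np

# Amplitudes at the abstract, logical level; no error correction involved.
bell = np.zeros(4, dtype=complex)            # (|00> + |11>) / sqrt(2)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)

ghz = np.zeros(8, dtype=complex)             # (|000> + |111>) / sqrt(2)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

w = np.zeros(8, dtype=complex)               # (|001> + |010> + |100>) / sqrt(3)
w[0b001] = w[0b010] = w[0b100] = 1 / np.sqrt(3)

for name, state in (("Bell", bell), ("GHZ", ghz), ("W", w)):
    print(name, "probabilities:", np.round(np.abs(state) ** 2, 3))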

Are there any practical or theoretical limits on the number of logical qubits in a single computational basis state?

What does the theoretical claim that the complete quantum state of entangled qubits cannot be measured or reconstructed one qubit at a time mean when dealing with measurement of logical qubits vs. physical qubits?

If the quantum state of entangled qubits cannot be expressed as the quantum state of the individual qubits, how can quantum error correction for entangled logical qubits perform correction on multi-qubit product states (multiple logical qubits)?

I haven’t seen any examples in papers that I have glanced at.

How are multi-qubit product states realized under quantum error correction?

It might facilitate answering the preceding question (“What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?”) if we knew a little more (or a lot more) about how multi-qubit product states are realized under quantum error correction.

How exactly are the qubits within a single logical qubit entangled, and then how exactly are the qubits between two or more entangled logical qubits entangled?

It sure sounds as if it could get very complicated very fast. Hopefully there is a relatively simple realization.

What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?

This is simply a special case of the preceding question, but presents a good opportunity for specific, recognizable examples. The Bell states for two qubits, and the GHZ and W states for three, four, and more qubits.

At which stage(s) of the IBM quantum roadmap will logical qubits be operational?

I studied the IBM quantum roadmap carefully and although quantum error correction and logical qubits are briefly mentioned in the text, there’s no indication as to what support will be available at what stages.

My questions for IBM:

  1. What is the earliest stage at which even a single logical qubit will be demonstrated?
  2. What is the earliest stage at which two logical qubits and a two-qubit logical quantum logic gate will be demonstrated?
  3. What is the earliest stage at which five logical qubits will be demonstrated?
  4. What is the earliest stage at which eight logical qubits will be demonstrated?
  5. What is the earliest stage at which 16 logical qubits will be demonstrated?
  6. What is the earliest stage at which 24 logical qubits will be demonstrated?
  7. What is the earliest stage at which 32 logical qubits will be demonstrated?
  8. What is the earliest stage at which 40 logical qubits will be demonstrated?
  9. What is the earliest stage at which 48 logical qubits will be demonstrated?
  10. What is the earliest stage at which 54 logical qubits will be demonstrated?
  11. What is the earliest stage at which 64 logical qubits will be demonstrated?
  12. What is the earliest stage at which 72 logical qubits will be demonstrated?
  13. What is the earliest stage at which 96 logical qubits will be demonstrated?
  14. What is the earliest stage at which 128 logical qubits will be demonstrated?
  15. What is the earliest stage at which 256 logical qubits will be demonstrated?
  16. What is the earliest stage at which 1024 logical qubits will be demonstrated?
  17. At which stage will the number of qubits switch to being primarily measured as logical qubits rather than physical qubits? I think that in the current roadmap all numbers are for physical qubits.
  18. What will be the target or target range for the number of physical qubits per logical qubit for the various stages in the roadmap?
  19. What will be the default and/or recommended target for physical qubits per logical qubit?
  20. Will algorithms and applications be able to select and configure the number of physical qubits per logical qubit?

Does the Bloch sphere have any meaning or utility under quantum error correction?

Personally, I see very little utility in the Bloch sphere for anything other than visualizing operations on a single two-state qubit. The Bloch sphere is not appropriate for anything more, such as multiple qubits, entangled qubits, or more than two states in general. Given that a logical qubit using quantum error correction is represented by some significant number of physical qubits, the Bloch sphere does not appear to offer any interesting level of utility.

Is this really the case?

What can replace the Bloch sphere? Unknown.

Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?

Although many, most, or nearly all quantum applications will benefit greatly from quantum error correction and logical qubits, it may be possible that a significant fraction of applications would greatly benefit from a sort of poor man’s logical qubits or poor man’s quantum error correction — significantly lower quality quantum error correction, but enough to increase qubit quality by at least one or two or three 9’s, so that an average application would work well enough for most situations.

It is my personal opinion that it might be possible, but whether it is a reasonable prospect remains to be seen.

It may simply be that much higher-quality qubits, near-perfect qubits might do the trick with absolutely no need for full quantum error correction.

But whether some fractional variant of quantum error correction is practical is anybody’s guess at this stage.

For now, I think the focus should be on full quantum error correction for the long term, and incremental improvements in qubit quality for the near and medium term.

Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?

I don’t yet know enough about the technical details of quantum error correction to know whether it:

  1. Absolutely guarantees 100% error-free operation.
  2. Only achieves some small but non-zero error rate.
  3. Has a tunable error rate, based on how many physical qubits you wish to allocate for each logical qubit.

Even if tunable, is it discretely tunable across a range of settings, or are only a very few fixed settings available?

Even if tunable, is that an overall system parameter, or can each application or algorithm configure the final logical error rate or physical qubits per logical qubit?
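
For what it's worth, the surface code literature commonly quotes a rule of thumb in which the logical error rate falls roughly as (physical error rate / threshold error rate) raised to the power (d + 1)/2, where d is the code distance. That suggests correction is indeed tunable by choosing d. Here is a minimal sketch, with purely illustrative constants that are my own assumptions, not vendor commitments:

# Rule-of-thumb surface code scaling; all constants here are assumptions.
p_physical = 1e-3    # assumed physical error rate per operation
p_threshold = 1e-2   # assumed error threshold for the code
prefactor = 0.1      # assumed constant; varies by code and by paper

for d in (3, 5, 7, 9, 11):
    p_logical = prefactor * (p_physical / p_threshold) ** ((d + 1) // 2)
    print(f"d = {d:2d}  data qubits = {d * d:3d}  logical error rate ~ {p_logical:.1e}")

Under those assumed numbers, each increase of two in d buys roughly another factor of ten in logical error rate, at the cost of more physical qubits per logical qubit, which is exactly the kind of correctness versus cost dial this question is asking about.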

Here are some possible overall levels or categories of interest:

  1. Fast and low cost. But only a modest to moderate improvement in error rate and coherence.
  2. Modestly slower and modestly more expensive. But with a significant improvement in error rate and coherence.
  3. Moderately slower and moderately more expensive. But with a more dramatic reduction in error rate and extension of coherence.
  4. Much slower and much more expensive. But with perfect or virtually perfect qubits — virtually no errors and virtually no decoherence.

What will actually be offered on real machines from real vendors is unknown at this time. No vendors are even offering a roadmap for quantum error correction with this level of detail.

Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?

It’s unclear whether we will need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits. After all, we don’t simulate most of what we do on a regular basis on modern classical computers.

An exception is algorithms which have a critical dependency on fine granularity of probability amplitudes and phase, such as for:

  1. Amplitude amplification.
  2. Quantum phase estimation (QPE).
  3. Quantum Fourier transform (QFT).

The only help there is to have mathematically-rigorous algorithm and circuit analysis tools to detect dependency on fine granularity and check analyzed requirements against the limits of the target quantum hardware.

Similar algorithm and circuit analysis tools are needed to evaluate scalability of an algorithm — to detect how far it can scale before it runs into hardware limitations on fine granularity.

For more on the nature of the fine granularity problem, consult my paper:

Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?

Logical qubits with quantum error correction are definitely needed, but do we really need them before we have a rich portfolio of algorithms which can exploit 40 to 60 qubits of near-perfect qubits to achieve true quantum advantage for practical real-world problems?

Technically, maybe the answer is no, we don’t need the hardware if there aren’t algorithms ready to exploit it, but it may in fact be true that only the availability or promise of imminent availability will be able to incentivize algorithm designers and application developers to pursue high-qubit algorithms.

It might be worth going back and reviewing old academic papers which were written before researchers became focused on actually running on modest near-term NISQ hardware — papers which speculated about what wondrous results could be achieved once lots of qubits become available. Shor's factoring algorithm is one example, but it won't be terribly useful or deliver a quantum advantage with only 40 to 60 qubits. Still, that's the kind of speculative algorithm to look for.

40 to 60 qubits may still not be sufficient for true production-scale applications, but it is likely a reasonably valid stepping stone on that path.

There is also the question of scalability. At present, most algorithms are not very scalable, requiring significant rework to work effectively and efficiently with larger inputs and more qubits. But this could change if tools and methodologies are developed to support the design and development of scalable algorithms. Then, designers and developers could focus on 20 to 30-qubit algorithms with the expectation that such scalable algorithms could easily and quickly scale to 40 to 60-qubit hardware as soon as it becomes available. But, this is not possible today — much fundamental research on algorithm design is needed.

How are gates executed for all data qubits of a single logical qubit?

I have no idea. I haven’t seen any papers yet which fully elaborate gate execution on the physical qubits of a logical qubit.

  1. Are the physical qubits operated on fully in parallel?
  2. Can each physical qubit be operated on serially?
  3. Or maybe in parallel one row or one column of the lattice at a time?
  4. Or, if physical data qubits are entangled, can operating on one change them all?

How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?

Since the physical qubits of a logical qubit form a large lattice, how exactly are two-qubit and three-qubit quantum logic gates executed when most of the physical qubits are not physically adjacent?

Is there some kind of magic that doesn’t require physical adjacency, or must they be nearest-neighbor physical qubits, implying that large swap networks are needed?

Are manual swap networks required for two and three-qubit quantum logic gates? Is a compiler needed to generate the swap networks, or will that be hidden in a software or firmware layer?

Can swap networks be automatically executed by the firmware or hardware to provide any-to-any connectivity?

What will be the performance impact of swap networks, whether manual or automatic?

Can full any-to-any connectivity be achieved using quantum error correction, or will connectivity be relatively constrained?

Is a traditional swap network the optimal sequence for connecting two distant logical qubits? Or are more specialized techniques needed?

Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?

Is it true that we will no longer need NISQ devices once we have quantum error correction and logical qubits? No, not quite. Eventually, yes, but not right away.

It’s only true once a sufficient capacity of logical qubits is available, which implies a very large number of physical qubits. Put simply, we can’t get fully beyond NISQ until we have quantum computers with more than a few hundred logical qubits — the definition of intermediate-scale.

Also, some applications may not need fully-corrected results or can partially correct results in an application-specific manner and gain dramatic efficiencies from working directly with physical qubits, both for performance and capacity.

Granted, some applications may only need an intermediate number of logical qubits (fifty to a few hundred), but that won’t be a true post-NISQ configuration. It may no longer be noisy, but it will still be intermediate scale — the IS in NISQ. Earlier I suggested a name for such a configuration: FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices.

Earlier I suggested that the post-NISQ era could indeed start with fault-tolerant intermediate-scale quantum devices. That said, I do expect that some applications won't need more than an intermediate number of qubits and will be able to tolerate some minimal level of errors, so that near-perfect physical qubits will be sufficient; technically, they will still be NISQ. It all comes down to whether there are enough physical qubits to support the number of logical qubits needed by the application.

How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?

It’s more clear how quantum error correction addresses the stability of the quantum state within a logical qubit, but how does this help to eliminate gate errors, which are more related to external factors?

Gate errors appear to be more of an interface issue with the world outside of the raw qubits. How can one side of the interface ever really fully grasp the intentions of the other side of the interface?

Technically, a user doesn’t need to know how logical qubits work under the hood, but I’m interested at this stage in terms of what issues need to be addressed to enable logical qubits to function properly — are logical qubits actually practical.

How exactly does quantum error correction actually address measurement errors?

While coherence is an internal issue for individual qubits, measurement is a rather different issue, an interface issue with the outside world, so even the most resilient qubit doesn’t address the source(s) of measurement errors. So, how exactly does quantum error correction actually address measurement errors?

Technically, a user doesn’t need to know how logical qubits work under the hood, but I’m interested at this stage in terms of what issues need to be addressed to enable logical qubits to function properly — are logical qubits actually practical.

Does quantum error correction really protect against gate errors or even measurement errors?

Just summarizing the concern of the preceding two questions — it’s clear how quantum error correction protects quantum state from decoherence, but gate errors and measurement errors, which are caused by external factors outside of the qubit, would seem to be a different story.

I’m looking for some specific, detailed insight into whether and how quantum error correction can protect against gate and measurement errors.

Will quantum error correction approaches vary based on the physical qubit technology?

I really don’t know the answer to this question, but I suspect that there may indeed be technology-specific differences in how quantum error correction is implemented.

So maybe superconducting transmon qubits, trapped-ion qubits, semiconductor qubits, diamond nitrogen-vacancy center qubits, and other technologies for qubits might have distinct schemes for quantum error correction.

At a minimum, each technology may have a significantly different error rate, which affects the number of physical qubits needed to implement a logical qubit.

Is the quantum volume metric still valid for quantum error correction and logical qubits?

IBM’s concept of quantum volume as a metric for performance of a quantum computer is only valid up to about 50 qubits — the size of a quantum circuit which can be simulated on a classical quantum simulator, so by definition, quantum volume is not practical for a quantum computer with more than about 50 logical qubits.

This also means, by definition, that quantum volume is not practical for any post-NISQ devices, which by definition have more than a few hundred logical qubits. Or even for NISQ devices proper since they are supposed to have a minimum of 50 qubits — 50 to a few hundred qubits.

Whether the number of physical qubits needed to implement logical qubits is relevant to measuring quantum volume is unclear. If only the logical qubits must be simulated, then quantum computers with up to 50 logical qubits could be simulated, at least in theory.

But if it was desirable to simulate the physical qubits of a logical qubit, once again, the 50-qubit limit comes into play. So if, for example, 57 or 65 physical qubits are needed to implement a single logical qubit, then by definition the quantum volume metric would not be practical due to the need to fully simulate more than 50 physical qubits.

The 50 qubit limit is not an absolute hard limit. As classical computers advance, that number could grow, a little, but not by too much since each qubit of growth warrants a doubling of classical computing resources. In addition, actual available classical quantum simulators may not even reach that current 50-qubit limit, maybe maxing out at 45, 40, 38, or even 32 qubits due to the extreme amounts of computing resources required to perform the classical simulation of the generated circuits.
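
A quick back-of-the-envelope calculation makes the doubling concrete. A brute-force statevector simulation needs 2^n complex amplitudes at roughly 16 bytes each (double precision, ignoring all overhead):

# Memory for a brute-force statevector simulation: 2^n amplitudes at 16 bytes each.
for n in (32, 38, 40, 45, 50, 57, 65):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n:2d} qubits: {gib:>15,.0f} GiB")

# 50 qubits is already about 16 million GiB (16 PiB); 57 or 65 qubits,
# the physical qubits of a single logical qubit, is utterly hopeless.

So, if simulation is required at all, it can only ever be of the logical qubits, never of the physical qubits behind them.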

The bottom line is that a new metric for performance will be needed once more than about 50 qubits are used. And in practice the quantum volume metric may begin to break down at 45, 40, or even 38 or fewer qubits.

And all of this raises the question of whether a performance metric such as quantum volume is even relevant for perfect logical qubits. Maybe just use 2^n, where n is the number of logical qubits, and skip the simulation since the result should be perfect by definition.

For more detail on quantum volume:

As well as my own informal paper:

Is the quantum volume metric relevant to perfect logical qubits?

IBM’s notion of quantum volume as a performance metric is predicated on qubits being somewhat noisy — the goal is to find the largest circuit which can be executed before noise overwhelms correct results. So if logical qubits are perfect and without error, then there is nothing to measure.

Practically, that would mean that the quantum volume for n perfect logical qubits would always be 2^n.

Technically, you could say that the metric is still relevant, but only in that trivial sense since no measurement would be required.

On the other hand, if you presume the pure definition of the quantum volume metric, then measurement is not possible for more than about 50 qubits since that is the limit of what can be practically simulated on classical hardware.

But… it’s still an open question whether logical qubits will be truly and absolutely 100% perfect, so it may still make sense to simulate a logical quantum circuit to see if any measurable errors occurred. But that will be limited to the 50-qubit limit imposed by the simulation requirement.

See paper citations in the preceding section.

What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?

It’s unclear which application areas will benefit the most, initially, once quantum error correction and logical qubits arrive. It will depend in large part how many logical qubits are supported.

Unfortunately, we don’t have any algorithms that are ready to use quantum error correction and logical qubits. For example,

  1. Shor’s factoring algorithm requires many thousands of logical qubits.
  2. Quantum phase estimation and quantum Fourier transform would be very beneficial to many applications, such as quantum computational chemistry, but not until a significant number of logical qubits are supported, at least in the dozens. And even then algorithm designers will need to shift from hybrid variational approaches to the use of phase estimation.
  3. There isn’t much on the shelf in terms of 30 to 40-qubit algorithms.

In preparation for quantum error correction and logical qubits, we should push for a minimum of 45-qubit simulation and support for 32 to 40-qubit algorithms.

Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?

Initially no algorithms, applications, or application categories will benefit from either quantum error correction or logical qubits — until there are sufficient logical qubits to actually achieve quantum advantage to at least some minimal degree.

If all a small quantum computer can do is mimic a large classical computer, that’s not a compelling business case.

Once we have dozens, 50, 75, 100, and even hundreds of logical qubits, we should see quantum computational chemistry, material design, and drug design begin to take off.

The problem right now is that since most algorithm designers and application developers are so heavily focused on relatively small NISQ devices, the algorithms are not being designed in such a way as to exploit larger numbers of perfect logical qubits.

Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?

This is similar to the preceding question, but with the emphasis on algorithms and applications which are less feasible or even outright infeasible without quantum error correction and logical qubits. The answer is still roughly the same.

In truth, I don’t have a good answer at this time. People are so focused on NISQ that they aren’t really thinking about the prospects and possibilities of fault-tolerant logical qubits.

Further, near-perfect qubits confuse the issue even more. A significant fraction of applications will be able to make do with near-perfect qubits, while others still require full quantum error correction, but the split, the dividing line, between the two will remain unclear for some time to come.

How is quantum error correction not a violation of the no-cloning theorem?

It is unclear how copies of quantum state can be made — multiple physical data qubits for each logical qubit — without violating the no-cloning theorem, which says that quantum state cannot be copied. The standard answer from the theory is that the state is not copied at all; rather, it is spread (encoded) across many entangled physical qubits, and only error syndromes are measured, never the encoded data itself. Still, a clear plain-language exposition of this would be helpful.

Is quantum error correction too much like magic?

Quantum error correction, surface codes, toric codes, magic state distillation, and all of that is far too cryptic, too complex, and defies wide comprehension. Even technically-sophisticated individuals are forced to take it all on faith rather than hope to understand it all on its merit and detailed technical capabilities.

It all seems too much like magic, a sort of sleight of hand.

It can seem suspicious, maybe even too good to be true.

We really do need to attempt to understand it deeply, but who can do that?

Simplification is desperately needed, but may be too much to ask for.

Who’s closest to real quantum error correction?

Is IBM or Google closer to achieving near-perfect qubits, quantum error correction, and logical qubits? It’s not clear to me. And being ahead right now says nothing about who might achieve a quantum leap ahead of the other in the coming years.

When might either of them achieve usable milestones? Unknown.

What specific features and specific error corrections does each intend to offer? Unknown.

What residual error does each expect to achieve — how close to perfect? Unknown.

Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?

I know that quantum error correction will dramatically extend the coherence time of a qubit (logical qubit), but the details are more than a little vague to me. Will coherence time truly become infinite, or just longer than the vast majority of imaginable algorithms and applications would ever require?

Some related questions…

Are logical qubits guaranteed to have infinite coherence?

Same basic question. The answer is unclear to me at this stage. I mean, in theory, the answer should be an unequivocal yes, but I’m not reading that explicitly anywhere.

The bottom line is that vendors need to make a clear and unequivocal statement about the coherence of logical qubits.

What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?

Good question. I don’t know the answer.

Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?

Good question. I don’t know the answer.

Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?

Good question. I don’t know the answer.

Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?

Good question. I don’t know the answer.

Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?

Will we have to wait until the end, the final stage of its evolution to use quantum error correction at all, or will it be usable at each stage along the way?

Further, might there be a variety of levels of quantum error correction at each moment of time, each with its own benefits and costs, or will it be a single one size fits all?

Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?

I don’t have a great answer to this question.I think it is true that scientists and engineers are constantly seeking to reduce the noise and environmental interference in a quantum processing unit (QPU), but some degree of noise and environmental interference is probably inevitable.

Ultimately, quantum error correction doesn’t really care where the noise and errors come from. However the errors came into existence, they must be mitigated and corrected.

That said, I suspect that the error rate of a qubit has two components:

  1. External noise and interference. From outside of the individual qubit.
  2. Internal noise. Within the qubit itself.

Quantum error correction must ultimately deal with both.

What are the potential tradeoffs for quantum error correction and logical qubits?

Some possibilities for the potential tradeoffs when implementing quantum error correction and logical qubits:

  1. Increased gate execution time.
  2. Slower measurement time.
  3. Slower qubit initialization time.
  4. Slower SPAM time. State preparation and measurement.
  5. Fewer usable qubits — number of physical qubits used for each logical qubit.
  6. Residual error rate vs. number of qubits usable by the application.

What are the preferred set of tradeoffs? Good question. It’s not clear at this juncture.

How severely does quantum error correction impact gate execution performance?

How much of gate execution is fully in parallel under quantum error correction? Unknown. Likely to vary from machine to machine.

How large a net performance hit is quantum error correction? Unknown. At this juncture it seems as if it would likely be large and very significant, but it could likely be reduced dramatically over time as qubit technology, firmware, and control electronics evolve.

How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?

Good question. One might surmise that a larger number of physical qubits per logical qubit would result in a larger performance hit and that a smaller number of physical qubits would result in a smaller hit, but… we simply don’t know at this juncture.

Are there other approaches to logical qubits than strict quantum error correction?

There will always be clever ways to achieve goals other than the commonly-accepted wisdom, but for now, there are only two alternatives to full, automatic, and transparent quantum error correction for achieving true logical qubits:

  1. Perfect qubits. The ideal, but believed to be impractical. Maybe not absolute perfection with a true zero error rate, but at least such an incredibly tiny error rate that almost nobody would ever notice.
  2. Near-perfect qubits. An error rate which may still be significant for some applications, but is not significant for many or even most applications. And needed as the prerequisite for quantum error correction anyway.

Might other alternatives pop up over the coming years? Sure, hope does spring eternal, but I’m not holding my breath.

How many logical qubits are needed to achieve quantum advantage for practical applications?

It isn’t the total number of logical qubits on the system that matters, but how many logical qubits are used in a single Hadamard transform for a single computation. Using n qubits in a single Hadamard transform means operating on 2^n simultaneous quantum states in parallel. A value of 50 or more qubits is widely considered as the threshold for achieving quantum advantage. The precise value is unknown and will vary as classical computing hardware continues to evolve, and will depend on the application.

For some applications even n of 45 or even 40 may be sufficient to achieve quantum advantage. And for other applications n of 55, 60, or even 65 may be needed to achieve quantum advantage.

But for most practical purposes at this stage, n = 50 is the standard to shoot for to achieve quantum advantage.

Some common values for n and the rough magnitude of 2^n:

  1. 20 = one million
  2. 30 = one billion
  3. 40 = one trillion
  4. 50 = one quadrillion = one million billions
  5. 60 = one quintillion = one billion billions
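
These are just powers of two, easy to verify or extend with a couple of lines of Python:

# 2^n simultaneous quantum states for n qubits in a single Hadamard transform.
for n in (20, 30, 40, 50, 60):
    print(f"n = {n}: 2^{n} = {2 ** n:,}")
# Roughly: one million, one billion, one trillion, one quadrillion, one quintillion.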

Is it any accident that IBM’s latest machine has 65 qubits?

A recent paper from IBM discusses approaches to quantum error correction which use 57 and 65 physical qubits per logical qubit, so one has to wonder if it is any accident (coincidence) that their most recent high-end quantum computer has exactly 65 qubits, the number of physical qubits needed for a single logical qubit using the heavy-square code approach.

Granted, a quantum computer with a single logical qubit is not very useful at all for practical real-world applications, but it would permit all of the capabilities of a single logical qubit to be tested, which is no easy feat. It would indeed be a big first step towards a quantum computer with multiple logical qubits.

In theory, they could test:

  1. Initialization of a logical qubit. And reset as well.
  2. All of the Bloch sphere rotations of a single isolated qubit. Elimination of gate errors.
  3. Extended coherence. Well beyond the coherence time of a physical qubit.
  4. Measurement. Elimination of measurement errors.

But whether this is IBM’s intention with this machine is pure speculation on my part.

The 2019/2020 paper:

What is a surface code?

It’s way beyond the scope of this informal paper, but a surface code is one of the main approaches being proposed and investigated for quantum error correction. I only mention it here to acknowledge its significance.

A surface code is a particular type of quantum error correcting code, which is the theoretical basis for quantum error correction.

Surface codes are one of the leading contenders for quantum error correction.

The reference to surface is to the two-dimensional surface (lattice) on which the qubits are laid out. The surface code is closely related to the toric code, which was originally defined on the surface of a torus or toroid, such as a doughnut.

Google has mentioned it, in 2018:

IBM has mentioned it as well, in 2020:

Background on surface codes

For a deeper technical discussion of surface codes, a paper from 2012:

Also, a paper from 2017/2018:

For some background on toric codes:

What is the Steane code?

Details are beyond the scope of this informal paper, but the concept of the Steane code for quantum error correction was the result of work by researcher Andrew Steane in the mid-1990s.

You can read his original paper from 1995/1996 for details:

Also, Andrew’s tutorial:

How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?

This is clearly beyond the scope of this informal paper, but likely to be important for achieving effective and cost-effective quantum error correction.

What is magic state distillation?

Magic state distillation is the kind of detail of quantum error correction that is beyond the scope and depth of this informal paper.

For more detail, see this paper from 2004:

What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?

Details about error threshold and logical error rate are far beyond the scope of this paper, but will have a significant impact on implementation, availability, and capacity of logical qubits.

A key concern for me is what residual error rate might remain even after quantum error correction. I would expect that to decline over time, but it still might remain significant in the early implementations of quantum error correction — early implementations of logical qubits may be well short of perfect, much better than raw physical qubits, but still far from perfect.

For more details consult this 2017/2018 paper:

Depth d is the square root of physical qubits per logical qubit in a surface code

When you read about using a surface code for quantum error correction you encounter the term d. It is most commonly called the code distance or array distance, roughly the minimum number of physical errors needed to produce an undetected logical error, although some writers refer to it as the depth. The net effect is that d happens to be the square root of the number of physical qubits per logical qubit for a particular instance of a surface code.

A surface code uses a square lattice or square grid of physical qubits to represent a single logical qubit. The depth d is the width and height of the lattice (grid). Being square, d² is the number of physical qubits in the square lattice (grid) for a single logical qubit.

Quoting from the paper cited below:

  • A distance-d surface code has one logical qubit and n = d² physical qubits located at sites of a square lattice of size d × d with open boundary conditions

Actually, d is not precisely the square root of the total number of physical qubits, but the square root of the number of data qubits, with the remaining physical qubits being stabilizer and flag qubits.

An IBM paper which discusses surface codes:

  1. Correcting coherent errors with surface codes
  2. Sergey Bravyi, Matthias Englbrecht, Robert Koenig, Nolan Peard
  3. https://arxiv.org/abs/1710.02270 (2017)
  4. https://www.nature.com/articles/s41534-018-0106-y (2018)

It’s not the intent to discuss surface codes in any depth here, but that paper should provide a significant amount of detail.

What are typical values of d for a surface code?

As discussed in the preceding section, d is the square root of the number of physical qubits per logical qubit for a surface code for quantum error correction. In other words d² (d squared) is the number of physical qubits per logical qubit. Technically, it may be the square root of the number of data qubits, with the remaining physical qubits being stabilizer and flag qubits.

But what are typical values of d?

I actually don’t have a definitive answer, but based on reading through the paper cited in the preceding section, I gleaned the following:

  1. d = 2, d = 4, … — all even values of d are excluded. I don't know exactly why, but presumably because a distance-d code corrects only (d - 1)/2 errors (rounded down), so an even value of d corrects no more errors than the next smaller odd value.
  2. d = 3 — excluded (skipped) “because of strong finite-size effects.” Again, unclear what that’s really all about. You can read the paper.
  3. d = 5 — the smallest practical value. Requires 25 physical (data) qubits.
  4. d = 7 — requires 49 physical (data) qubits.
  5. d = 9 — requires 81 physical (data) qubits.
  6. d = 19 — requires 361 physical (data) qubits.
  7. d = 25 — requires 625 physical (data) qubits.
  8. d = 29 — requires 841 physical (data) qubits.
  9. d = 37 — requires 1,369 physical (data) qubits. Fairly low error rate.
  10. d = 49 — requires 2,401 physical (data) qubits. Diminishing returns.
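
As a rough illustration of what those values of d cost in practice, the sketch below assumes the commonly described rotated surface code layout, in which a distance-d patch uses d² data qubits plus d² − 1 measurement (ancilla) qubits, for 2d² − 1 physical qubits in total. Other layouts, including the heavy-hexagon and heavy-square codes, differ, and the machine sizes (1,000 and 10,000 physical qubits) are purely hypothetical:

# Assumed rotated surface code: d^2 data qubits plus (d^2 - 1) measurement qubits.
for d in (5, 7, 9, 11, 13):
    data = d * d
    total = 2 * d * d - 1
    print(f"d = {d:2d}: data = {data:3d}, total = {total:3d}, "
          f"logical qubits on 1,000 physical = {1000 // total:2d}, "
          f"on 10,000 physical = {10000 // total:3d}")

Under that assumption, even a 10,000-physical-qubit machine at d = 13 would offer only a couple of dozen logical qubits, which is exactly why the choice of d, and the physical error rate which drives it, matter so much for reaching quantum advantage.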

Is d = 5 really optimal for surface codes?

Why might d = 5 be optimal for surface codes? Is it optimal in all situations? And by what criteria is it optimal? I just don’t know, and was unable to find the answers.

My apologies for not having a specific citation for the assertion of d = 5 being optimal. I did some more searches and couldn’t recall where I actually saw it. I’m fairly sure I did see it somewhere, or that at least I came away with that impression.

I did find this 2019 paper from IBM which uses d = 5 heavily, but I didn’t see any statement indicating that d = 5 was optimal per se.

It could simply be that d = 5 is just optimal for convenient presentation in academic papers! A depth of d = 3 is not considered sufficient to achieve a low error rate, and using d = 7 or d = 37 would make the diagrams unreadable in a typical academic paper format. Whether this practical publication aspect is the rationale for focusing on d = 5 is unclear.

Also, d = 5 might simply be the largest lattice of physical qubits which is practical in the near term, especially if you want to implement two or even five or eight logical qubits. Once physical qubit capacities get up into the hundreds or thousands, maybe then d greater than 5 can be considered practical.

My big concern is why academic papers don’t focus on optimizing the case(s) which maximize the chances of achieving quantum advantage with a minimum number of physical qubits.

Prospects for logical qubits

What exactly are the prospects for logical qubits? More cogently, what are the specific aspects and issues which impinge on the prospects for logical qubits?

  1. Level of technical risk. This is the first and foremost issue, now and for the foreseeable future.
  2. Level of effort to achieve. Beyond the bullet list of technical hurdles, how many people and how much money will be required?
  3. Timeframe. Unknown and strictly speculative, although various parties are now talking more about roadmaps and milestones, which wasn’t true just two years ago.
  4. Capacity in what timeframe. Tee shirt sizes — S, M, L, XL, XXL, XXXL. Specific qubit targets will come once we achieve the smallest sizes — 1–5, 8, 10, 12. L and XL may achieve quantum advantage.

Google and IBM have factored quantum error correction into the designs of their recent machines

Both Google and IBM have repeatedly affirmed that fault-tolerant quantum computing is a priority, and that quantum error correction has been factored into the designs of their recent machines.

To be clear, this does not mean that recent machines actually support true, automatic, transparent quantum error correction, simply that they are making progress in that direction.

Google has mentioned it, in 2018:

Google recently gave an update and roadmap at their Quantum Summer Symposium for 2020:

IBM has mentioned it as well, in 2020:

  • Hardware-aware approach for fault-tolerant quantum computation
  • Although we are currently in an era of quantum computers with tens of noisy qubits, it is likely that a decisive, practical quantum advantage can only be achieved with a scalable, fault-tolerant, error-corrected quantum computer. Therefore, development of quantum error correction is one of the central themes of the next five to ten years.
  • the surface code is the most famous candidate for near-term demonstrations (as well as mid- to long-term applications) on a two-dimensional quantum computer chip. The surface code naturally requires a two-dimensional square lattice of qubits, where each qubit is coupled to four neighbors.
  • we developed two new classes of codes: subsystem codes called heavy-hexagon codes implemented on a heavy-hexagon lattice, and heavy-square surface codes implemented on a heavy-square lattice.
  • The IBM team is currently implementing these codes on the new quantum devices.
  • Guanyu Zhu and Andrew Cross
  • https://www.ibm.com/blogs/research/2020/09/hardware-aware-quantum/

NISQ simulators vs. post-NISQ simulators

Technically, even NISQ simulators are theoretically impractical since intermediate-scale supposedly starts at 50 qubits (50 to a few hundred), which is believed to be just past the end of the number of qubits which can be practically simulated on classical hardware. The current practical limit for classical quantum simulators is believed to be roughly 38 to 42 qubits.

Presumably, beyond “intermediate scale” means more than a few hundred qubits in a simulator, which is not practical now or likely even in the long-term future.

I suspect that what people really want are simulators in the near-NISQ regime, say 45 to 50 qubits.

Or maybe over the coming years we can somehow manage to push a little beyond simulating 50 qubits, maybe even to 55 qubits.

Note that each incremental qubit implies a doubling of hardware resources — exponential growth, so the doubling process will hit a hard wall within a relatively small number of years.

Need for a paper showing how logical qubit gates work on physical qubits

None of the papers that I have read give a decent account of exactly how a quantum logic gate is executed for a logical qubit or pair of logical qubits.

Granted, no algorithm designer or application developer has a strict need to know such implementation details. They should just be grateful that they have logical qubits at all which magically do it all under the hood.

Still, as a technologist, concerned with capabilities, limitations, and issues, I would like to know what constraints are imposed and the implications on performance. Alternatively, I’d like to know how many hoops and hurdles the quantum computer engineers will have to jump through and over in order to implement logical qubits, to help me understand how long it might take to get there from where we are today.

An animation might be nice, showing how the sequencing through the individual physical qubits occurs on gate execution.

A sequence of images would be good enough, showing the key steps of logical gate execution.

A plain-language description of the sequence of steps to execute a logical gate would be okay with me, and would be the bare minimum required.

Pseudo-code for the sequence of steps, including any iteration or conditional execution, needed to execute a logical gate would be helpful as well. This may be enough for my needs, but a plain-language description would be very useful too.

Need detailed elaboration of basic logical qubit logic gate execution

In addition to a formal paper showing how logical qubit gates work on physical qubits in general, it would be nice to see a detailed elaboration of basic logical qubit logic gate execution for some common simple quantum logic gates which shows what exactly happens to each physical qubit as each logical quantum logic gate is executed.

Some of the more important basic cases:

  1. Initialize all qubits to 0. Are all physical qubits set to 0 or are some 1? What pattern of initialization is performed? Can all logical qubits be initialized simultaneously, in parallel, or is some sequencing required?
  2. Initialize a qubit to 1. After all qubits are initialized to 0, execute an X gate on a qubit to flip it from 0 to 1.
  3. Flip a qubit. Same as 2, but in an unknown state, not 0 per se.
  4. Hadamard gate. To see superposition.
  5. Reverse a Hadamard gate. H gate to create superposition, second H gate to restore to original state before superposition.
  6. Bell state. To entangle two logical qubits.
  7. Measurement. After each of the cases above.
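
Purely at the logical level, where presumably each program qubit simply is a logical qubit and all of the correction machinery is hidden under the hood, those test cases might look something like this Qiskit-style sketch (my own illustration; no vendor has committed to any particular interface for logical qubits):

from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)

qc.x(0)       # flip qubit 0 from |0> to |1>
qc.x(0)       # flip it back to |0>
qc.h(0)       # Hadamard: put qubit 0 into superposition
qc.h(0)       # second Hadamard: undo the superposition, back to |0>

qc.h(0)       # Bell state: Hadamard plus CNOT entangles qubits 0 and 1
qc.cx(0, 1)

qc.measure([0, 1], [0, 1])   # should yield 00 or 11, never 01 or 10

print(qc.draw())

The interesting part is everything that this sketch hides: what each of these logical operations does to the dozens of physical qubits underneath.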

Need animation of what happens between the physical qubits during correction

It may not be absolutely necessary, but I think an animation of how quantum error correction works for a realistic logical qubit (such as 65 physical qubits per logical qubit) would help people develop an intuition for what's really going on.

Actually, a number of animations, for the full range of error scenarios, are needed.

Sequences of discrete static images highlighting what changed would also be very useful.

Maybe even an interactive animation, where a user could create patterns of physical qubit failures and then observe correction in action.
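
While we wait for such animations, even a classical caricature can build a little intuition for syndrome-based correction. The toy Python sketch below uses the classical 3-bit repetition code (emphatically not a surface code, and it only handles bit flips), but it shows the essential rhythm: measure parities (syndromes) without looking at the protected data itself, decode which bit flipped, and correct it.

import random

def correction_cycle(encoded):
    """One round of syndrome extraction and correction, 3-bit repetition code."""
    # Parities of neighboring bits reveal where an error occurred
    # without revealing the protected value itself.
    s1 = encoded[0] ^ encoded[1]
    s2 = encoded[1] ^ encoded[2]
    if s1 and not s2:
        encoded[0] ^= 1    # bit 0 flipped
    elif s1 and s2:
        encoded[1] ^= 1    # bit 1 flipped
    elif s2:
        encoded[2] ^= 1    # bit 2 flipped
    return encoded

encoded = [1, 1, 1]                    # redundantly encoded logical 1
encoded[random.randrange(3)] ^= 1      # inject a single random bit-flip error
print("after error:     ", encoded)
print("after correction:", correction_cycle(encoded))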

Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits

Even once logical qubits become commonplace, there may still be a need or desire for higher performance and larger applications which operate directly on near-perfect physical qubits, without the performance overhead or more limited capacity of logical qubits.

It’s unclear at this time how much of a performance penalty might be required to implement logical qubits.

Granted, performance can be expected to improve over time, but initially it may be problematic, at least for some applications.

Similarly, the extreme number of physical qubits needed to implement each logical qubit will initially greatly limit the number of available logical qubits.

Granted, the number of available logical qubits can be expected to improve dramatically over time, but initially it is likely to be problematic for many applications.

Near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications

Logical qubits will greatly facilitate many applications, but very limited initial capacities of logical qubits will mean that any application needing a significant number of qubits will have to make do with physical qubits. The good news is that the level of quality needed to enable logical qubits will assure that physical qubits will have near-perfect quality. Still, working with physical qubits will be limited to the most sophisticated, most elite algorithm designers and application developers.

I suspect that larger numbers of near-perfect physical qubits may make it possible for such sophisticated, elite teams to finally achieve what I call The ENIAC Moment for quantum computing — quantum advantage for a production-scale practical application.

Not many teams will have the aptitudes, skills, or talents to achieve the ENIAC moment with raw physical qubits, but a few may well be able to do it.

The ENIAC moment will be a real breakthrough, but won’t herald an opening of the floodgates for quantum applications — that will require The FORTRAN Moment, which will probably require logical qubits.

Likely need logical qubits to achieve the FORTRAN moment

Although a few sophisticated, elite teams may well be able to achieve The ENIAC Moment for quantum computing — quantum advantage for a production-scale practical application — that won’t help the more-average organization or development team. Most organizations and teams will require the greater convenience and greater reliability of logical qubits, as well as more advanced and approachable programming models, programming languages, and application frameworks. The confluence of all of these capabilities, underpinned by logical qubits, will enable what I call The FORTRAN Moment of quantum computing — where average, non-elite teams and organizations can tap into the power of quantum computing without requiring the higher level of sophistication needed to work with less than perfect physical qubits.

It is my view that logical qubits will indeed be required for the FORTRAN moment. Sure, some more adventurous teams will continue to achieve quantum advantage for applications without logical qubits, but only at great cost and great risk. Many ambitious projects will be started, but ultimately fail as the complexity of dealing with the subtle remaining errors of near-perfect physical qubits eat away at projects like termites.

There may well be projects which can achieve success with raw physical near-perfect qubits, but the nuances of subtle remaining errors may make it a game of Russian Roulette, with some teams succeeding and some failing, and no way to know in advance which is more likely. Logical qubits will eliminate this intense level of uncertainty and anxiety.

Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction

In truth, qubits can have a fairly high error rate and still be suitable for quantum error correction to achieve logical qubits, but that would require a dramatic number of noisy physical qubits to achieve each logical qubit, which limits the number of logical qubits for a machine of a given capacity of physical qubits. The twin goals are:

  1. Achieve logical qubits as quickly as possible.
  2. Maximize logical qubits for a given number of physical qubits. Achieve a low enough error rate for physical qubits so that only a modest number of physical qubits are needed for each logical qubit.

It’s a balancing act. We could achieve a very small number of logical qubits sooner with noisy qubits, but we would prefer a larger number of logical qubits — so that quantum advantage can be achieved (more than 50 logical qubits) — which means we would need a much smaller number of physical qubits per logical qubit, which means much less-noisy qubits.

The net result is that, beyond demonstrating a small number of logical qubits as a mere laboratory curiosity, applications which don't need the 100% perfection of logical qubits could run reasonably well on the much larger number of raw physical qubits — the same physical qubits which would otherwise be consumed implementing a much smaller number of logical qubits, too few to achieve quantum advantage.

Granted, this won’t be true for all or necessarily most applications, but maybe enough to enable some organizations to address production-scale applications well before machines have a large enough logical qubit capacity to achieve production-scale using logical qubits.

I’m not suggesting that people should bet on this outcome, but it is an intriguing possibility.

Readers should suggest dates for various hardware and application milestones

My view is that every application category will be on its own timeline for using logical qubits:

  • Some may happen quickly.
  • Some may take a long time to get started, but then happen quickly.
  • Some may happen slowly over time.
  • Some may happen slowly initially but then begin to accelerate after some time has passed.

But exactly — or even very roughly — what those timelines might be is beyond my ability at this stage.

I’ll leave it to readers — and researchers and leading-edge developers — to suggest specific or even rough dates for the timeline for use of logical qubits by particular application categories.

Some that especially interest me include:

  • Computational chemistry. Using quantum phase estimation and quantum Fourier transform.
  • Shor’s algorithm for factoring very large semiprime numbers such as public encryption keys. Quantum Fourier transform is needed.

And of course all of the many other quantum application categories as well:

To be clear, the interest here in dates is not about experimental and prototype applications, but production-scale practical applications using a significant number of logical qubits, with special emphasis on achieving dramatic quantum advantage if not outright quantum supremacy.

Call for applications to plant stakes at various logical qubit milestones

I’d like to see the primary proponents of using quantum computing for various application categories be specific about how many logical qubits they need to meet various milestones on the path to achieving a dramatic quantum advantage. What milestones for input size and complexity make sense for each of the major quantum application categories?

For example, how many logical qubits does quantum computational chemistry need to effectively use quantum phase estimation to achieve a dramatic quantum advantage? I presume that the number of qubits will be some function of the complexity of the molecule being modeled, so a progression of molecules and their estimated logical qubit requirements would be nice. At roughly what molecular complexity — and logical qubit count — would quantum advantage be achieved?

Reasonable postures to take on quantum error correction and logical qubits

There are really only two postures on quantum error correction and logical qubits which are unreasonable:

  1. Quantum error correction and logical qubits are coming real soon. No, they’re not coming in the next two years.
  2. It will take more than ten years before we see production-scale quantum error correction and logical qubits. No, it won’t take that long.

Actually, there is a third unreasonable posture:

  1. Quantum error correction and logical qubits will never happen.

Reasonable postures include:

  1. Coming relatively soon — within a small number of years, but not the next two (or maybe three) years.
  2. Not happening for quite a few years. Could be 5–7 years, at least for production-scale.
  3. Not happening in the next two years.
  4. Not happening for another 7–10 years. Especially for larger production-scale.
  5. May require at least five years of hardware evolution. Hopefully less, but five years is not unreasonable.

I don’t want to see people be either too optimistic or too pessimistic.

Hardware fabrication challenges are the critical near-term driver, not algorithms

It appears to me that hardware fabrication challenges are the critical near-term driver, not algorithm development. Much more innovation and basic research is needed.

I sense that the most pressing near-term challenges to achieve basic working logical qubits — 5 and 8 logical qubits — are hardware fabrication — more qubits, lower error rate, and better connectivity, rather than a pressing need to develop algorithms and applications for 5–20 qubits.

Ditto for 12, 16, and 20 logical qubits — fabrication challenges are the critical driver, not algorithms, at least at this stage.

But algorithms and applications will become the critical driver once 24–40 logical qubits become widely available.

Need to prioritize basic research in algorithm design

Despite hardware being the primary driver, the lead time to design and develop algorithms can be just as unpredictable as for hardware, so algorithms need a significant priority, particularly those which are both scalable and can be run on classical quantum simulators.

Once hardware for 24–40 logical qubits becomes available, it will be too late to expect that algorithms can be quickly designed. Investment in basic research for algorithms, algorithmic building blocks, and application frameworks needs to be in place the moment the hardware is ready, with algorithms already tested and proven on classical quantum simulators.

I’d like to see at least some basic research on 24 to 40-qubit algorithms now, and a ramp-up over the next two to three years. These algorithms should be testable on classical quantum simulators, so they could be ready to go as soon as hardware supporting 24 to 40-logical qubits becomes available in a few years.

Need for algorithms to be scalable

Most algorithms today are very carefully handcrafted to make optimal use of the available hardware. That’s great for the existing hardware, but we need to be able to quickly exploit new hardware. Algorithms, especially complex algorithms, need to be scaled easily, not requiring complete redesign or significant rework.

The bottom line is that we need to achieve high confidence that scaling will be successful.

Technically, this is not a logical qubit issue per se, but given that initial generations of logical qubit hardware will be very limited, it is much more urgent that algorithms be able to exploit new hardware without the expense, time, and risk of redesign and rework.

Need for algorithms which are provably scalable

But how do we know if an algorithm is truly scalable? Trial and error and guesswork are not the preferred approaches. What’s really needed are mathematical techniques and tools which can automatically validate and prove that a particular algorithm is scalable.

It’s not clear how to do this, but it would be very helpful if it could be done.

It’s definitely an endeavor which is worthy of a significant research effort.

How scalable is your quantum algorithm?

From a practical perspective, what do we really mean by scalable? Put simply, if an algorithm runs properly on n qubits or for an input of size n, then it should run equally fine on 2n to 10n, or even 100n or 1,000n, qubits or input of that size.

Some of the common sizes that algorithms should scale on include 4, 8, 12, 16, 20, 24, 28, 32, 40, 44, 48, 50, 54, 64, 72, 80, 96, 100, 128, 256, and 1024 qubits or input size — for starters.

The essential goal should be that if your algorithm runs fine on 24 or 40 qubits or input of that size, then it should run fine on a machine and input of some multiple of that size.

The real goal here is that if an algorithm runs fine on the small machines of today, then it should run fine on the much larger machines of the future.

I see that there are three regimes:

  1. Simulate an ideal quantum computer on classical hardware. Targeting 24 to 44 qubits.
  2. Run on NISQ hardware. Targeting 4 to 32 qubits. Can compare results to simulation.
  3. Run on much larger post-NISQ hardware with quantum error correction and logical qubits. So large that no classical simulation is possible. Target 4 to 44 qubits for simulation validation, but algorithms beyond 44 qubits cannot be validated on classical hardware.

The exact dividing line is unknown and will evolve over time, but for now, for the sake of argument it may be around 44 qubits. Maybe it can be stretched to 45, 48, 50, 54, 55, or even 60, but 44 or thereabouts may be the practical simulation limit for some time to come.
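
For a rough sense of why the limit sits around 44 qubits, a full statevector simulation of n qubits needs 2^n complex amplitudes, at roughly 16 bytes each in double precision. The back-of-envelope sketch below (plain Python, assuming a dense statevector simulator; tensor-network and other clever methods can sometimes do better) shows how quickly memory explodes.

```python
# Back-of-envelope memory for full statevector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
# Real simulators vary, but this shows why ~44 qubits is a practical wall.

def statevector_bytes(n):
    return (2 ** n) * 16

for n in (24, 32, 40, 44, 48, 50, 54, 60):
    gib = statevector_bytes(n) / 2 ** 30  # gibibytes
    print(f"{n:2d} qubits: {gib:,.2f} GiB")
```

At 44 qubits that works out to roughly a quarter of a petabyte just for the state vector, which is why 44 or thereabouts keeps coming up as the practical ceiling.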

Classical simulation is not possible for post-NISQ algorithms and applications

Scalability and proof of scalability become urgent and even essential for the post-NISQ regime (more than a few hundred qubits) since simulation will no longer be available for validation, especially once quantum advantage is achieved, where by definition results cannot be achieved on classical hardware.

Points to keep in mind for this post-NISQ regime:

  1. Unable to classically simulate such large configurations.
  2. Presumption that results at larger sizes are valid because hardware and simulator results agreed for 4 to 44 qubits (see the sketch after this list).
  3. Need for automated tools to examine an algorithm and mathematically prove that if it works for 4 to 44 qubits on a simulator or real hardware, then it will work correctly for more than 44 qubits on real hardware. Proof that the algorithm is scalable.
  4. Especially tricky to prove scalability of algorithms which rely on fine granularity of phase and probability amplitude. But it’s essential. Plenty of basic research is needed.
  5. Need benchmark algorithms whose results can be quickly validated. Need to be able to test and validate the hardware itself.
  6. Algorithms and applications whose results cannot be rapidly validated are risky although they may be high-value.
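
Here is a minimal sketch of the cross-validation presumption from point 2 above, in plain Python. All of the functions are hypothetical stand-ins: run_on_simulator and run_on_hardware represent however the algorithm actually gets run, and the toy answer exists only to make the sketch self-contained and runnable. This is empirical confidence-building, not the kind of mathematical proof of scalability called for in point 3.

```python
# Sketch of the cross-validation idea (all names hypothetical): check that
# hardware and simulator agree at small sizes before trusting extrapolation
# to sizes that can no longer be simulated classically.

import random

def run_on_simulator(n):
    """Stand-in for running the algorithm on a classical quantum simulator."""
    return n * (n + 1) // 2  # toy "correct" answer for input size n

def run_on_hardware(n):
    """Stand-in for running on real hardware; the same answer, occasionally
    perturbed to mimic a rare residual error."""
    result = run_on_simulator(n)
    return result if random.random() > 0.01 else result + 1

def validated_up_to(max_simulable=24):
    """True if hardware matched the simulator at all small, simulable sizes."""
    return all(run_on_hardware(n) == run_on_simulator(n)
               for n in range(4, max_simulable + 1, 4))

if validated_up_to(24):
    print("Small-size agreement; larger runs rest on presumed scalability.")
else:
    print("Disagreement at small sizes; do not trust larger runs.")
```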

Quantum error correction does not eliminate the probabilistic nature of quantum computing

Quantum computing is inherently probabilistic. Many qubit measurements have the potential to produce results of 0 or 1 with some probability, not because of errors but due to the probability amplitudes of the quantum state of the qubit. So, even with full quantum error correction and perfect logical qubits, the results of qubit measurements may still vary from run to run, converging only on average toward the expectation value of the qubit’s quantum state.

On a quantum computer, 2 + 2 is not always 4 — sometimes it may be 3 or 5 or even other values, but on average, given enough runs or shots, 4 will be the most common value — the expectation value.

This also means that even with full quantum error correction and perfect logical qubits, shot count (circuit repetitions) is still needed to collect enough results to build an accurate probability distribution and expectation value for the quantum state of the qubit.

That said, in most cases it will be possible to dramatically reduce the shot count from its value on a NISQ device since many circuit repetitions are needed simply to compensate for errors that are not corrected on NISQ devices.

But even then, it may be that as algorithms are scaled from intermediate NISQ scale to production scale, shot count may need to be dramatically scaled as well. Generally, people aren’t publishing scalability parameters for published algorithms, including how shot count scales as input size grows.
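
To illustrate the point, here is a small plain-Python sketch of why shots are still needed even with perfect, error-free qubits: a qubit whose amplitudes give a 70/30 split between 0 and 1 still yields individual 0 or 1 outcomes at random, and only a growing shot count pins down the underlying probabilities (the statistical error shrinks roughly as one over the square root of the shot count).

```python
# Even a perfect, error-free qubit gives probabilistic measurement results.
# A qubit with amplitudes alpha|0> + beta|1> is measured many times; the
# fraction of 1s converges to |beta|**2 only as the shot count grows.

import math
import random

alpha, beta = math.sqrt(0.7), math.sqrt(0.3)  # |alpha|^2 + |beta|^2 = 1
p_one = abs(beta) ** 2

for shots in (10, 100, 1_000, 10_000, 100_000):
    ones = sum(1 for _ in range(shots) if random.random() < p_one)
    estimate = ones / shots
    std_err = math.sqrt(p_one * (1 - p_one) / shots)
    print(f"{shots:>7} shots: estimate {estimate:.3f} "
          f"(true {p_one:.3f}, +/- about {std_err:.3f})")
```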

For more on shot count and circuit repetitions, see my paper Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer.

Shot count (circuit repetitions) is still needed even with error-free logical qubits — to develop probabilistic expectation values

Simply reemphasizing the preceding section — that even with error-free logical qubits, shot count (or circuit repetitions) is still needed — since quantum computers are inherently probabilistic, even with perfect qubits.

Shot count (circuit repetitions) currently serves two distinct purposes: 1) to cope with errors, and 2) to develop a probabilistic expectation value, such as with quantum parallelism.

Error-free logical qubits do eliminate the first need for multiple shots (errors), but the second need remains for any quantum computation which exploits the probabilistic nature of quantum computing, especially quantum parallelism.

See the paper cited in the preceding section: Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer.

Use shot count (circuit repetitions) for mission-critical applications on the off chance of once in a blue moon errors

There is also the possibility that even logical qubits might not be absolutely, perfectly, 100% error-free. Mission-critical applications may therefore wish to use shot count (circuit repetitions) to run a quantum computation several times and use the common result of a majority of the runs, on the off chance that on some rare, once-in-a-blue-moon occasion a quantum computation encounters a stray uncorrected error.

Although application logic could handle this processing of multiple runs, it might be better or more efficient to have logic in the high-level interface library or even in the QPU driver on the quantum computer itself to do the multiple runs and checking for a common majority result.
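
Here is a minimal sketch of what that majority-vote logic might look like, whether it lives in the application, a high-level library, or the QPU driver. run_circuit is a hypothetical stand-in for whatever submits the quantum circuit and returns a result, and the error probability is made up purely for illustration.

```python
# Sketch of majority voting over repeated runs for a mission-critical result,
# guarding against rare uncorrected errors. run_circuit() is a hypothetical
# stand-in for submitting the circuit and returning its result.

import random
from collections import Counter

def run_circuit():
    """Stand-in: returns the correct answer except for a rare stray error."""
    return "0110" if random.random() > 0.001 else "0111"

def majority_result(runs=5):
    """Run the computation several times and return the most common result."""
    counts = Counter(run_circuit() for _ in range(runs))
    result, count = counts.most_common(1)[0]
    if count <= runs // 2:
        raise RuntimeError("No majority result; rerun or investigate")
    return result

print(majority_result(runs=5))
```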

We need nicknames for logical qubit and physical qubit

I can personally attest that it gets tedious and wordy to refer to logical qubit and physical qubit. We need nicknames.

That said, I don’t have any great proposals in mind at present.

I’m actually surprised that published papers haven’t suggested nicknames yet, but maybe scientists and other researchers are actually more content with wordy terms anyway.

I’m sure that once quantum error correction begins to advance out of the lab and become more of an engineering project, then the engineers will come up with nicknames, as they always do since they are less concerned with the formality of academic publication.

In my own notes I frequently write LQ to refer to a logical qubit. I haven’t used PQ for physical qubit yet, but it seems reasonable enough.

Competing approaches to quantum error correction will continue to evolve even after initial implementations become available

The first vendor or technology to produce a single logical qubit or even five or eight logical qubits will not automatically become the winner for all of time. As with technology in general, sometimes the pioneers reach too far and too fast and stumble badly on execution, allowing successive competitors to zoom past them and take the lead, at least for a time. Leapfrogging of competitors becomes common as well. So trying to pick the winning technology and winning vendor well in advance, like right now, is a true fool’s errand.

The main question is when hardware and quantum error correction will converge on a sweet-spot implementation which opens the algorithmic floodgates, enabling a broad range and diversity of production-scale algorithms and applications. Quantum error correction and hardware can then continue to evolve, potentially in a variety of directions, but algorithms will no longer be blocked by hardware, vendors, hardware errors, and lack of quantum error correction.

I care about the effects and any side effects or collateral effects that may be visible in algorithm results or visible to applications

I don’t care so much about the specifics of how error correction, measurement, or gate execution are implemented, but I do care about the effects and any side effects or collateral effects that may be visible in algorithm results or visible to applications, such as:

  1. Performance.
  2. Cost — total cost and cost per logical qubit.
  3. Capacity — physical qubits are a scarce resource, so they limit the capacity of logical qubits.
  4. Absolute impact on error rates.
  5. Guidance for shot count (circuit repetitions.)
  6. Impact on granularity of phase.
  7. Impact on granularity of probability amplitude.

Need for a much higher-level programming model

Should quantum applications even need to be aware of qubits, even logical qubits, or would higher-level abstractions (a la classical data types) be much more appropriate for an application-level quantum programming model? I definitely lean towards the latter.

If there is one lesson that my foray into quantum computing has taught me, over and over, it’s a renewed appreciation for the raw intellectual power of the programming models available on classical computers.

Believe it or not, classical programming doesn’t require knowledge of individual bits. How can that be?! How did they do it?!

I think part of the problem is that quantum computing originated with physicists and they just envisioned that computing could be accomplished easily without all of the overhead and complexity of classical programming models. And physicists didn’t need the complex classical programming abstractions to simulate physics, their main interest in quantum computing. Maybe. For very simple situations. But for large-scale and wide-ranging problems it’s a preposterous delusion.

In classical computing we aren’t restricted to working with individual bits, but have a variety of sophisticated data types to choose from:

  1. Integers
  2. Real numbers, floating point
  3. Booleans — logical true and false — binary, but not necessarily implemented as a single bit
  4. Text, strings, characters, character codes
  5. Structures, objects
  6. Arrays, trees, maps, graphs
  7. Media — audio, video
  8. Structured data
  9. Semi-structured data

This is not to suggest that the classical data types are in fact the most appropriate set of data types for quantum computers, but simply to draw an analogy between application-level information and data at the raw machine level.

Granted, some algorithms, just as in classical computing, may need to function at the binary “bit” level, but they should be the exception, not the rule.

Not having these data type abstractions, application developers must jump through algorithmic hoops and twist their reasoning into quantum algorithmic pretzels to transform real-world problems and data into a form suitable for execution on a quantum computer.
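
Purely as a speculative illustration (not any real library or proposal), here is what a tiny slice of a higher-level programming model might look like: the application declares a quantum unsigned integer of a given width and asks for an addition, and a compiler, not the application developer, would be responsible for expanding that into qubits and gates. The names QProgram and QUInt are invented for this sketch.

```python
# Purely speculative sketch of a higher-level programming model: the
# application works with a quantum unsigned integer type rather than raw
# qubits. All names here (QProgram, QUInt) are invented for illustration.

class QProgram:
    def __init__(self):
        self.next_qubit = 0
        self.ops = []  # symbolic operation list, to be compiled later

    def alloc(self, width):
        qubits = list(range(self.next_qubit, self.next_qubit + width))
        self.next_qubit += width
        return qubits

class QUInt:
    """A quantum unsigned integer backed by a register of qubits."""
    def __init__(self, program, width):
        self.program = program
        self.qubits = program.alloc(width)

    def add(self, other):
        # A real compiler would expand this into a reversible adder circuit.
        self.program.ops.append(("add", self.qubits, other.qubits))
        return self

prog = QProgram()
a, b = QUInt(prog, 8), QUInt(prog, 8)
a.add(b)
print(prog.next_qubit, "qubits allocated;", prog.ops)
```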

Besides, even “qubit” is a poor description of what is really happening at the hardware level, with a differential of probabilities between the purely binary 0 and 1 states, as well as the continuous value of phase angle between the binary basis states.
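
As a small numeric illustration of that richness, using the standard single-qubit parameterization cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1>, the state carries both a probability split between 0 and 1 and a continuous relative phase, which is far more than a classical bit. The specific angles below are arbitrary example values.

```python
# A qubit's state carries both a probability split between |0> and |1> and a
# continuous relative phase. Standard parameterization:
# cos(theta/2)|0> + e^(i*phi) * sin(theta/2)|1>.

import cmath
import math

theta, phi = math.pi / 3, math.pi / 4  # arbitrary example angles
alpha = math.cos(theta / 2)  # amplitude of |0>
beta = cmath.exp(1j * phi) * math.sin(theta / 2)  # amplitude of |1>

print("P(0) =", round(abs(alpha) ** 2, 3))
print("P(1) =", round(abs(beta) ** 2, 3))
print("relative phase (radians) =", round(cmath.phase(beta) - cmath.phase(alpha), 3))
```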

I won’t delve deeply into the possibilities for a much higher-level programming model for quantum computers since that is far beyond the scope of this informal paper, which is focused only on achieving reliable qubits, but the possibilities are virtually endless.

What Caltech Prof. John Preskill has to say about quantum error correction

Just some of his more recent words, as of 2019:

  1. In the near term, noise mitigation without full-blown quantum error correction.
  2. Lower quantum gate error rates will lower the overhead cost of quantum error correction, and also extend the reach of quantum algorithms which do not use error correction.
  3. Progress toward fault-tolerant QC must continue to be a high priority for quantum technologists.
  4. Quantum error correction (QEC) will be essential for solving some hard problems.

I do hope that I have fully and faithfully incorporated his wisdom into my own thinking and writing.

Source:

Getting beyond the hype

The nascent sector of quantum computing has already been plagued by hype of all forms, including marketing and technology. Getting beyond the hype is a major challenge in general, but is an especially daunting challenge when it comes to fault-tolerant quantum computing, quantum error correction, and logical qubits.

For example, too many people are acting as if fault-tolerant quantum computing, quantum error correction, and logical qubits — and quantum advantage — were already here and widely available when they are distinctly not available and won’t be available for years.

The temptations of hype are great, but we need to exert the supreme effort to leap beyond it.

I know I’m way ahead of the game, but that’s what I do, and what interests me

Logical qubits aren’t imminent or even likely within the next year or two, so why spend my attention on them at this juncture? Well, because that’s what I do (looking out into the future) and what I’m interested in. I have no personal need to focus on what’s available and practical right now, today. Rather, I’m much more interested in what comes in the future. Not too distant in the future (beyond ten or twenty years), but not limited to the very-near future (next two years) either.

I’m much more interested in focusing on what algorithm designers and application developers really need to develop production-scale practical applications rather than mere experiments and laboratory curiosities.

Scalability of algorithms needs to be a top priority to achieve dramatic quantum advantage. Logical qubits are a key technology to enable scalability of algorithms.

Conclusions

  1. We definitely need quantum error correction and logical qubits, urgently.
  2. We don’t have it and it’s not coming soon. Its arrival is not imminent.
  3. It’s an active area of research — nowhere close to being ready for prime-time production-scale practical real-world applications. Much more research money is needed. Much more creativity is needed.
  4. It’s not clear which qubit technology will prevail for achieving fault-tolerant quantum computing, quantum error correction, and logical qubits.
  5. Twin progressions are needed — research on quantum error correction and logical qubits and improvements to physical qubits.
  6. It’s a real race — quantum error correction and logical qubits vs. near-perfect qubits and the outcome is unclear.
  7. Near-perfect qubits are of value in their own right, even without quantum error correction.
  8. Research into advanced algorithms exploiting 24 to 40 logical qubits is also needed, including scalability and the ability to validate and prove scalability, to support algorithms beyond 40 qubits which can no longer be tested and validated on classical quantum simulators.
  9. Plenty of open questions and issues.
  10. Lots of patience is required.

I wish I could state more definitive facts about quantum error correction and logical qubits, but there are just too many questions and issues due to vagueness, ambiguity, competing approaches, hype, hedged claims, ongoing research, and dependencies on hardware and hardware error rates. Even as hardware and hardware error rate gradually catch up, approaches to quantum error correction continue to evolve. There’s no clarity at this time as to which approach will ultimately be best.

What’s next?

I’ve only surveyed the tip of the iceberg for fault-tolerant quantum computing, quantum error correction, and logical qubits. Some possibilities for my future efforts relate to these areas:

  1. Monitor research and papers as they are published. Refinements in quantum error correction approaches. New approaches. Approaches to near-term hardware.
  2. Monitor vendor activity and advances. Advances in hardware which can enable cost-effective quantum error correction, as well as refinements in quantum error correction approaches which can work with real hardware.
  3. Monitor algorithms — which can actually exploit and require quantum error correction.
  4. Monitor advanced tentative experimental hardware.
  5. Lots of patience.
  6. Deeper dive into quantum error correction itself, including the underlying theory.

Also,

  1. Consider posting introduction and nutshell sections as a standalone, briefer paper for people without the patience to read this full paper.
  2. Consider posting bibliography and references as a standalone paper.

For more of my writing: List of My Papers on Quantum Computing.

Glossary

Most terms used in this paper are defined in my quantum computing glossary:

References and bibliography

This section lists references for greater detail and historical perspective on logical qubits, fault-tolerant quantum computing, and quantum error correction.

For an overall summary of quantum error correction, consult the Wikipedia article on the topic:

And some detail on toric codes:

For historical perspective and technical depth, consult these academic papers — listed chronologically for historical perspective:

  1. 1995 — Scheme for reducing decoherence in quantum computer memory — Shor
    https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.R2493
  2. 1996 — Fault-tolerant quantum computation — Shor
    https://arxiv.org/abs/quant-ph/9605011, https://dl.acm.org/doi/10.5555/874062.875509
  3. 1996 — Multiple Particle Interference and Quantum Error Correction — Steane
    https://arxiv.org/abs/quant-ph/9601029
  4. 1997 — Fault-tolerant quantum computation by anyons — Kitaev
    https://arxiv.org/abs/quant-ph/9707021, https://www.sciencedirect.com/science/article/abs/pii/S0003491602000180
  5. 1997 — Fault-tolerant quantum computation — Preskill
    https://arxiv.org/abs/quant-ph/9712048
  6. 1997 — Fault Tolerant Quantum Computation with Constant Error — Aharonov and Ben-Or
    https://arxiv.org/abs/quant-ph/9611025, https://dl.acm.org/doi/10.1145/258533.258579
  7. 1998 — Quantum codes on a lattice with boundary — Bravyi and Kitaev
    https://arxiv.org/abs/quant-ph/9811052v1
  8. 2004 — Universal Quantum Computation with ideal Clifford gates and noisy ancillas — Bravyi and Kitaev
    https://arxiv.org/abs/quant-ph/0403025
  9. 2005 — Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories — Bacon
    https://arxiv.org/abs/quant-ph/0506023, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.73.012340
  10. 2006 — A Tutorial on Quantum Error Correction — Steane
    https://www2.physics.ox.ac.uk/sites/default/files/ErrorCorrectionSteane06.pdf
  11. 2007 — Fault-tolerant quantum computation with high threshold in two dimensions — Raussendorf and Harrington
    https://arxiv.org/abs/quant-ph/0610082, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.190504
  12. 2007 — Optimal Resources for Topological 2D Stabilizer Codes: Comparative Study — Bombin and Martin-Delgado
    https://arxiv.org/abs/quant-ph/0703272v1, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.012305
  13. 2010 — Quantum Computation and Quantum Information: 10th Anniversary Edition
    Michael Nielsen and Isaac Chuang (“Mike & Ike”)
    Chapter 10 — Quantum Error-Correction (Shor code)
    https://www.amazon.com/Quantum-Computation-Information-10th-Anniversary/dp/1107002176
    https://en.wikipedia.org/wiki/Quantum_Computation_and_Quantum_Information
  14. 2012 — Surface codes: Towards practical large-scale quantum computation — Fowler, Mariantoni, Martinis, Cleland
    https://arxiv.org/abs/1208.0928, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.86.032324
  15. 2013 — Implementing a strand of a scalable fault-tolerant quantum computing fabric — Chow, Gambetta, Steffen, et al
    https://arxiv.org/abs/1311.6330, https://www.nature.com/articles/ncomms5015
  16. 2014 — Dealing with errors in quantum computing — Chow
    https://www.ibm.com/blogs/research/2014/06/dealing-with-errors-in-quantum-computing/
  17. 2014 — Logic gates at the surface code threshold: Superconducting qubits poised for fault-tolerant quantum computing — Barends, Martinis, et al
    https://arxiv.org/abs/1402.4848, https://www.nature.com/articles/nature13171
  18. 2015 — Demonstration of a quantum error detection code using a square lattice of four superconducting qubits — Córcoles, Magesan, Srinivasan, Cross, Steffen, Gambetta, Chow
    https://www.nature.com/articles/ncomms7979
  19. 2015 — Building logical qubits in a superconducting quantum computing system — Gambetta, Chow, Steffen
    https://arxiv.org/abs/1510.04375, https://www.nature.com/articles/s41534-016-0004-0
  20. 2016 — Overhead analysis of universal concatenated quantum codes — Chamberland, Jochym-O’Connor, Laflamme
    https://arxiv.org/abs/1609.07497
  21. 2017/2018 — Correcting coherent errors with surface codes — Bravyi, Englbrecht, Koenig, and Peard
    https://arxiv.org/abs/1710.02270, https://www.nature.com/articles/s41534-018-0106-y
  22. 2018 — Quantum Computing with Noisy Qubits — Sheldon. In: National Academy of Engineering. Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2018 Symposium. Washington (DC): National Academies Press (US); 2019 Jan 28. Available from: https://www.ncbi.nlm.nih.gov/books/NBK538709/
  23. 2019 — Topological and subsystem codes on low-degree graphs with flag qubits — Chamberland, Zhu, Yoder, Hertzberg, Cross
    https://arxiv.org/abs/1907.09528, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.011022
  24. 2020 — Hardware-aware approach for fault-tolerant quantum computation — Zhu and Cross
    https://www.ibm.com/blogs/research/2020/09/hardware-aware-quantum/
  25. 2020 — Day 1 opening keynote by Hartmut Neven (Google Quantum Summer Symposium 2020)
    Current Google research status and roadmap for quantum error correction.
    https://www.youtube.com/watch?v=TJ6vBNEQReU&t=1231
  26. 2020 — Fault-Tolerant Operation of a Quantum Error-Correction Code — Egan, Monroe, et al
    https://arxiv.org/abs/2009.11482
  27. 2020 — Machine learning of noise-resilient quantum circuits — Cincio, Rudinger, Sarovar, and Coles
    https://arxiv.org/abs/2007.01210

Some interesting notes

This is simply interesting material that I ran across while researching this paper, which didn’t integrate cleanly into a section of the paper:

  1. The literature on surface codes is somewhat opaque.
    https://arxiv.org/abs/1208.0928
  2. The tolerance of surface codes to errors, with a per-operation error rate as high as about 1% [22, 23], is far less stringent than that of other quantum computational approaches. For example, calculations of error tolerances of the Steane and Bacon-Shor codes, implemented on two-dimensional lattices with nearest-neighbor coupling, find per-step thresholds of about 2 × 10^-5 [33, 34], thus requiring three orders of magnitude lower error rate than the surface code.
    https://arxiv.org/abs/1208.0928

--
