Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits

  1. The problem in a nutshell
  2. The problem
  3. NISQ has served us well, but much more is needed
  4. The basic story for fault-tolerant quantum computing
  5. Putting the magnitude of the problem in perspective
  6. My motivation
  7. I’m still not an expert on quantum error correction theory
  8. My intentions
  9. Apology — we’re just not there yet, or even close
  10. In a nutshell
  11. Quantum advantage is the real goal and logical qubits are the only way to get there
  12. Dramatic and compelling quantum advantage is needed
  13. Requirements to enable dramatic quantum advantage
  14. Quantum error correction is needed to realize the unfulfilled promise of quantum computing
  15. The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there
  16. Qubit reliability
  17. Qubit fidelity
  18. Error rate
  19. Types of errors
  20. NISQ — Noisy Intermediate-Scale Quantum devices
  21. Technically quantum computers with fewer than 50 qubits are not NISQ devices
  22. But people tend to refer to all current quantum computers as NISQ devices
  23. NSSQ is a better term for current small-scale quantum computers
  24. Fault-tolerant quantum computing and fault-tolerant qubits
  25. Fault-free vs. fault-tolerant
  26. FTQC — fault-tolerant quantum computing
  27. Logical qubit
  28. Fault-tolerant logical qubit
  29. Qubit as abstract information vs. qubit as a physical device or physical representation of that abstract information
  30. No logical qubits on NISQ devices
  31. Fault-free logical qubits
  32. Near-perfect qubits
  33. Virtually perfect qubits
  34. Fault tolerance vs. quantum error correction vs. logical qubits
  35. To my mind, progress in perfecting qubits is the best way to go
  36. Classical ECC
  37. Metaphor of ECC for classical computers
  38. Stabilized qubit
  39. Stable qubit
  40. Data qubit
  41. Stabilizer qubit
  42. Coherence extension
  43. Quantum memory
  44. Technical prerequisites for quantum error correction and logical qubits
  45. Technical requirements for quantum error correction and logical qubits
  46. Theory vs. design and architecture vs. implementation vs. idiosyncrasies for quantum error correction of each particular quantum computer
  47. I’ve changed my view of quantum error correction
  48. My own preference is for near-perfect qubits over overly-complex quantum error correction
  49. Manual error mitigation and correction just won’t cut it
  50. Quantum error mitigation vs. quantum error correction
  51. Manual, explicit “error correction” (error mitigation)
  52. Automatic quantum error correction
  53. Quantum error correction is inherently automatic, implied, and hidden (transparent) while error mitigation is inherently manual, explicit, and visible
  54. Noise-resilient and noise-aware techniques
  55. Quantum error correction is still a very active research area — not even yet a laboratory curiosity
  56. Twin progressions — research on quantum error correction and improvements to physical qubits
  57. Quantum threshold theorem
  58. Focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available to achieve dramatic quantum advantage
  59. Still need to advance algorithms to 30–40 qubits using ideal simulators
  60. NISQ vs. fault-tolerant and near-perfect, small-scale, and large-scale
  61. NSSQ — Noisy Small-Scale Quantum devices
  62. NISQ — Noisy Intermediate-Scale Quantum devices
  63. NLSQ — Noisy Large-Scale Quantum devices
  64. NPSSQ — Near-Perfect Small-Scale Quantum devices
  65. NPISQ — Near-Perfect Intermediate-Scale Quantum devices
  66. NPLSQ — Near-Perfect Large-Scale Quantum devices
  67. FTSSQ — Fault-Tolerant Small-Scale Quantum devices
  68. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices
  69. FTLSQ — Fault-Tolerant Large-Scale Quantum devices
  70. What is post-NISQ?
  71. When will post-NISQ begin?
  72. Post-noisy is a more accurate term than post-NISQ
  73. But for most uses post-NISQ will refer to post-noisy
  74. Vendors need to publish roadmaps for quantum error correction
  75. Vendors need to publish roadmaps for near-perfect qubits
  76. Likely that 32-qubit machines can achieve near-perfect qubits for relatively short algorithms within a couple of years
  77. Unlikely to achieve 32 logical qubits for at least five years
  78. Levels of qubit quality
  79. Possible need for co-design to achieve optimal hardware design for quantum error correction
  80. Top 10 questions
  81. Additional important questions
  82. Kinds of questions beyond the scope or depth of this paper
  83. Top question #1: When will quantum error correction and logical qubits be practical?
  84. Top question #2: How much will hardware have to advance before quantum error correction becomes practical?
  85. Top question #3: Will quantum error correction be truly 100% transparent to quantum algorithms and applications?
  86. Top question #4: How many physical qubits will be needed for each logical qubit?
  87. Citations for various numbers of physical qubits per logical qubit
  88. Formulas from IBM paper for physical qubits per logical qubit
  89. For now, 65 physical qubits per logical qubit is as good an estimate as any
  90. Top question #5: Does quantum error correction guarantee absolute 100% perfect qubits?
  91. Top question #6: Does quantum error correction guarantee infinite coherence?
  92. Top question #7: Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?
  93. Top question #8: Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?
  94. Top question #9: What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?
  95. Top question #10: How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?
  96. Do we really need quantum error correction if we can achieve near-perfect qubits?
  97. Will qubits eventually become good enough that they don’t necessarily need quantum error correction?
  98. Which will win the race, quantum error correction or near-perfect qubits?
  99. When will logical qubits be ready to move beyond the laboratory curiosity stage of development?
  100. How close to perfect is a near-perfect qubit?
  101. How close to perfect must near-perfect qubits be to enable logical qubits?
  102. How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?
  103. When can we expect near-perfect qubits?
  104. Are perfect qubits possible?
  105. How close to perfect will logical qubits really be?
  106. But doesn’t IonQ claim to have perfect qubits?
  107. When can we expect logical qubits of various capacities?
  108. When can we expect even a single logical qubit?
  109. When can we expect 32 logical qubits?
  110. What is quantum error correction?
  111. What is a quantum error correcting code?
  112. Is NISQ a distraction and causing more harm than good?
  113. NISQ as a stepping stone to quantum error correction and logical qubits
  114. What is Rigetti doing about quantum error correction?
  115. Is it likely that large-scale logical qubits can be implemented using current technology?
  116. Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?
  117. What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?
  118. What do the wave functions of logical qubits look like?
  119. Are all of the physical qubits of a single logical qubit entangled together?
  120. How many wave functions are there for a single logical qubit?
  121. For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?
  122. What is the performance cost of quantum error correction?
  123. What is the performance of logical qubit gates and measurements relative to NISQ?
  124. How is a logical qubit initialized, to 0?
  125. What happens to connectivity under quantum error correction?
  126. How useful are logical qubits if still only weak connectivity?
  127. Are SWAP networks still needed under quantum error correction?
  128. How does a SWAP network work under quantum error correction?
  129. How efficient are SWAP networks for logical qubits?
  130. What are the technical risks for achieving logical qubits?
  131. How perfectly can a logical qubit match the probability amplitudes for a physical qubit?
  132. Can probability amplitude probabilities of logical qubits ever be exactly 0.0 or 1.0 or is there some tiny, Planck-level epsilon?
  133. What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?
  134. Does the stability of a logical qubit imply greater precision or granularity of quantum state?
  135. Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?
  136. Do trapped-ion qubits need quantum error correction?
  137. Can simulation of even an ideal quantum computer be the same as an absolutely perfect classical quantum simulator since there may be some residual epsilon uncertainty down at the Planck level for even a perfect qubit?
  138. How small must single-qubit error (physical or logical) be before nobody will notice?
  139. What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?
  140. What is the impact of quantum error correction on granularity of phase and probability amplitude?
  141. What are the effects of quantum error correction on phase precision?
  142. What are the effects of quantum error correction on probability amplitude precision?
  143. What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?
  144. How are multi-qubit product states realized under quantum error correction?
  145. What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?
  146. At which stage(s) of the IBM quantum roadmap will logical qubits be operational?
  147. Does the Bloch sphere have any meaning or utility under quantum error correction?
  148. Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?
  149. Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?
  150. Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?
  151. Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?
  152. How are gates executed for all data qubits of a single logical qubit?
  153. How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?
  154. Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?
  155. How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?
  156. How exactly does quantum error correction actually address measurement errors?
  157. Does quantum error correction really protect against gate errors or even measurement errors?
  158. Will quantum error correction approaches vary based on the physical qubit technology?
  159. Is the quantum volume metric still valid for quantum error correction and logical qubits?
  160. Is the quantum volume metric relevant to perfect logical qubits?
  161. What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?
  162. Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?
  163. Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?
  164. How is quantum error correction not a violation of the no-cloning theorem?
  165. Is quantum error correction too much like magic?
  166. Who’s closest to real quantum error correction?
  167. Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?
  168. Are logical qubits guaranteed to have infinite coherence?
  169. What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?
  170. Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?
  171. Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?
  172. Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?
  173. Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?
  174. Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?
  175. What are the potential tradeoffs for quantum error correction and logical qubits?
  176. How severely does quantum error correction impact gate execution performance?
  177. How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?
  178. Are there other approaches to logical qubits than strict quantum error correction?
  179. How many logical qubits are needed to achieve quantum advantage for practical applications?
  180. Is it any accident that IBM’s latest machine has 65 qubits?
  181. What is a surface code?
  182. Background on surface codes
  183. What is the Steane code?
  184. How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?
  185. What is magic state distillation?
  186. Depth d is the square root of physical qubits per logical qubit in a surface code
  187. What are typical values of d for a surface code?
  188. Is d = 5 really optimal for surface codes?
  189. What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?
  190. Prospects for logical qubits
  191. Google and IBM have factored quantum error correction into the designs of their recent machines
  192. NISQ simulators vs. post-NISQ simulators
  193. Need for a paper showing how logical qubit gates work on physical qubits
  194. Need detailed elaboration of basic logical qubit logic gate execution
  195. Need animation of what happens between the physical qubits during correction
  196. Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits
  197. Near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications
  198. Likely need logical qubits to achieve the FORTRAN moment
  199. Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction
  200. Readers should suggest dates for various hardware and application milestones
  201. Call for applications to plant stakes at various logical qubit milestones
  202. Reasonable postures to take on quantum error correction and logical qubits
  203. Hardware fabrication challenges are the critical near-term driver, not algorithms
  204. Need to prioritize basic research in algorithm design
  205. Need for algorithms to be scalable
  206. Need for algorithms which are provably scalable
  207. How scalable is your quantum algorithm?
  208. Classical simulation is not possible for post-NISQ algorithms and applications
  209. Quantum error correction does not eliminate the probabilistic nature of quantum computing
  210. Shot count (circuit repetitions) is still needed even with error-free logical qubits — to develop probabilistic expectation values
  211. Use shot count (circuit repetitions) for mission-critical applications on the off chance of once in a blue moon errors
  212. We need nicknames for logical qubit and physical qubit
  213. Competing approaches to quantum error correction will continue to evolve even after initial implementations become available
  214. I care about the effects and any side effects or collateral effects that may be visible in algorithm results or visible to applications
  215. Need for a much higher-level programming model
  216. What Caltech Prof. John Preskill has to say about quantum error correction
  217. Getting beyond the hype
  218. I know I’m way ahead of the game, but that’s what I do, and what interests me
  219. Conclusions
  220. What’s next?
  221. Glossary
  222. References and bibliography
  223. Some interesting notes

The problem in a nutshell

  1. Quantum state of qubits dissipates rapidly — circuit depth is very limited.
  2. Operations on qubits (quantum logic gates) are imperfect.
  3. Measurement of qubits is imperfect.

The problem

Errors occur in two broad situations:

  1. Errors which occur within individual qubits, even when completely idle.
  2. Errors which occur when operations are performed on qubits. Qubits in action.

The most notable types of errors:

  1. Decoherence. Gradual decay of values over time.
  2. Gate errors. Each operation on a qubit introduces another degree of error.
  3. Measurement errors. Simply measuring a qubit has some chance of failure.

Sources of errors include:

  1. Environmental interference. Even despite the best available shielding.
  2. Crosstalk between devices. Absolute isolation is not assured.
  3. Noise in control circuitry. Noise in the classical digital and analog circuitry which controls execution of gates on qubits.
  4. Imperfections in the manufacture of qubits.

NISQ has served us well, but much more is needed

The basic story for fault-tolerant quantum computing

Putting the magnitude of the problem in perspective

My motivation

I’m still not an expert on quantum error correction theory

My intentions

  1. What the technology can do.
  2. What the technology can’t do or might not do.
  3. What limitations the technology might have.
  4. Where the technology needs more work.
  5. The potential timing of when the technology might become available.
  6. What actions algorithm designers and application developers might consider taking to exploit this new technology.
  7. Any questions I have.
  8. Any issues that I have identified.

Apology — we’re just not there yet, or even close

  1. IBM
  2. Google
  3. IonQ
  4. Honeywell
  5. Rigetti

In a nutshell

  1. NISQ is better than nothing, but…
  2. NISQ is not enough.
  3. Noisy qubit algorithms won’t be scalable to significant circuit depth.
  4. Perfect qubits would be best, but…
  5. If we can’t have perfect qubits, logical qubits would be good enough.
  6. Quantum error correction is the critical wave of the future.
  7. Quantum error correction is needed to unlock and realize the unfulfilled promise and full potential of quantum computing.
  8. But no time soon — it could be 2–7 years before we see even a modest number of logical qubits.
  9. Focus on logical qubits where algorithms and applications see only logical qubits — error correction is automatic, implicit, and hidden, rather than algorithm or application-driven error mitigation which is manual, explicit, and visible.
  10. The FORTRAN moment for quantum computing may not be possible without quantum error correction and logical qubits.
  11. But near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications, at least for some of the more elite users, even though not for most average users.
  12. Quantum advantage may not be possible without quantum error correction.
  13. The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there.
  14. Achieving near-perfect qubits would obviate a significant fraction of the need for full quantum error correction. More-perfect qubits are a win-win — better for NISQ, and enabling more efficient quantum error correction.
  15. It’s a real race — quantum error correction vs. near-perfect qubits — the outcome is unclear.
  16. Do we really need quantum error correction if we can achieve near-perfect qubits?
  17. It’s not clear if quantum error correction means logical qubits are absolutely perfect and without any errors, or just a lot better even if not perfect. Maybe there are levels of quantum error correction. Or stages of evolution towards full, perfect quantum error correction.
  18. Quantum error correction must be automatic and transparent — anything less is really just error mitigation, not true error correction, and won’t fully enable widespread development of production-scale applications achieving true quantum advantage.
  19. It may be five or even seven years before we see widespread availability of quantum error correction and logical qubits, though we might see partial implementations in 2–3 years.
  20. It’s not yet clear what number of physical qubits will be needed for each logical qubit. Maybe 7, 9, 13, 25, 57, 65, hundreds, thousands, or even millions.
  21. There may be degrees of fault tolerance or levels of logical qubit quality rather than one size fits all.
  22. Need to focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available, to achieve dramatic quantum advantage.
  23. Definition of logical qubit.
  24. Plenty of open questions and issues.
  25. Still a very active area of research.
  26. Still a laboratory curiosity — or not even yet a laboratory curiosity as many proposals are still only on paper, nowhere near close to being ready for production-scale use for practical real-world applications.
  27. Great progress is indeed being made — IBM and Google. Lots of research papers.
  28. Quantum error correction does not eliminate the probabilistic nature of quantum computing — shot count (circuit repetitions) is still needed to collect enough probabilistic results to provide an accurate probability distribution.
  29. Even with quantum error correction, we still need to see the development of a rich portfolio of algorithms which exploit 40 to 60 qubits of logical qubits to achieve true quantum advantage for practical real-world problems.
  30. It’s unclear whether we will need classical quantum simulators beyond 40–50 qubits once we have true quantum error correction and logical qubits. After all, we don’t simulate most of what we do on a regular basis on modern classical computers.
  31. Should quantum applications even need to be aware of qubits, even logical qubits, or should higher level abstractions (ala classical data types) be much more appropriate for an application-level quantum programming model?

Quantum advantage is the real goal and logical qubits are the only way to get there

Dramatic and compelling quantum advantage is needed

Requirements to enable dramatic quantum advantage

  1. More than 50 qubits in a single quantum computation. In a single Hadamard transform of 2⁵⁰ quantum states to be operated on in parallel.
  2. More than shallow circuit depth.
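To put the 50-qubit threshold in perspective: brute-force classical simulation must store 2ⁿ complex amplitudes for n qubits, which is why roughly 50 qubits is where simulation becomes intractable. A quick back-of-the-envelope sketch (my own illustration, assuming 16-byte complex128 amplitudes):

```python
def statevector_memory_bytes(n: int) -> int:
    """Memory needed to store a full n-qubit statevector: 2**n complex
    amplitudes at 16 bytes each (complex128)."""
    return (2 ** n) * 16

print(f"{statevector_memory_bytes(30) / 2**30:.0f} GiB")  # 16 GiB
print(f"{statevector_memory_bytes(50) / 2**50:.0f} PiB")  # 16 PiB
```

Each added qubit doubles the classical memory required, so a 50-qubit Hadamard transform generates a superposition which no existing supercomputer can represent in full.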

Quantum error correction is needed to realize the unfulfilled promise of quantum computing

The raw horsepower of advanced algorithmic building blocks such as quantum phase estimation (QPE) and quantum Fourier transform (QFT) is needed to achieve quantum advantage, but fault-free logical qubits are needed to get there

Qubit reliability

  1. One nine. Such as 90%, 95%, 97%, or even 98%. One error in 10, 20, 33, or 50 operations.
  2. Two nines. Such as 99%, 99.5%, or even 99.8%. One error in 100 operations.
  3. Three nines. Such as 99.9%, 99.95%, or even 99.98%. One error in 1,000 operations.
  4. Four nines. Such as 99.99%, 99.995%, or even 99.998%. One error in 10,000 operations.
  5. Five nines. Such as 99.999%, 99.9995%, or even 99.9998%. One error in 100,000 operations.
  6. Six nines. Such as 99.9999%, 99.99995%, or even 99.99998%. One error in one million operations.
  7. Seven nines. Such as 99.99999%, 99.999995%, or even 99.999998%. One error in ten million operations.
  8. And so on. As many nines as you wish.

Qubit fidelity

Error rate

  1. 1.0 or 100% — all operations fail. 0% qubit reliability.
  2. 0.50 or 50% — half of operations fail. 50% qubit reliability.
  3. 0.10 or 10% — one in ten operations fails. 90% qubit reliability.
  4. 0.05 or 5% — one in twenty operations fails. 95% qubit reliability.
  5. 0.02 or 2% — one in fifty operations fails. 98% qubit reliability.
  6. 0.01 or 10⁻² or 1% — one in a hundred operations fails. 99% qubit reliability.
  7. 0.001 or 10⁻³ or 0.1% — one in a thousand operations fails. 99.9% (3 9’s) qubit reliability.
  8. 0.0001 or 10⁻⁴ or 0.01% — one in ten thousand operations fails. 99.99% (4 9’s) qubit reliability.
  9. 0.00001 or 10⁻⁵ or 0.001% — one in a hundred thousand operations fails. 99.999% (5 9’s) qubit reliability.
  10. 0.000001 or 10⁻⁶ or 0.0001% — one in a million operations fails. 99.9999% (6 9’s) qubit reliability.
  11. 0.0000001 or 10⁻⁷ or 0.00001% — one in ten million operations fails. 99.99999% (7 9’s) qubit reliability.
  12. 0.00000001 or 10⁻⁸ or 0.000001% — one in a hundred million operations fails. 99.999999% (8 9’s) qubit reliability.
  13. 0.000000001 or 10⁻⁹ or 0.0000001% — one in a billion operations fails. 99.9999999% (9 9’s) qubit reliability.
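The correspondence between error rate, reliability, and “nines” in the lists above is just a base-10 logarithm. A minimal sketch (function names are mine, for illustration only):

```python
import math

def reliability(error_rate: float) -> float:
    """Qubit reliability as a percentage, given a per-operation error rate."""
    return 100.0 * (1.0 - error_rate)

def nines(error_rate: float) -> float:
    """Number of nines of reliability, e.g. 0.001 -> three nines (99.9%)."""
    return -math.log10(error_rate)

print(f"{reliability(0.001):.4g}%")  # 99.9%
print(round(nines(0.001)))           # 3
```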

Types of errors

Errors occur in two broad situations:

  1. Errors which occur within individual qubits, even when completely idle.
  2. Errors which occur when operations are performed on qubits. Qubits in action.

The most notable types of errors:

  1. Decoherence. Gradual decay of values over time. Even when idle.
  2. Gate errors. Each operation on a qubit introduces another potential degree of error.
  3. Measurement errors. Simply measuring a qubit has some chance of failure.

Sources of errors include:

  1. Environmental interference. Even despite the best available shielding.
  2. Crosstalk between devices. Absolute isolation is not assured.
  3. Noise in control circuitry. Noise in the classical digital and analog circuitry which controls execution of gates on qubits.
  4. Imperfections in the manufacture of qubits.

NISQ — Noisy Intermediate-Scale Quantum devices

  • This stands for Noisy Intermediate-Scale Quantum. Here “intermediate scale” refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. 50 qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers. “Noisy” emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term.
  • Quantum Computing in the NISQ era and beyond
  • John Preskill
  • https://arxiv.org/abs/1801.00862

Technically quantum computers with fewer than 50 qubits are not NISQ devices

  • Google — 53 qubits.
  • IBM — 53 qubits.
  • IBM — 65 qubits

But people tend to refer to all current quantum computers as NISQ devices

NSSQ is a better term for current small-scale quantum computers

Fault-tolerant quantum computing and fault-tolerant qubits

Fault-free vs. fault-tolerant

FTQC — fault-tolerant quantum computing

Logical qubit

  1. What the algorithm designer or application developer sees in the programming model.
  2. What the hardware implements.
  • logical qubit. The fault-free qubits which are directly referenced by algorithm designers and application developers, usually to distinguish them from the physical qubits and quantum error correction algorithms which may be needed to implement each logical qubit in order to provide them with fault-tolerance.

Fault-tolerant logical qubit

Qubit as abstract information vs. qubit as a physical device or physical representation of that abstract information

  1. A classical bit is a unit of abstract information that can be represented on a storage medium or in an electronic device, such as a flip flop, memory cell, or logic gate. A 0 or 1, distinct from how a 0 or 1 is represented physically.
  2. A quantum bit or qubit is a hardware device which can store the representation of quantum state in that hardware device. Quantum state being two basis states, |0> and |1>, two probability amplitudes, and a phase difference. But that abstract notion of quantum state is distinct from the hardware device representing that abstract state information.

No logical qubits on NISQ devices

Fault-free logical qubits

Near-perfect qubits

  1. Good enough to enable quantum error correction to produce logical qubits.
  2. Good enough for some interesting range of quantum applications so that they are able to produce acceptable results even without quantum error correction.

Virtually perfect qubits

Fault tolerance vs. quantum error correction vs. logical qubits

  1. Fault tolerance means that the computer is capable of successfully completing a computation even in the presence of errors. Typically via quantum error correction.
  2. Quantum error correction is a method for restoring qubits to their correct quantum state should they be corrupted somehow. A method for achieving fault tolerance.
  3. Logical qubits are qubits which are fault tolerant. Typically via quantum error correction.
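The simplest illustration of how redundancy yields fault tolerance is the three-qubit bit-flip repetition code, where a majority vote over three copies decodes the value. Real quantum error correction must also handle phase errors and must extract error syndromes without directly measuring the data qubits, but the underlying arithmetic — a physical error rate p becoming a logical error rate of roughly 3p² — can be sketched with a simple Monte Carlo simulation (my own illustration, not any vendor’s implementation):

```python
import random

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the logical error rate of a 3-qubit
    repetition (bit-flip) code with per-qubit error probability p,
    decoded by majority vote over the three qubits."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:  # majority flipped: the vote decodes incorrectly
            failures += 1
    return failures / trials

random.seed(42)
# Analytically the failure rate is 3p^2(1-p) + p^3 = 3p^2 - 2p^3.
print(logical_error_rate(0.01))  # roughly 0.0003 -- far below the raw p = 0.01
```

Note that the encoded error rate is lower than the physical error rate only when p is small to begin with — the essence of the quantum threshold theorem mentioned later in this paper.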

To my mind, progress in perfecting qubits is the best way to go

Classical ECC

Metaphor of ECC for classical computers

Stabilized qubit

Stable qubit

Data qubit

Stabilizer qubit

Coherence extension

  1. A factor or ratio relative to the coherence time of a physical qubit. 1.25 for a 25% increase, 2.00 for a doubling or 100% increase, 10 for a tenfold increase, 100 for a hundredfold increase, 1,000,000 for a millionfold increase.
  2. A percentage of the coherence time of a physical qubit. 125% for a 25% increase, 200% for a doubling or 100% increase, 1,000% for a tenfold increase, 10,000% for a hundredfold increase, 100,000,000% for a millionfold increase.
  3. Indefinite. As long as the machine has power.
  4. Infinite or persistent. Persists even if the machine is powered off.
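The bookkeeping between the ratio and percentage forms above is simple arithmetic; a tiny sketch (hypothetical function names of my own):

```python
def coherence_extension_ratio(logical_t: float, physical_t: float) -> float:
    """Coherence extension as a factor: logical over physical coherence time."""
    return logical_t / physical_t

def coherence_extension_percent(logical_t: float, physical_t: float) -> float:
    """The same extension expressed as a percentage of the physical time."""
    return 100.0 * logical_t / physical_t

# A logical qubit holding its state for 200 microseconds on
# 100-microsecond physical qubits:
print(coherence_extension_ratio(200, 100))    # 2.0 -- a doubling
print(coherence_extension_percent(200, 100))  # 200.0
```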

Quantum memory

  • quantum memory. One or more qubits which are capable of maintaining their quantum state for an indefinite if not infinite period of time. Indefinite meaning as long as the machine has power, and infinite or persistent meaning even if the machine no longer has power. The former analogous to the main memory of a classical computer, and the latter analogous to the mass storage of a classical computer.

Technical prerequisites for quantum error correction and logical qubits

  1. Complete theoretical basis. All of the details of the theory behind both quantum error correction and logical qubits need to be worked out in excruciating detail before the hardware can be designed and implemented.
  2. Lower-error qubits. Better, higher-quality hardware for qubits. The exact required error rate is unknown.
  3. Many physical qubits. The exact number of physical qubits per logical qubit is unknown.
  4. Fast qubit control to handle many physical qubits. Execution of a single quantum logic gate will affect not simply one or two qubits, but many dozens, hundreds, or even thousands of physical qubits.
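For a concrete sense of item #3, consider the commonly cited rotated surface code layout (my example, not a vendor roadmap): a distance-d logical qubit uses d² data qubits plus d² − 1 stabilizer measurement qubits. Counts that quote only d² are counting the data qubits alone.

```python
def surface_code_physical_qubits(d: int) -> int:
    """Physical qubits per logical qubit for a distance-d rotated surface
    code: d*d data qubits plus d*d - 1 stabilizer measurement qubits."""
    assert d >= 3 and d % 2 == 1, "code distance is an odd integer >= 3"
    return 2 * d * d - 1

for d in (3, 5, 7):
    print(d, surface_code_physical_qubits(d))  # 3 -> 17, 5 -> 49, 7 -> 97
```

So even modest code distances quickly consume an entire current-generation machine to produce a single logical qubit.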

Technical requirements for quantum error correction and logical qubits

  • A physical qubit that is well isolated from the environment and is capable of being addressed and coupled to more than one extra qubit in a controllable manner,
  • A fault-tolerant architecture supporting reliable logical qubits, and
  • Universal gates, initialization, and measurement of logical qubits

Some quotes from IBM’s 2015/2017 paper on logical qubits

  1. Building logical qubits in a superconducting quantum computing system
  2. Jay M. Gambetta, Jerry M. Chow, Matthias Steffen
  3. https://arxiv.org/abs/1510.04375
  4. https://www.nature.com/articles/s41534-016-0004-0
  1. Scalable fault-tolerant quantum computers — “Overall, the progress in this exciting field has been astounding, but we are at an important turning point where it will be critical to incorporate engineering solutions with quantum architectural considerations, laying the foundation towards scalable fault-tolerant quantum computers in the near future.”
  2. Quantum conflict — “balancing just enough control and coupling, while preserving quantum coherence.”
  3. Logical qubits — “The essential idea in quantum error correction (QEC) is to encode information in subsystems of a larger physical space that are immune to noise.”
  4. Fault-tolerant logical qubits — “QEC can be used to define fault-tolerant logical qubits, through employing a subtle redundancy in superpositions of entangled states and non-local measurements to extract entropy from the system without learning the state of the individual physical qubits.”
  5. Surface code — “There are many approaches to achieving quantum fault-tolerance, one of the most promising is the two-dimensional (2D) surface code.”
  6. Quantum memory, fault-tolerant quantum memory — “near term progress towards the monumental task of fully fault-tolerant universal quantum computing will hinge upon using QEC for demonstrating a quantum memory: a logical qubit that is sufficiently stable against local errors and ultimately allows essentially error-free storage.”
  7. Fault-tolerant error correction architecture — “The particular arrangement of physical qubits is governed by selection of a fault-tolerant error correction architecture.”
  1. Surface code and the Bacon-Shor code, both of which are famous and widely studied examples in the quantum error correction community.
  2. Co-design of quantum hardware and error-correcting codes
  3. The tension between ideal requirements and physical constraints couples the abstract and the practical.
  4. “…as we move closer as a community to experimentally demonstrating fault-tolerant quantum error correction.”

Theory vs. design and architecture vs. implementation vs. idiosyncrasies for quantum error correction of each particular quantum computer

  1. Theory. There may be multiple theoretical approaches or methods.
  2. Design and architecture. There could be multiple approaches to designing implementations of a given theory, plus multiple theories.
  3. Implementation. Even given a particular design and architecture, there may be practical reasons, challenges, or opportunities for implementing a particular design and architecture differently on a particular machine, such as resource constraints, technology constraints, and tradeoffs for balancing competing constraints, or even application-specific requirements.
  4. Idiosyncrasies. Every machine has its own quirks which can interfere with or enhance sophisticated algorithms such as quantum error correction.
  1. Technical details of design and implementation.
  2. The subset of details which algorithm designers and application developers need to be aware of to fully exploit the capabilities of the machine. Anything which can affect the function, performance, or limitations of algorithms and applications, but excludes under-the-hood details which have no impact on the design of algorithms and applications.

I’ve changed my view of quantum error correction

  1. Quantum error correction proposals were way too complicated to be implemented any time soon, likely more than five years and maybe not even for ten years — or longer.
  2. Based on progress reported by hardware vendors such as IBM and Rigetti, quantum hardware reliability was improving so rapidly that it seemed likely to get much closer to 100% well before quantum error correction was even feasible. As a result, many or most quantum algorithm designers and application developers would likely be able to get by, if not thrive, with the improved hardware within a few years, without any real and pressing need for the fabled and promised but distant quantum error correction.
  1. Quite a few algorithms and applications really will need the greater precision of absolute quantum error correction, even if many algorithms and applications do not. Experiments and prototypes may not need quantum error correction, but production-scale applications are likely to need full-blown quantum error correction.
  2. Algorithms and applications are likely to need quantum error correction to achieve dramatic quantum advantage.
  3. Dramatic quantum advantage is likely to require advanced algorithmic methods such as quantum phase estimation (QPE) and quantum Fourier transforms (QFT), which will in turn require the greater precision of quantum error correction.
  4. To be usable by average, non-elite staff, quantum error correction needs to be full, automatic, and transparent with true logical qubits, as opposed to manual and explicit error mitigation.
  5. To be clear, a quantum computer with 50 or more noisy qubits without quantum error correction is simply not going to be able to support applications capable of achieving dramatic quantum advantage.
  6. The intermediate hardware improvements (qubits with much lower error rates) I envisioned in my original second conclusion are actually also needed as the foundation of quantum error correction, so that achieving that better hardware would also enable quantum error correction to come to fruition forthwith.
  7. Hardware vendors, including IBM and Google, have already been designing aspects of their newer hardware to be much closer to what is required to support quantum error correction. Although quantum error correction is not imminent, it is a lot closer than I originally imagined. Maybe two to five years rather than five to ten years.

My own preference is for near-perfect qubits over overly-complex quantum error correction

Manual error mitigation and correction just won’t cut it

Quantum error mitigation vs. quantum error correction

Manual, explicit “error correction” (error mitigation)

Automatic quantum error correction

Quantum error correction is inherently automatic, implied, and hidden (transparent) while error mitigation is inherently manual, explicit, and visible

  1. Automatic vs. manual.
  2. Implied vs. explicit.
  3. Hidden (transparent) vs. visible.

Noise-resilient and noise-aware techniques

  1. Hardware which handles noise well.
  2. Application techniques for coping with errors caused by noise. In other words, manual error mitigation.

Quantum error correction is still a very active research area — not even yet a laboratory curiosity

Twin progressions — research on quantum error correction and improvements to physical qubits

  1. Discover and develop newer and better technologies for individual qubits, with the essential goals of dramatically reducing their error rate and increasing the physical capacity since even a single logical qubit requires a lot of physical qubits.
  2. Discover and develop newer and better schemes for encoding a logical qubit as a collection of physical qubits, as well as schemes for controlling and performing operations on one or more logical qubits as collections of physical qubits.
  1. Better qubits are of value even without quantum error correction. A double benefit.
  2. Efficient and cost-effective quantum error correction requires better qubits. Qubits of lower quality (higher error rate) increase the cost and lower the capacity of logical qubits. Lower-quality qubits mean more physical qubits are required per logical qubit, reducing the number of logical qubits which can be implemented on a given piece of hardware since physical qubits remain a relatively scarce commodity.

Focus on simulators to accelerate development of critical applications that will be able to exploit logical qubit hardware when it becomes available to achieve dramatic quantum advantage

Still need to advance algorithms to 30–40 qubits using ideal simulators

  1. Small-scale algorithms for 5 to 24 qubits. They can run on real machines, current hardware.
  2. Large-scale algorithms designed for hundreds to thousands of qubits. Academic papers. Purely theoretical. Nothing that runs on current hardware.

Quantum threshold theorem

NISQ vs. fault-tolerant and near-perfect, small-scale, and large-scale

  1. Noisy — N. All current and near-term quantum computers.
  2. Near-perfect — NP. Any current, near-term, and longer-term quantum computers with more than a couple of 9’s in their qubit reliability, like 99.9%, 99.99%, 99.999%, and 99.9999% — using only raw physical qubits, no error correction or logical qubits. Close enough to perfection that quite a few applications can get respectable results without the need for quantum error correction and logical qubits.
  3. Fault-tolerant — FT. Quantum error correction and logical qubits with 100% reliability of qubits.
  1. Small scale — SS. Under 50 qubits.
  2. Intermediate scale — IS. 50 to a few hundred qubits.
  3. Large scale — LS. More than a few hundred qubits.
  1. NSSQ — Noisy Small-Scale Quantum devices. Most of today’s quantum computers. Under 50 or so qubits.
  2. NISQ — Noisy Intermediate-Scale Quantum devices. 50 to a few hundred or so noisy qubits.
  3. NLSQ — Noisy Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of noisy qubits.
  4. NPSSQ — Near-Perfect Small-Scale Quantum devices. Less than 50 or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  5. NPISQ — Near-Perfect Intermediate-Scale Quantum devices. 50 to a few hundred or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  6. NPLSQ — Near-Perfect Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  7. FTSSQ — Fault-Tolerant Small-Scale Quantum devices. Under 50 or so logical qubits. Perfect computation, but insufficient for quantum advantage.
  8. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices. Start of quantum advantage. Good place to start post-NISQ devices. 50 to a few hundred or so logical qubits.
  9. FTLSQ — Fault-Tolerant Large-Scale Quantum devices. Production-scale quantum advantage. More than a few hundred or so to thousands or even millions of logical qubits.

NSSQ — Noisy Small-Scale Quantum devices

NISQ — Noisy Intermediate-Scale Quantum devices

NLSQ — Noisy Large-Scale Quantum devices

NPSSQ — Near-Perfect Small-Scale Quantum devices

NPISQ — Near-Perfect Intermediate-Scale Quantum devices

NPLSQ — Near-Perfect Large-Scale Quantum devices

FTSSQ — Fault-Tolerant Small-Scale Quantum devices

FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices

FTLSQ — Fault-Tolerant Large-Scale Quantum devices

What is post-NISQ?

  1. Achieving fault tolerance, or at least near-perfect qubits.
  2. Getting beyond a few hundred fault-tolerant or near-perfect qubits.
  1. NPISQ — Near-Perfect Intermediate-Scale Quantum devices.
  2. NPLSQ — Near-Perfect Large-Scale Quantum devices.
  3. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices.
  4. FTLSQ — Fault-Tolerant Large-Scale Quantum devices.
  1. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices — where quantum advantage starts.
  2. FTLSQ — Fault-Tolerant Large-Scale Quantum devices — where production-scale quantum advantage flourishes.

When will post-NISQ begin?

Post-noisy is a more accurate term than post-NISQ

  1. NPSSQ — Near-Perfect Small-Scale Quantum devices. Less than 50 or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  2. NPISQ — Near-Perfect Intermediate-Scale Quantum devices. 50 to a few hundred or so near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  3. NPLSQ — Near-Perfect Large-Scale Quantum devices. More than a few hundred or so to thousands or even millions of near-perfect qubits — with qubit reliability in the range 99.9% to 99.9999%.
  4. FTSSQ — Fault-Tolerant Small-Scale Quantum devices. Under 50 or so logical qubits. Perfect computation, but insufficient for quantum advantage.
  5. FTISQ — Fault-Tolerant Intermediate-Scale Quantum devices. Start of quantum advantage. Good place to start post-NISQ devices. 50 to a few hundred or so logical qubits.
  6. FTLSQ — Fault-Tolerant Large-Scale Quantum devices. Production-scale quantum advantage. More than a few hundred or so to thousands or even millions of logical qubits.

But for most uses post-NISQ will refer to post-noisy

  1. Getting past noisy qubits. To either near-perfect or fault tolerant qubits.
  2. True fault tolerance. With quantum error correction and logical qubits.
  3. Near-perfect is good enough. True fault tolerance is not needed.

Vendors need to publish roadmaps for quantum error correction

Vendors need to publish roadmaps for near-perfect qubits

Likely that 32-qubit machines can achieve near-perfect qubits for relatively short algorithms within a couple of years

Unlikely to achieve 32 logical qubits for at least five years

  1. 32 logical qubits require 32 * 65 = 2,080 physical qubits.
  2. 48 logical qubits require 48 * 65 = 3,120 physical qubits.
  3. 64 logical qubits require 64 * 65 = 4,160 physical qubits.
  4. 96 logical qubits require 96 * 65 = 6,240 physical qubits.
  5. 128 logical qubits require 128 * 65 = 8,320 physical qubits.
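The arithmetic above can be sketched directly, using the 65 physical qubits per logical qubit figure adopted later in this paper (the function name is illustrative):

```python
# Physical-qubit totals at 65 physical qubits per logical qubit,
# the heavy square code estimate used in this paper.
PHYSICAL_PER_LOGICAL = 65

def physical_qubits(logical_count, per_logical=PHYSICAL_PER_LOGICAL):
    """Total physical qubits needed for a given number of logical qubits."""
    return logical_count * per_logical

for n in (32, 48, 64, 96, 128):
    print(f"{n} logical qubits require {physical_qubits(n):,} physical qubits")
```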

Levels of qubit quality

  1. Extremely noisy. Not usable. But possible during the earliest stages of developing a new qubit technology. May be partially usable for testing and development, but not generally usable.
  2. Very noisy. Not very reliable. Need significant shot count to develop a statistical average for results. Barely usable.
  3. Moderately noisy. Okay for experimentation and okay if rerunning multiple times is needed. Not preferred, but workable.
  4. Modestly noisy. Frequently computes correctly. Occasionally needs to be rerun. Reasonably usable for NISQ prototyping, but not for production-scale real-world applications.
  5. Slightly noisy. Usually gives correct results. Very occasionally needs to be rerun.
  6. Near-perfect qubit. Just short of perfect qubit. Rare failures, but enough to spoil perfection.
  7. Perfect qubit. No detectable errors, or so infrequent to be unnoticeable by the vast majority of applications. Comparable to current classical computing, including ECC memory.
  8. Corrected logical qubit. Correct result at the logical level even though physical qubits are somewhat noisy. What level of quality is needed for physical qubits? Slightly or only modestly noisy is best. May or may not be possible with moderately noisy physical qubits.

Possible need for co-design to achieve optimal hardware design for quantum error correction

  • Co-design of quantum hardware and error-correcting codes

Top 10 questions

  1. When will quantum error correction and logical qubits be practical?
  2. How much will hardware have to advance before quantum error correction becomes practical?
  3. Will quantum error correction be truly 100% transparent to quantum algorithms and applications?
  4. How many physical qubits will be needed for each logical qubit?
  5. Does quantum error correction guarantee absolute 100% perfect qubits?
  6. Does quantum error correction guarantee infinite coherence?
  7. Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?
  8. Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?
  9. What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?
  10. How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?

Additional important questions

  1. Do we really need quantum error correction if we can achieve near-perfect qubits?
  2. Will qubits eventually become good enough that they don’t necessarily need quantum error correction?
  3. Which will win the race, quantum error correction or near-perfect qubits?
  4. When will logical qubits be ready to move beyond the laboratory curiosity stage of development?
  5. How close to perfect is a near-perfect qubit?
  6. How close to perfect must near-perfect qubits be to enable logical qubits?
  7. How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?
  8. When can we expect near-perfect qubits?
  9. Are perfect qubits possible?
  10. How close to perfect will logical qubits really be?
  11. But doesn’t IonQ claim to have perfect qubits?
  12. When can we expect logical qubits of various capacities?
  13. When can we expect even a single logical qubit?
  14. When can we expect 32 logical qubits?
  15. What is quantum error correction?
  16. What is a quantum error correcting code?
  17. Is NISQ a distraction and causing more harm than good?
  18. NISQ as a stepping stone to quantum error correction and logical qubits
  19. What is Rigetti doing about quantum error correction?
  20. Is it likely that large-scale logical qubits can be implemented using current technology?
  21. Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?
  22. What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?
  23. What do the wave functions of logical qubits look like?
  24. Are all of the physical qubits of a single logical qubit entangled together?
  25. How many wave functions are there for a single logical qubit?
  26. For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?
  27. What is the performance cost of quantum error correction?
  28. What is the performance of logical qubit gates and measurements relative to NISQ?
  29. How is a logical qubit initialized, to 0?
  30. What happens to connectivity under quantum error correction?
  31. How useful are logical qubits if still only weak connectivity?
  32. Are SWAP networks still needed under quantum error correction?
  33. How does a SWAP network work under quantum error correction?
  34. How efficient are SWAP networks for logical qubits?
  35. What are the technical risks for achieving logical qubits?
  36. How scalable is your quantum algorithm?
  37. How perfectly can a logical qubit match the probability amplitudes for a physical qubit?
  38. Can probability amplitudes of logical qubits ever be exactly 0.0 or 1.0 or is there some tiny, Planck-level epsilon?
  39. What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?
  40. Does the stability of a logical qubit imply greater precision or granularity of quantum state?
  41. Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?
  42. Do trapped-ion qubits need quantum error correction?
  43. Can simulation of even an ideal quantum computer be the same as an absolutely perfect classical quantum simulator since there may be some residual epsilon uncertainty down at the Planck level for even a perfect qubit?
  44. How small must single-qubit error (physical or logical) be before nobody will notice?
  45. What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?
  46. What is the impact of quantum error correction on granularity of phase and probability amplitude?
  47. What are the effects of quantum error correction on phase precision?
  48. What are the effects of quantum error correction on probability amplitude precision?
  49. What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?
  50. How are multi-qubit product states realized under quantum error correction?
  51. What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?
  52. At which stage(s) of the IBM quantum roadmap will logical qubits be operational?
  53. Does the Bloch sphere have any meaning or utility under quantum error correction?
  54. Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?
  55. Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?
  56. Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?
  57. Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?
  58. How are gates executed for all data qubits of a single logical qubit?
  59. How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?
  60. Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?
  61. How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?
  62. How exactly does quantum error correction actually address measurement errors?
  63. Does quantum error correction really protect against gate errors or even measurement errors?
  64. Will quantum error correction approaches vary based on the physical qubit technology?
  65. Is the quantum volume metric still valid for quantum error correction and logical qubits?
  66. Is the quantum volume metric relevant to perfect logical qubits?
  67. What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?
  68. Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?
  69. Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?
  70. How is quantum error correction not a violation of the no-cloning theorem?
  71. Is quantum error correction too much like magic?
  72. Who’s closest to real quantum error correction?
  73. Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?
  74. Are logical qubits guaranteed to have infinite coherence?
  75. What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?
  76. Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?
  77. Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?
  78. Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?
  79. Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?
  80. Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?
  81. What are the potential tradeoffs for quantum error correction and logical qubits?
  82. How severely does quantum error correction impact gate execution performance?
  83. How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?
  84. Are there other approaches to logical qubits than strict quantum error correction?
  85. How many logical qubits are needed to achieve quantum advantage for practical applications?
  86. Is it any accident that IBM’s latest machine has 65 qubits?

Kinds of questions and issues beyond the scope or depth of this paper

  1. Specific quantum error correction proposals.
  2. What is a surface code?
  3. Background on surface codes
  4. What is the Steane code?
  5. How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?
  6. What is magic state distillation?
  7. What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?
  8. What are typical values of d for a surface code?
  9. Is d = 5 really optimal for surface codes?
  10. What is a stabilizer qubit?
  11. What is a data qubit?
  12. What is a flag qubit?
  13. What is entanglement distillation?
  14. What is virtual distillation?
  15. What is topological quantum error correction?
  16. What is Shor code?
  17. What is the Bacon-Shor code?
  18. What is Reed-Muller code?
  19. What is quantum error mitigation?
  20. What is a gate error?
  21. What is a bit flip error?
  22. What is a phase flip error?
  23. What is a measurement error?

Top question #1: When will quantum error correction and logical qubits be practical?

Top question #2: How much will hardware have to advance before quantum error correction becomes practical?

Top question #3: Will quantum error correction be truly 100% transparent to quantum algorithms and applications?

Top question #4: How many physical qubits will be needed for each logical qubit?

Citations for various numbers of physical qubits per logical qubit

  1. Less than a dozen.
  2. A dozen or so.
  3. Dozens.
  4. Hundreds.
  5. Several thousand.
  6. Tens of thousands.
  7. A million.
  8. Millions.
  • It is unclear if anyone is seriously suggesting millions of physical qubits per single logical qubit. I vaguely recall references that seemed to suggest millions of physical qubits per logical qubit, but upon reflection, I strongly suspect that most of the intentions were millions of physical qubits for the entire algorithm, so more likely simply thousands of physical qubits per logical qubit.
  • A fully fault-tolerant quantum computer based on the surface code assuming realistic error rates is predicted to require millions of physical qubits.
    https://www.ncbi.nlm.nih.gov/books/NBK538709/
  • 3,231
    How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits
    https://arxiv.org/abs/1905.09749
    Total 20 million physical qubits = 6,190 logical qubits with 3,231 physical qubits per logical qubit.
    Logical qubits = 3 * 2048 + 0.002 * 2048 * lg(2048) = 6144 + 45.06 = 6189.06 = 6,190 logical qubits.
    Physical qubits per logical qubit = 20,000,000 / 6,190 = 3231.02 = 3,231 physical qubits per logical qubit.
  • 20,000
    Researchers think they can sidestep that problem if they can initialize all the qubits in their computer in particular “magic states” that, more or less, do half the work of the problematic gates. Unfortunately, still more qubits may be needed to produce those magic states. “If you want to perform something like Shor’s algorithm, probably 90% of the qubits would have to be dedicated to preparing these magic states,” Roffe says. So a full-fledged quantum computer, with 1000 logical qubits, might end up containing many millions of physical qubits.
    https://www.sciencemag.org/news/2020/07/biggest-flipping-challenge-quantum-computing
    Okay, that’s actually thousands rather than millions. Say, 20 million physical qubits and 1,000 logical qubits would require 20,000 physical qubits per logical qubit.
  • 14,500
    The number of physical qubits needed to define a logical qubit is strongly dependent on the error rate in the physical qubits. Error rates just below the threshold require larger numbers of physical qubits per logical qubit, while error rates substantially smaller than the threshold allow smaller numbers of physical qubits. Here we assume an error rate approximately one-tenth the threshold rate, which implies that we need about 14,500 physical qubits per logical qubit to give a sufficiently low logical error rate to successfully execute the algorithm.
    https://arxiv.org/abs/1208.0928
  • 1,000 to 10,000
    It takes a minimum of thirteen physical qubits to implement a single logical qubit. A reasonably fault-tolerant logical qubit that can be used effectively in a surface code takes of order 10³ to 10⁴ physical qubits.
    We find that nq increases rapidly as p approaches the threshold pth, so that a good target for the gate fidelity is above about 99.9% (p <~ 1/10³). In this case, a logical qubit will need to contain 10³ − 10⁴ physical qubits in order to achieve logical error rates below 1/10¹⁴ − 1/10¹⁵
    https://arxiv.org/abs/1208.0928
  • 1,000
    Although that may really mean “thousands” rather than only roughly 1,000.
    Day 1 opening keynote by Hartmut Neven (Google Quantum Summer Symposium 2020)
    https://www.youtube.com/watch?v=TJ6vBNEQReU&t=1231
    “A logical qubit is a collection of a thousand physical qubits.” — The exact 1,000 number was stated by Eric Lucero, Lead Quantum Mechanic of the Google Quantum AI team on April 14, 2022.
    Google Quantum AI Update 2022
    https://youtu.be/ZQpVRIhKusY?t=1418
  • None. I haven’t seen any references using hundreds of physical qubits for a single logical qubit.
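As a sanity check on the Gidney-Ekerå citation above, the paper's stated logical-qubit count of 3n + 0.002·n·lg(n) (lg being log base 2) can be reproduced in a few lines (the function name is illustrative):

```python
import math

# Logical-qubit count from Gidney & Ekera (arXiv:1905.09749):
# 3n + 0.002 * n * lg(n) logical qubits for an n-bit RSA integer.
def ge_logical_qubits(n_bits):
    return math.ceil(3 * n_bits + 0.002 * n_bits * math.log2(n_bits))

logical = ge_logical_qubits(2048)       # ~6,190 logical qubits
per_logical = 20_000_000 / logical      # ~3,231 physical qubits per logical qubit
print(logical, round(per_logical))
```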

Formulas from IBM paper for physical qubits per logical qubit

  1. Heavy hexagon code. 57 physical qubits per logical qubit.
  2. Heavy square code. 65 physical qubits per logical qubit.
  • (5 * d² - 2 * d - 1) / 2
  • (5 * 5² - 2 * 5 - 1) / 2
  • = (125 - 10 - 1) / 2
  • = 114 / 2
  • = 57 physical qubits.
  • There are d² data qubits
  • So for d = 5, 5² = 25
  • There are (d + 1) / 2 * (d - 1) syndrome measurement qubits.
  • So for d = 5, that’s 6 / 2 * 4 = 3 * 4 = 12 syndrome measurement qubits.
  • There are d * (d - 1) flag qubits.
  • So for d = 5, that’s 5 * 4 = 20 flag qubits.
  • Data qubits plus syndrome measurement qubits plus flag qubits.
  • 25 + 12 + 20
  • = 57 physical qubits.
  • d² data qubits.
  • 2 * d * (d - 1) flag and syndrome measurement qubits.
  • Total 3 * d² - 2 * d physical qubits per logical qubit.
  • 5² = 25 data qubits.
  • 2 * 5 * (5 - 1) = 10 * 4 = 40 flag and syndrome measurement qubits.
  • 25 + 40 = 65 total physical qubits per logical qubit.
  • 3 * 5² - 2 * 5
  • = 3 * 25 - 10
  • = 75 - 10
  • = 65 total physical qubits per logical qubit.
  1. Data = 5 * 5 = 25
  2. Syndrome (dark) = 5 * 4 = 20
  3. Flag (white) = 4 * 5 = 20
  4. Total 25 + 20 + 20 = 65 total physical qubits per logical qubit.
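The two formulas above can be sketched directly, as a function of code distance d (d = 5 in the text; function names are illustrative):

```python
# Qubit-count formulas quoted from the IBM paper, as a function of
# code distance d.
def heavy_hexagon_qubits(d):
    # d^2 data + (d + 1)/2 * (d - 1) syndrome + d * (d - 1) flag qubits
    return (5 * d**2 - 2 * d - 1) // 2

def heavy_square_qubits(d):
    # d^2 data + 2 * d * (d - 1) flag and syndrome qubits
    return 3 * d**2 - 2 * d

print(heavy_hexagon_qubits(5))  # 57 physical qubits per logical qubit
print(heavy_square_qubits(5))   # 65 physical qubits per logical qubit
```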

For now, 65 physical qubits per logical qubit is as good an estimate as any

Top question #5: Does quantum error correction guarantee absolute 100% perfect qubits?

Top question #6: Does quantum error correction guarantee infinite coherence?

Top question #7: Does quantum error correction guarantee to eliminate 100% of gate errors, or just a moderate improvement?

Top question #8: Does quantum error correction guarantee to eliminate 100% of measurement errors, or just a moderate improvement?

Top question #9: What degree of external, environmental interference can be readily and 100% corrected by quantum error correction?

Top question #10: How exactly does quantum error correction work for multiple, entangled qubits — multi-qubit product states?

Do we really need quantum error correction if we can achieve near-perfect qubits?

Will qubits eventually become good enough that they don’t necessarily need quantum error correction?

Which will win the race, quantum error correction or near-perfect qubits?

When will logical qubits be ready to move beyond the laboratory curiosity stage of development?

How close to perfect is a near-perfect qubit?

  1. To enable quantum error correction for logical qubits.
  2. To enable applications using raw physical qubits on NISQ devices.
  1. Shallow depth circuits will require fewer nines.
  2. Deeper circuits will require more nines.

How close to perfect must near-perfect qubits be to enable logical qubits?

  1. Fewer physical qubits which are closer to perfection (more nines) are required for each logical qubit.
  2. More physical qubits which are further from perfection (fewer nines) are required for each logical qubit.
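The tradeoff in the two bullets above can be illustrated with a commonly cited rough surface-code approximation, p_logical ~ 0.1 * (p / p_th)^((d + 1) / 2), where p is the physical error rate and p_th ~ 1% is the threshold. The constants and function names here are illustrative assumptions from the surface code literature, not figures for any particular device:

```python
# Rough illustration: more nines of physical qubit fidelity means a
# smaller code distance, and hence fewer physical qubits per logical qubit.
# Approximation assumed: p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2),
# with threshold p_th ~ 1e-2. Illustrative constants only.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2) -> float:
    return 0.1 * (p / p_th) ** ((d + 1) / 2)

def distance_needed(p: float, target: float) -> int:
    """Smallest odd code distance d meeting the target logical error rate.
    Assumes p is below threshold (otherwise no distance suffices)."""
    d = 3
    while logical_error_rate(p, d) > target:
        d += 2
    return d

for p in (1e-3, 1e-4):  # three nines vs. four nines of physical fidelity
    d = distance_needed(p, target=1e-12)
    print(f"p = {p:g}: distance {d}, about {d * d} data qubits per logical qubit")
```

Under these assumed constants, a ten-fold better physical error rate roughly halves the required code distance, and the data-qubit count falls quadratically with it.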

How close to perfect must near-perfect qubits be to enable logical qubits for 2-qubit gates?

When can we expect near-perfect qubits?

Are perfect qubits possible?

How close to perfect will logical qubits really be?

But doesn’t IonQ claim to have perfect qubits?

  • The system smashes all previous records with 32 perfect qubits with gate errors low enough to feature a quantum volume of at least 4,000,000. Getting down to the technology brass tacks: the hardware features perfect atomic clock qubits and random access all-to-all gate operations, allowing for efficient software compilation of a variety of applications.
  • https://ionq.com/posts/october-01-2020-most-powerful-quantum-computer

When can we expect logical qubits of various capacities?

  1. 5 logical qubits — basic demonstration of logical qubits
  2. 8
  3. 12
  4. 16
  5. 20
  6. 24 — demonstrate some realistic algorithms
  7. 28
  8. 32 — where it starts to get interesting
  9. 40 — where I think we need to get to as a major milestone
  10. 48 — where algorithms will start to impress people
  11. 54 — the edge of quantum advantage
  12. 64 — quantum advantage for sure
  13. 72 — starting to impress people on quantum advantage
  14. 80
  15. 92
  16. 100 — beginning of really impressive results
  17. 256 — maybe the gold standard for results to establish quantum supremacy for real-world applications
  18. 512
  19. 1024
  20. 2K
  21. 4K — potential for Shor’s algorithm for 1K public encryption keys
  22. 8K — potential for Shor’s algorithm for 2K public encryption keys
  23. 16K — potential for Shor’s algorithm for 4K public encryption keys
  24. 32K
  25. 64K
  26. 256K
  27. 1M — unclear what people might actually use 1M qubits for
  28. 10M

When can we expect even a single logical qubit?

When can we expect 32 logical qubits?

  1. Not in the next two years.
  2. Possibly within three to five years.
  3. Certainly within seven to ten years.

What is quantum error correction?

What is a quantum error correcting code?

Is NISQ a distraction and causing more harm than good?

NISQ as a stepping stone to quantum error correction and logical qubits

What is Rigetti doing about quantum error correction?

  1. Demonstrate practical improvements in quantum processor performance through error mitigation, error detection and error correction.
  2. Develop programming tools and compiler based optimization frameworks for mitigating quantum hardware errors on near-term machines.
  3. Establish hardware and software requirements for practical fault-tolerant codes to enable robust intermediate scale machines.
  4. Implement tests of small logical qubits and fault-tolerant codes.
  5. Collaborate with quantum hardware engineers on system benchmarking and error analysis.
  6. Collaborate with application scientists on quantum circuit construction and error analysis.
  7. Organize and lead directed research programs with external partners in academia, industry, and national labs.
  1. Experience in one or more of the following: digital quantum error mitigation, quantum error correction, quantum fault-tolerance, classical or quantum channel coding.
  2. Experience in implementing error correction on hardware or collaborating with experimentalists or electrical engineers.

Is it likely that large-scale logical qubits can be implemented using current technology?

  1. Much simpler qubits. So they can be much smaller. So that many more can be placed on a single chip.
  2. Modular design. So a significant number of chips can be daisy-chained or arranged in a grid, or stacked, or some other form of modular qubit interconnection.
  3. Dramatically improved connectivity. Maybe true, full any-to-any connectivity is too much to ask for, but some solution other than tedious, slow, and error-prone SWAP networks.
  4. Other. Who knows what other criteria. Beyond the scope of this informal paper.

Is quantum error correction fixed for a particular quantum computer or selectable and configurable for each algorithm or application?

What parameters or configuration settings should algorithm designers and application developers be able to tune for logical qubits?

What do the wave functions of logical qubits look like?

Are all of the physical qubits of a single logical qubit entangled together?

How many wave functions are there for a single logical qubit?

For a Hadamard transform of n qubits to generate 2^n simultaneous (product) states, how exactly are logical qubits handling all of those product states?

What is the performance cost of quantum error correction?

  1. Execution speed for quantum logic gates. Each logic gate on a logical qubit must be expanded to a large number of gate operations on a large number of physical qubits. Details unknown at this time.
  2. Number of physical qubits needed to implement each logical qubit. This impacts system capacity for a given amount of chip real estate, which reduces the capacity of qubits available to solve application problems.

What is the performance of logical qubit gates and measurements relative to NISQ?

How is a logical qubit initialized, to 0?

What happens to connectivity under quantum error correction?

  1. How does connectivity scale under quantum error correction?
  2. How do SWAP networks scale under quantum error correction?
  3. What is the performance of SWAP networks under quantum error correction? This may be the general question of the impact of quantum error correction on gate performance, as well as any additional issues peculiar to SWAP.
  4. How well does connectivity scale for more than a million logical qubits? Or scaling in general for the use of SWAP networks to achieve connectivity.
  5. Will quantum error correction provide 100% error-free full any-to-any connectivity, even if it does still require SWAP networks?
  6. Can SWAP networks be automatically implemented down in the firmware so that algorithms and applications can presume full any-to-any connectivity, with no downside or excessive cost?

How useful are logical qubits if still only weak connectivity?

Are SWAP networks still needed under quantum error correction?

How does a SWAP network work under quantum error correction?

How efficient are SWAP networks for logical qubits?

What are the technical risks for achieving logical qubits?

  1. Basic theory. Is it really sound? How can we know?
  2. Evolution of basic theory. Always newer and better ideas appearing and evolving. Risks for sticking with current approach vs. switching to a newer, unproven approach.
  3. Achieving near-perfect qubits with sufficient nines.
  4. Firmware for gate operations. Particularly attempting many operations in parallel or sequencing rapidly enough.
  5. Performance.
  6. Granularity maintained. For probability amplitude and phase.

How perfectly can a logical qubit match the probability amplitudes for a physical qubit?

  1. They very closely match the probability amplitudes for a physical qubit.
  2. They are somewhat less accurate than the probability amplitudes for a physical qubit.
  3. They are somewhat more accurate than the probability amplitudes for a physical qubit.

Can the probabilities derived from probability amplitudes of logical qubits ever be exactly 0.0 or 1.0, or is there some tiny, Planck-level epsilon?

What is the precision or granularity of probability amplitudes and phase of the product states of entangled logical qubits?

  1. Do they increase, reduce, or stay exactly the same?
  2. Does stability of values of entangled qubits improve for logical qubits at the cost of the precision and granularity of probability amplitudes and phase, or is stability free with no impact on the precision and granularity of probability amplitudes and phase?

Does the stability of a logical qubit imply greater precision or granularity of quantum state?

  1. Precision and granularity of probability amplitudes and phase is unchanged.
  2. It is better — greater precision and finer granularity.
  3. It is worse — less precision and coarser granularity.

Is there a proposal for quantum error correction for trapped-ion qubits, or are surface code and other approaches focused on the specific peculiarities of superconducting transmon qubits?

Do trapped-ion qubits need quantum error correction?

Can even an ideal quantum computer match an absolutely perfect classical quantum simulator, since there may be some residual epsilon uncertainty down at the Planck level for even a perfect qubit?

How small must single-qubit error (physical or logical) be before nobody will notice?

What is the impact of quantum error correction on quantum phase estimation (QPE) and quantum Fourier transform (QFT)?

What is the impact of quantum error correction on granularity of phase and probability amplitude?

  • Gradations of phase
  • Phase gradations
  • Granularity of phase
  • Phase granularity
  • Precision of phase
  • Phase precision
  • Gradations of probability amplitude
  • Probability amplitude gradations
  • Granularity of probability amplitude
  • Probability amplitude granularity
  • Precision of probability amplitude
  • Probability amplitude precision
  1. Will quantum error correction (QEC) have any impact on phase granularity or limits on phase in general? Hard to say.
  2. Will a logical qubit support more gradations of phase, or fewer gradations, or the same?
  3. Will there be a dramatic increase in phase precision, possibly exponential, based on the number of physical qubits per logical qubit?
  4. Or will it be more of a least common denominator for all of the physical qubits which comprise a logical qubit?
  5. The theoreticians owe us an answer.
  6. And then it’s up to the engineers to build and deliver hardware which fulfills the promises of the theoreticians.

What are the effects of quantum error correction on phase precision?

What are the effects of quantum error correction on probability amplitude precision?

What is the impact of quantum error correction on probability amplitudes of multi-qubit entangled product states?

  • Doesn’t the redundancy and correction need to be across all qubits of the multi-qubit computational basis state, not simply within a single logical qubit?

How are multi-qubit product states realized under quantum error correction?

What is the impact of quantum error correction on probability amplitudes of Bell, GHZ, and W states?

At which stage(s) of the IBM quantum roadmap will logical qubits be operational?

  1. What is the earliest stage at which even a single logical qubit will be demonstrated?
  2. What is the earliest stage at which two qubits and a two-qubit logical quantum logic gate will be demonstrated?
  3. What is the earliest stage at which five logical qubits will be demonstrated?
  4. What is the earliest stage at which eight logical qubits will be demonstrated?
  5. What is the earliest stage at which 16 logical qubits will be demonstrated?
  6. What is the earliest stage at which 24 logical qubits will be demonstrated?
  7. What is the earliest stage at which 32 logical qubits will be demonstrated?
  8. What is the earliest stage at which 40 logical qubits will be demonstrated?
  9. What is the earliest stage at which 48 logical qubits will be demonstrated?
  10. What is the earliest stage at which 54 logical qubits will be demonstrated?
  11. What is the earliest stage at which 64 logical qubits will be demonstrated?
  12. What is the earliest stage at which 72 logical qubits will be demonstrated?
  13. What is the earliest stage at which 96 logical qubits will be demonstrated?
  14. What is the earliest stage at which 128 logical qubits will be demonstrated?
  15. What is the earliest stage at which 256 logical qubits will be demonstrated?
  16. What is the earliest stage at which 1024 logical qubits will be demonstrated?
  17. At which stage will the number of qubits switch to being primarily measured as logical qubits rather than physical qubits? I think that in the current roadmap all numbers are for physical qubits.
  18. What will be the target or target range for the number of physical qubits per logical qubit for the various stages in the roadmap?
  19. What will be the default and/or recommended target for physical qubits per logical qubit?
  20. Will algorithms and applications be able to select and configure the number of physical qubits per logical qubit?

Does the Bloch sphere have any meaning or utility under quantum error correction?

Is there a prospect of a poor man’s quantum error correction, short of perfection but close enough?

Is quantum error correction all or nothing or varying degrees or levels of correctness and cost?

  1. Absolutely guarantees 100% error-free operation.
  2. Only achieves some small but non-zero error rate.
  3. The error rate is tunable based on how many physical qubits you wish to allocate for each logical qubit.
  1. Fast and low cost. But only a modest to moderate improvement in error rate and coherence.
  2. Modestly slower and modestly more expensive. But with a significant improvement in error rate and coherence.
  3. Moderately slower and moderately more expensive. But with a more dramatic reduction in error rate and extension of coherence.
  4. Much slower and much more expensive. But with perfect or virtually perfect qubits — virtually no errors and virtually no decoherence.

Will we need classical quantum simulators beyond 50 qubits once we have true error-corrected logical qubits?

  1. Amplitude amplification.
  2. Quantum phase estimation (QPE).
  3. Quantum Fourier transform (QFT).

Do we really need logical qubits before we have algorithms which can exploit 40 to 60 qubits to achieve true quantum advantage for practical real-world problems?

How are gates executed for all data qubits of a single logical qubit?

  1. Are the physical qubits operated on fully in parallel?
  2. Can each physical qubit be operated on serially?
  3. Or maybe in parallel one row or one column of the lattice at a time?
  4. Or, if physical data qubits are entangled, can operating on one change them all?

How are 2-qubit (or 3-qubit) gates executed for non-nearest neighbor physical qubits?

Can we leave NISQ behind as soon as we get quantum error correction and logical qubits?

How exactly does quantum error correction actually address gate errors — since they have more to do with external factors outside of the qubit?

How exactly does quantum error correction actually address measurement errors?

Does quantum error correction really protect against gate errors or even measurement errors?

Will quantum error correction approaches vary based on the physical qubit technology?

Is the quantum volume metric still valid for quantum error correction and logical qubits?

Is the quantum volume metric relevant to perfect logical qubits?

What will it mean, from a practical perspective, once quantum error correction and logical qubits arrive?

  1. Shor’s factoring algorithm requires many thousands of logical qubits.
  2. Quantum phase estimation and quantum Fourier transform would be very beneficial to many applications, such as quantum computational chemistry, but not until a significant number of logical qubits are supported, at least in the dozens. And even then algorithm designers will need to shift from hybrid variational approaches to the use of phase estimation.
  3. There isn’t much on the shelf in terms of 30 to 40-qubit algorithms.

Which algorithms, applications, and application categories will most immediately benefit the most from quantum error correction and logical qubits?

Which algorithms, applications or classes of algorithms and applications are in most critical need of logical qubits?

How is quantum error correction not a violation of the no-cloning theorem?

Is quantum error correction too much like magic?

Who’s closest to real quantum error correction?

Does quantum error correction necessarily mean that the qubit will have a very long or even infinite coherence?

Are logical qubits guaranteed to have infinite coherence?

What is the specific mechanism of quantum error correction that causes longer coherence — since decoherence is not an “error” per se?

Is there a cost associated with quantum error correction extending coherence or is it actually free and a side effect of basic error correction?

Is there a possible tradeoff, that various degrees of coherence extension have different resource requirements?

Could a more modest degree of coherence extension be provided significantly more cheaply than full, infinite coherence extension?

Will evolution of quantum error correction over time incrementally reduce errors and increase precision and coherence, or is it an all or nothing proposition?

Does quantum error correction imply that the overall QPU is any less noisy, or just that logical qubits mitigate that noise?

  1. External noise and interference. From outside of the individual qubit.
  2. Internal noise. Within the qubit itself.

What are the potential tradeoffs for quantum error correction and logical qubits?

  1. Increased gate execution time.
  2. Slower measurement time.
  3. Slower qubit initialization time.
  4. Slower SPAM time. State preparation and measurement.
  5. Fewer usable qubits — number of physical qubits used for each logical qubit.
  6. Residual error rate vs. number of qubits usable by the application.

How severely does quantum error correction impact gate execution performance?

How does the performance hit on gate execution scale based on the number of physical qubits per logical qubit?

Are there other approaches to logical qubits than strict quantum error correction?

  1. Perfect qubits. The ideal, but believed to be impractical. Maybe not absolute perfection with a true zero error rate, but at least such an incredibly tiny error rate that almost nobody would ever notice.
  2. Near-perfect qubits. An error rate which may still be significant for some applications, but is not significant for many or even most applications. And needed as the prerequisite for quantum error correction anyway.

How many logical qubits are needed to achieve quantum advantage for practical applications?

  1. 20 = one million
  2. 30 = one billion
  3. 40 = one trillion
  4. 50 = one quadrillion = one million billions
  5. 60 = one quintillion = one billion billions
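These capacity estimates are simply powers of two, since n qubits can represent 2^n simultaneous quantum states:

```python
# Number of simultaneous quantum states for n qubits: 2**n.
names = {20: "million", 30: "billion", 40: "trillion",
         50: "quadrillion", 60: "quintillion"}
for n, name in names.items():
    print(f"2**{n} = {2**n:,} ~ one {name}")
```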

Is it any accident that IBM’s latest machine has 65 qubits?

  1. Initialization of a logical qubit. And reset as well.
  2. All of the Bloch sphere rotations of a single isolated qubit. Elimination of gate errors.
  3. Extended coherence. Well beyond the coherence time of a physical qubit.
  4. Measurement. Elimination of measurement errors.

What is a surface code?

Background on surface codes

What is the Steane code?

How might quantum tomography, quantum state tomography, quantum process tomography, and matrix product state tomography relate to quantum error correction and measurement?

What is magic state distillation?

What error threshold or logical error rate is needed to achieve acceptable quality quantum error correction for logical qubit results?

Code distance d is the square root of physical (data) qubits per logical qubit in a surface code

  • A distance-d surface code has one logical qubit and n = d² physical qubits located at sites of a square lattice of size d × d with open boundary conditions
  1. Correcting coherent errors with surface codes
  2. Sergey Bravyi, Matthias Englbrecht, Robert Koenig, Nolan Peard
  3. https://arxiv.org/abs/1710.02270 (2017)
  4. https://www.nature.com/articles/s41534-018-0106-y (2018)

What are typical values of d for a surface code?

  1. d = 2, d = 4, … — all even values of d are excluded. I don’t know exactly why, but presumably it’s because a distance-d code can correct only floor((d - 1) / 2) errors, so an even d corrects no more errors than the next smaller odd d.
  2. d = 3 — excluded (skipped) “because of strong finite-size effects.” Again, unclear what that’s really all about. You can read the paper.
  3. d = 5 — the smallest practical value. Requires 25 physical (data) qubits.
  4. d = 7 — requires 49 physical (data) qubits.
  5. d = 9 — requires 81 physical (data) qubits.
  6. d = 19 — requires 361 physical (data) qubits.
  7. d = 25 — requires 625 physical (data) qubits.
  8. d = 29 — requires 841 physical (data) qubits.
  9. d = 37 — requires 1,369 physical (data) qubits. Fairly low error rate.
  10. d = 49 — requires 2,401 physical (data) qubits. Diminishing returns.
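The data-qubit counts above, along with the number of arbitrary single-qubit errors a distance-d code can correct (floor((d - 1) / 2)), can be tabulated directly:

```python
# A distance-d surface code has d**2 physical data qubits and corrects
# up to floor((d - 1) / 2) arbitrary single-qubit errors.
for d in (5, 7, 9, 19, 25, 29, 37, 49):
    print(f"d = {d:2}: {d**2:5,} data qubits, corrects {(d - 1) // 2} errors")
```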

Is d = 5 really optimal for surface codes?

Prospects for logical qubits

  1. Level of technical risk. This is the first and foremost issue, now and for the foreseeable future.
  2. Level of effort to achieve. Beyond the bullet list of technical hurdles, how many people and how much money will be required?
  3. Timeframe. Unknown and strictly speculative, although various parties are now talking more about roadmaps and milestones, which wasn’t true just two years ago.
  4. Capacity in what timeframe. Tee shirt sizes — S, M, L, XL, XXL, XXXL. Specific qubit targets will come once we achieve the smallest sizes — 1–5, 8, 10, 12. L and XL may achieve quantum advantage.

Google and IBM have factored quantum error correction into the designs of their recent machines

  • Hardware-aware approach for fault-tolerant quantum computation
  • Although we are currently in an era of quantum computers with tens of noisy qubits, it is likely that a decisive, practical quantum advantage can only be achieved with a scalable, fault-tolerant, error-corrected quantum computer. Therefore, development of quantum error correction is one of the central themes of the next five to ten years.
  • the surface code is the most famous candidate for near-term demonstrations (as well as mid- to long-term applications) on a two-dimensional quantum computer chip. The surface code naturally requires a two-dimensional square lattice of qubits, where each qubit is coupled to four neighbors.
  • we developed two new classes of codes: subsystem codes called heavy-hexagon codes implemented on a heavy-hexagon lattice, and heavy-square surface codes implemented on a heavy-square lattice.
  • The IBM team is currently implementing these codes on the new quantum devices.
  • Guanyu Zhu and Andrew Cross
  • https://www.ibm.com/blogs/research/2020/09/hardware-aware-quantum/

NISQ simulators vs. post-NISQ simulators

Need for a paper showing how logical qubit gates work on physical qubits

Need detailed elaboration of basic logical qubit logic gate execution

  1. Initialize all qubits to 0. Are all physical qubits set to 0 or are some 1? What pattern of initialization is performed. Can all logical qubits be initialized simultaneously, in parallel, or is some sequencing required?
  2. Initialize a qubit to 1. After all qubits are initialized to 0, execute an X gate on a qubit to flip it from 0 to 1.
  3. Flip a qubit. Same as 2, but in an unknown state, not 0 per se.
  4. Hadamard gate. To see superposition.
  5. Reverse a Hadamard gate. H gate to create superposition, second H gate to restore to original state before superposition.
  6. Bell state. To entangle two logical qubits.
  7. Measurement. After each of the cases above.
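The cases above can be made concrete by executing them on ideal two-qubit state vectors in plain Python. The helpers (x, h, cnot) are my own minimal sketch of ideal gate semantics; this shows only the reference behavior any logical-qubit implementation of these gates must reproduce, not the error correction machinery itself:

```python
from math import isclose, sqrt

H = 1 / sqrt(2)

def x(state, q):
    """Pauli-X (bit flip) on qubit q of a 2-qubit state [a00, a01, a10, a11]."""
    out = state[:]
    for i in range(4):
        out[i ^ (1 << q)] = state[i]
    return out

def h(state, q):
    """Hadamard on qubit q: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)."""
    out = [0.0] * 4
    for i in range(4):
        sign = -1.0 if i & (1 << q) else 1.0
        out[i] += sign * H * state[i]
        out[i ^ (1 << q)] += H * state[i]
    return out

def cnot(state, control, target):
    """CNOT: flip the target bit wherever the control bit is 1."""
    out = state[:]
    for i in range(4):
        if i & (1 << control):
            out[i ^ (1 << target)] = state[i]
    return out

zero = [1.0, 0.0, 0.0, 0.0]     # case 1: both qubits initialized to |00>
flipped = x(zero, 0)            # cases 2/3: X gate flips qubit 0 to 1
undone = h(h(zero, 0), 0)       # cases 4/5: H creates superposition, H undoes it
bell = cnot(h(zero, 0), 0, 1)   # case 6: Bell state (|00> + |11>)/sqrt(2)

assert flipped == [0.0, 1.0, 0.0, 0.0]
assert isclose(undone[0], 1.0)
assert isclose(bell[0], bell[3]) and isclose(bell[0] ** 2, 0.5)
```

Measurement (case 7) would then sample each computational basis state with probability equal to its squared amplitude.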

Need animation of what happens between the physical qubits during correction

Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits

Near-perfect physical qubits may be sufficient to achieve the ENIAC moment for niche applications

Likely need logical qubits to achieve the FORTRAN moment

Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction

  1. Achieve logical qubits as quickly as possible.
  2. Maximize logical qubits for a given number of physical qubits. Achieve a low enough error rate for physical qubits so that only a modest number of physical qubits are needed for each logical qubit.

Readers should suggest dates for various hardware and application milestones

  • Some may happen quickly.
  • Some may take a long time to get started, but then happen quickly.
  • Some may happen slowly over time.
  • Some may happen slowly initially but then begin to accelerate after some time has passed.
  • Computational chemistry. Using quantum phase estimation and quantum Fourier transform.
  • Shor’s algorithm for factoring very large semiprime numbers such as public encryption keys. Quantum Fourier transform is needed.

Call for applications to plant stakes at various logical qubit milestones

Reasonable postures to take on quantum error correction and logical qubits

  1. Quantum error correction and logical qubits are coming real soon. No, they’re not coming in the next two years.
  2. It will take more than ten years before we see production-scale quantum error correction and logical qubits. No, it won’t take that long.
  1. Quantum error correction and logical qubits will never happen.
  1. Coming relatively soon — within a small number of years, but not the next two (or maybe three) years.
  2. Not happening for quite a few years. Could be 5–7 years, at least for production-scale.
  3. Not happening in the next two years.
  4. Not happening for another 7–10 years. Especially for larger production-scale.
  5. May require at least five years of hardware evolution. Hopefully less, but five years is not unreasonable.

Hardware fabrication challenges are the critical near-term driver, not algorithms

Need to prioritize basic research in algorithm design

Need for algorithms to be scalable

Need for algorithms which are provably scalable

How scalable is your quantum algorithm?

  1. Simulate an ideal quantum computer on classical hardware. Targeting 24 to 44 qubits.
  2. Run on NISQ hardware. Targeting 4 to 32 qubits. Can compare results to simulation.
  3. Run on much larger post-NISQ hardware with quantum error correction and logical qubits. So large that no classical simulation is possible. Target 4 to 44 qubits for simulation validation, but beyond 44 qubits will not be validated on classical hardware.

Classical simulation is not possible for post-NISQ algorithms and applications

  1. Unable to classically simulate such large configurations.
  2. Presumption that results are valid since they were valid between the hardware and simulator for 4 to 44 qubits.
  3. Need for automated tools to examine an algorithm and mathematically prove that if it works for 4 to 44 qubits on a simulator or real hardware, then it will work correctly for more than 44 qubits on real hardware. Proof that the algorithm is scalable.
  4. Especially tricky to prove scalability of algorithms which rely on fine granularity of phase and probability amplitude. But it’s essential. Plenty of basic research is needed.
  5. Need benchmark algorithms whose results can be quickly validated. Need to be able to test and validate the hardware itself.
  6. Algorithms and applications whose results cannot be rapidly validated are risky although they may be high-value.

Quantum error correction does not eliminate the probabilistic nature of quantum computing

Shot count (circuit repetitions) is still needed even with error-free logical qubits — to develop probabilistic expectation values

Use shot count (circuit repetitions) for mission-critical applications on the off chance of once in a blue moon errors
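A quick sketch of why shots are still needed: even a perfectly error-free qubit in an equal superposition measures 0 or 1 at random, and the statistical error of an estimated probability shrinks only as roughly 1/sqrt(shots). The simulation below assumes an ideal, noiseless qubit:

```python
# Sampling an ideal qubit with a 50/50 superposition: the estimate of the
# underlying probability converges only as the shot count grows.
import random

random.seed(42)  # fixed seed for reproducibility

def estimate_p(shots: int, p_true: float = 0.5) -> float:
    """Estimate the probability of measuring 1 from repeated ideal shots."""
    ones = sum(random.random() < p_true for _ in range(shots))
    return ones / shots

for shots in (100, 10_000, 1_000_000):
    print(f"{shots:>9} shots: estimate {estimate_p(shots):.4f} (true 0.5)")
```

Quantum error correction removes gate and decoherence errors, not this sampling noise, so shot counts remain a first-class consideration for algorithm designers.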

We need nicknames for logical qubit and physical qubit

Competing approaches to quantum error correction will continue to evolve even after initial implementations become available

I care about the effects and any side effects or collateral effects that may be visible in algorithm results or visible to applications

  1. Performance.
  2. Cost — total cost and cost per logical qubit.
  3. Capacity — physical qubits are a scarce capacity so they impact capacity of logical qubits.
  4. Absolute impact on error rates.
  5. Guidance for shot count (circuit repetitions).
  6. Impact on granularity of phase.
  7. Impact on granularity of probability amplitude.

Need for a much higher-level programming model

  1. Integers
  2. Real numbers, floating point
  3. Booleans — logical true and false — binary, but not necessarily implemented as a single bit
  4. Text, strings, characters, character codes
  5. Structures, objects
  6. Arrays, trees, maps, graphs
  7. Media — audio, video
  8. Structured data
  9. Semi-structured data

What Caltech Prof. John Preskill has to say about quantum error correction

  1. In the near term, noise mitigation without full-blown quantum error correction.
  2. Lower quantum gate error rates will lower the overhead cost of quantum error correction, and also extend the reach of quantum algorithms which do not use error correction.
  3. Progress toward fault-tolerant QC must continue to be a high priority for quantum technologists.
  4. Quantum error correction (QEC) will be essential for solving some hard problems.

Getting beyond the hype

I know I’m way ahead of the game, but that’s what I do, and what interests me

Conclusions

  1. We definitely need quantum error correction and logical qubits, urgently.
  2. We don’t have it and it’s not coming soon. Its arrival is not imminent.
  3. It’s an active area of research — nowhere close to being ready for prime-time production-scale practical real-world applications. Much more research money is needed. Much more creativity is needed.
  4. It’s not clear which qubit technology will prevail for achieving fault-tolerant quantum computing, quantum error correction, and logical qubits.
  5. Twin progressions are needed — research on quantum error correction and logical qubits and improvements to physical qubits.
  6. It’s a real race — quantum error correction and logical qubits vs. near-perfect qubits and the outcome is unclear.
  7. Near-perfect qubits are of value in their own right, even without quantum error correction.
  8. And research into advanced algorithms exploiting 24 to 40 logical qubits is needed. Including scalability and the ability to validate and prove scalability to support algorithms beyond 40 qubits which can no longer be tested and validated on classical quantum simulators.
  9. Plenty of open questions and issues.
  10. Lots of patience is required.

What’s next?

  1. Monitor research and papers as they are published: refinements in quantum error correction approaches, new approaches, and approaches suited to near-term hardware.
  2. Monitor vendor activity and advances: advances in hardware which can enable cost-effective quantum error correction, as well as refinements in quantum error correction approaches which can work with real hardware.
  3. Monitor algorithms which can actually exploit, and indeed require, quantum error correction.
  4. Monitor advanced, tentative, experimental hardware.
  5. Lots of patience.
  6. A deeper dive into quantum error correction itself, including the underlying theory.
  7. Consider posting the introduction and nutshell sections as a standalone, briefer paper for people without the patience to read this full paper.
  8. Consider posting the bibliography and references as a standalone paper.

Glossary

References and bibliography

  1. 1995 — Scheme for reducing decoherence in quantum computer memory — Shor
    https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.R2493
  2. 1996 — Fault-tolerant quantum computation — Shor
    https://arxiv.org/abs/quant-ph/9605011, https://dl.acm.org/doi/10.5555/874062.875509
  3. 1996 — Multiple Particle Interference and Quantum Error Correction — Steane
    https://arxiv.org/abs/quant-ph/9601029
  4. 1997 — Fault-tolerant quantum computation by anyons — Kitaev
    https://arxiv.org/abs/quant-ph/9707021, https://www.sciencedirect.com/science/article/abs/pii/S0003491602000180
  5. 1997 — Fault-tolerant quantum computation — Preskill
    https://arxiv.org/abs/quant-ph/9712048
  6. 1997 — Fault Tolerant Quantum Computation with Constant Error — Aharonov and Ben-Or
    https://arxiv.org/abs/quant-ph/9611025, https://dl.acm.org/doi/10.1145/258533.258579
  7. 1998 — Quantum codes on a lattice with boundary — Bravyi and Kitaev
    https://arxiv.org/abs/quant-ph/9811052v1
  8. 2004 — Universal Quantum Computation with ideal Clifford gates and noisy ancillas — Bravyi and Kitaev
    https://arxiv.org/abs/quant-ph/0403025
  9. 2005 — Operator Quantum Error Correcting Subsystems for Self-Correcting Quantum Memories — Bacon
    https://arxiv.org/abs/quant-ph/0506023, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.73.012340
  10. 2006 — A Tutorial on Quantum Error Correction — Steane
    https://www2.physics.ox.ac.uk/sites/default/files/ErrorCorrectionSteane06.pdf
  11. 2007 — Fault-tolerant quantum computation with high threshold in two dimensions — Raussendorf and Harrington
    https://arxiv.org/abs/quant-ph/0610082, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.190504
  12. 2007 — Optimal Resources for Topological 2D Stabilizer Codes: Comparative Study — Bombin and Martin-Delgado
    https://arxiv.org/abs/quant-ph/0703272v1, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.012305
  13. 2010 — Quantum Computation and Quantum Information: 10th Anniversary Edition
    Michael Nielsen and Isaac Chuang (“Mike & Ike”)
    Chapter 10 — Quantum Error-Correction (Shor code)
    https://www.amazon.com/Quantum-Computation-Information-10th-Anniversary/dp/1107002176
    https://en.wikipedia.org/wiki/Quantum_Computation_and_Quantum_Information
  14. 2012 — Surface codes: Towards practical large-scale quantum computation — Fowler, Mariantoni, Martinis, Cleland
    https://arxiv.org/abs/1208.0928, https://journals.aps.org/pra/abstract/10.1103/PhysRevA.86.032324
  15. 2013 — Implementing a strand of a scalable fault-tolerant quantum computing fabric — Chow, Gambetta, Steffen, et al
    https://arxiv.org/abs/1311.6330, https://www.nature.com/articles/ncomms5015
  16. 2014 — Dealing with errors in quantum computing — Chow
    https://www.ibm.com/blogs/research/2014/06/dealing-with-errors-in-quantum-computing/
  17. 2014 — Logic gates at the surface code threshold: Superconducting qubits poised for fault-tolerant quantum computing — Barends, Martinis, et al
    https://arxiv.org/abs/1402.4848, https://www.nature.com/articles/nature13171
  18. 2015 — Demonstration of a quantum error detection code using a square lattice of four superconducting qubits — Córcoles, Magesan, Srinivasan, Cross, Steffen, Gambetta, Chow
    https://www.nature.com/articles/ncomms7979
  19. 2015 — Building logical qubits in a superconducting quantum computing system — Gambetta, Chow, Steffen
    https://arxiv.org/abs/1510.04375, https://www.nature.com/articles/s41534-016-0004-0
  20. 2016 — Overhead analysis of universal concatenated quantum codes — Chamberland, Jochym-O’Connor, Laflamme
    https://arxiv.org/abs/1609.07497
  21. 2017/2018 — Correcting coherent errors with surface codes — Bravyi, Englbrecht, Koenig, and Peard
    https://arxiv.org/abs/1710.02270, https://www.nature.com/articles/s41534-018-0106-y
  22. 2018 — Quantum Computing with Noisy Qubits — Sheldon. In: National Academy of Engineering. Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2018 Symposium. Washington (DC): National Academies Press (US); 2019 Jan 28.
    https://www.ncbi.nlm.nih.gov/books/NBK538709/
  23. 2019 — Topological and subsystem codes on low-degree graphs with flag qubits — Chamberland, Zhu, Yoder, Hertzberg, Cross
    https://arxiv.org/abs/1907.09528, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.011022
  24. 2020 — Hardware-aware approach for fault-tolerant quantum computation — Zhu and Cross
    https://www.ibm.com/blogs/research/2020/09/hardware-aware-quantum/
  25. 2020 — Day 1 opening keynote by Hartmut Neven (Google Quantum Summer Symposium 2020)
    Current Google research status and roadmap for quantum error correction.
    https://www.youtube.com/watch?v=TJ6vBNEQReU&t=1231
  26. 2020 — Fault-Tolerant Operation of a Quantum Error-Correction Code — Egan, Monroe, et al
    https://arxiv.org/abs/2009.11482
  27. 2020 — Machine learning of noise-resilient quantum circuits — Cincio, Rudinger, Sarovar, and Coles
    https://arxiv.org/abs/2007.01210

Some interesting notes

  1. The literature on surface codes is somewhat opaque.
    https://arxiv.org/abs/1208.0928
  2. The tolerance of surface codes to errors, with a per-operation error rate as high as about 1% [22, 23], is far less stringent than that of other quantum computational approaches. For example, calculations of error tolerances of the Steane and Bacon-Shor codes, implemented on two-dimensional lattices with nearest-neighbor coupling, find per-step thresholds of about 2 × 10⁻⁵ [33, 34], thus requiring three orders of magnitude lower error rates than the surface code.
    https://arxiv.org/abs/1208.0928
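As a quick sanity check on the "three orders of magnitude" figure quoted above, a trivial calculation (the variable names and the two representative threshold values are taken from the quoted note):

```python
import math

# Thresholds quoted in the note above (representative values only):
surface_code_threshold = 1e-2       # ~1% error per operation
steane_bacon_shor_threshold = 2e-5  # ~2 x 10^-5 error per step

ratio = surface_code_threshold / steane_bacon_shor_threshold
print(f"Surface code tolerates roughly {ratio:.0f}x higher error rates,")
print(f"about {math.log10(ratio):.1f} orders of magnitude.")
```

The ratio works out to roughly 500, i.e. between two and three orders of magnitude, consistent with the paper's "three orders of magnitude" characterization.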

Jack Krupansky
Freelance Consultant