Notes on IBM’s September 14, 2022 Paper on the Future of Quantum Computing (with Superconducting Qubits)

Jack Krupansky
34 min read · Oct 5, 2022


IBM recently published a paper on the future of quantum computing (with superconducting qubits). Actually, it’s the future of quantum computing (with superconducting qubits) at IBM. It is based in part on information that IBM disclosed when they posted their updated quantum roadmap back in May 2022, but with a lot more detail on the technology itself, especially the hardware, but the software as well. This informal paper will give some highlights and my comments on IBM’s recent paper, but not a full summary or a full detailing of my thoughts, since my previous writing covered much of that when I reviewed their 2022 roadmap update in August 2022.

I won’t delve into IBM’s 2022 roadmap update here, but I have already reviewed it and given my summary and comments on it:

IBM’s recent paper doesn’t go into all of the specific hardware systems and milestone dates, but does delve deeper into architectural issues and even some technical details. Here’s their paper:

The abstract of their paper:

  • For the first time in history, we are seeing a branching point in computing paradigms with the emergence of quantum processing units (QPUs). Extracting the full potential of computation and realizing quantum algorithms with a super-polynomial speedup will most likely require major advances in quantum error correction technology. Meanwhile, achieving a computational advantage in the near term may be possible by combining multiple QPUs through circuit knitting techniques, improving the quality of solutions through error suppression and mitigation, and focusing on heuristic versions of quantum algorithms with asymptotic speedups. For this to happen, the performance of quantum computing hardware needs to improve and software needs to seamlessly integrate quantum and classical processors together to form a new architecture that we are calling quantum-centric supercomputing. Long term, we see hardware that exploits qubit connectivity in higher than 2D topologies to realize more efficient quantum error correcting codes, modular architectures for scaling QPUs and parallelizing workloads, and software that evolves to make the intricacies of the technology invisible to the users and realize the goal of ubiquitous, frictionless quantum computing.

Top highlights

Briefly, here are a few of the top highlights I gleaned from IBM’s paper — beyond what I’ve already written about after their quantum roadmap update in May:

  1. “on-chip non-local couplers”. The ability to perform two-qubit operations on qubits which are not adjacent to each other, without requiring a SWAP network. This is interesting and was not shown or mentioned in the roadmap back in May. It is billed as being focused on error correction, so it’s not clear whether or not it can also be used for two-qubit gates in general for qubits which are not nearest neighbors. It’s also not clear when it will be introduced — will Osprey, Heron, Condor, or Flamingo have it?
  2. “hardware that exploits qubit connectivity in higher than 2D topologies to realize more efficient quantum error correcting codes”. Making the same point as above. Additional circuit layers are needed to achieve non-local qubit connectivity. See the abstract above as well.
  3. “split a single QPU into multiple virtual QPUs to allow classical parallelization of quantum workloads”. Sounds too good to be true — will the multiple virtual QPUs really all run in parallel, or will the classical control logic have to bounce between them with only one running at any moment? Also not clear when this feature will be introduced.
  4. The error correction story continues to evolve. The paper provides more detail than the roadmap update in May. It’s not clear how much of what is in the paper is further evolution or simply deeper detail of the story told in the roadmap update in May. The bottom line is that the story has gotten very complicated, and full, automatic, and transparent quantum error correction is still or even further off in the distant future. In the meantime, a mish-mash variety of ad hoc techniques are being proposed to mitigate errors. It’s a mixed bag — the good news is that error rates can be reduced, but the bad news is that the reduction will not be fully automatic.
  5. “improvements in hardware to increase the fidelity of QPUs to 99.99% or higher”. Four nines of qubit fidelity, which I refer to as near-perfect qubits. Not clear when they will achieve this, although Jay Gambetta is quoted in a Physics World interview dated September 20, 2022 as saying “I think 99.99% wouldn’t be impossible by the end of next year.” That would be by the end of 2023, when the Condor and Heron processors are due out — according to the quantum roadmap update in May.
  6. “a quantum processing unit (QPU) with two-qubit gate fidelity of 99.99% can implement circuits with a few thousand gates to a fair degree of reliability without resorting to error correction”. My sentiments exactly, that near-perfect qubits with four nines of qubit fidelity could enable 32 to 40-qubit quantum algorithms without necessarily any need for any form of quantum error correction or even quantum error mitigation.
  7. “near-term quantum advantage should be possible by exploring less expensive, possibly heuristic versions of the algorithm considered. Those heuristic quantum algorithms lack rigorous performance guarantees, but they may be able to certify the quality of a solution a posteriori and offer a way to tackle problems that cannot be simulated classically.” Yes, it’s possible, but… it would be a real shame if that’s the best we can do a few years from now.
  8. IBM is still not giving us a firm sense or even a rough sense of what the end point for their quantum error correction efforts will be — will logical qubits be absolutely perfect with absolutely no errors, or will there be some residual error? Based on what I know and can gather, there likely will be some residual error — the question is what magnitude it will have. Will the residual error be one in a million, one in a billion, one in a trillion, one in a quadrillion, or what? Will it be a fixed residual error rate, or can users tune the configuration to trade off performance, capacity, and error rate? IBM should endeavor to provide us with an error correction end point story which sets some sort of expectation, even if still somewhat rough. And we need some sense of how that story might be expected to evolve as the hardware and architecture evolves. Even if expectations can’t be set precisely, we at least need an approximate placeholder for expectations.
  9. Technically, this IBM paper focuses only on superconducting qubits, and doesn’t necessarily apply to other qubit technologies. Indeed, what is the future of trapped ion, neutral atom, silicon spin, topological, or other qubit technologies?

Highlights and comments on IBM’s paper

I’m not attempting to cover all of the high points and my comments on this paper, especially if I’ve already done so in my informal paper on the IBM 2022 roadmap update, which I posted in August. The emphasis here is on fresh highlights and fresh comments specific to the new IBM paper. I present snippets from the paper in italics, followed by my own comments without italics.

  1. achieving a computational advantage in the near term may be possible by combining multiple QPUs through circuit knitting techniques, improving the quality of solutions through error suppression and mitigation, and focusing on heuristic versions of quantum algorithms with asymptotic speedups.
    How to achieve near-term quantum advantage without full quantum error correction (QEC).
  2. Long term, we see hardware that exploits qubit connectivity in higher than 2D topologies to realize more efficient quantum error correcting codes, modular architectures for scaling QPUs and parallelizing workloads, and software that evolves to make the intricacies of the technology invisible to the users and realize the goal of ubiquitous, frictionless quantum computing.
    This is simultaneously very good news and bad news. The very good news is that IBM is acknowledging that qubit connectivity better than nearest-neighbor is highly desirable and even necessary, but the bad news is that they make it sound as if it is not a general feature but reserved for error correction alone. Whether that latter caveat is true remains to be seen. I suspect that IBM is emphasizing the enhanced qubit connectivity for the limited distance needed for the physical qubits which comprise each logical qubit. Whether IBM really does intend such a limit is, as I just noted, indeterminate, for the moment.
  3. the number of qubits required to realize error-corrected quantum circuits solving classically hard problems exceeds the size of systems available today by several orders of magnitude.
    Seems to presume 1,000 physical qubits for each logical qubit — three orders of magnitude.
  4. a quantum processing unit (QPU) with two-qubit gate fidelity of 99.99% can implement circuits with a few thousand gates to a fair degree of reliability without resorting to error correction.
    My sentiments exactly, that near-perfect qubits with four nines of qubit fidelity could enable 32 to 40-qubit quantum algorithms without necessarily any need for any form of quantum error correction or even quantum error mitigation.
  5. the first demonstrations of a computational quantum advantage — where a computational task of business or scientific relevance can be performed more efficiently, cost-effectively, or accurately using a quantum computer than with classical computations alone — may be achieved without or with limited error correction.
    My sentiments exactly, what I refer to as The ENIAC Moment.
  6. a computational task of business or scientific relevance
    A reasonable approach to keep the focus on meaningful applications and not mere oddball computer science experiments.
  7. weak noise regime
    Basically what I call a near-perfect qubit — a rather low error rate.
  8. how to improve the efficiency of quantum error-correction schemes and use error correction more sparingly.
    The challenges IBM is attempting to address with regard to error correction. To me this says that they had some reservations about their previous approach to quantum error correction and are now taking a new tack on the problem.
  9. two-dimensional qubit connectivity
    I think this refers to what I call nearest-neighbor connectivity — you can only connect to adjacent qubits. The alternative requires “breaking the plane” — “i.e., routing microwave control and readout lines to qubits in the center of dense arrays” using additional circuit layers to achieve non-local connectivity.
  10. For problem sizes of practical interest, error correction increases the size of quantum circuits by nearly six orders of magnitude, making it prohibitively expensive for near-term QPUs (see Section II A).
    Yikes!! Six orders of magnitude is 1,000,000X — one million times. I suspect that this relates in part to the need for massive SWAP networks to compensate for lack of full direct qubit connectivity.
  11. quantum error mitigation and circuit knitting. These techniques extend the size of quantum circuits that can be executed reliably on a given QPU without resorting to error correction.
    Focus on avoiding full quantum error correction as long as possible.
  12. Circuit knitting techniques exploit structural properties of the simulated system, such as geometric locality, to decompose a large quantum circuit into smaller sub-circuits or combine solutions produced by multiple QPUs.
    Good to have a slightly more detailed technical characterization than in the roadmap.
  13. The classical simulation algorithms used in computational physics or chemistry are often heuristics and work well in practice, even though they do not offer rigorous performance guarantees. Thus, it is natural to ask whether rigorous quantum algorithms designed for simulating time evolution admit less expensive heuristic versions that are more amenable to near-term QPUs.
    That’s a mixed bag. Sure, some people will be thrilled to have something, anything, that can run on current hardware, but what’s the point if there is no robust performance advantage? We (and IBM) should be more focused on achieving a greater performance benefit on the hardware that will become available in a few years, three to five years out, than on near-term hardware.
  14. how to improve the efficiency of quantum error-correction schemes and use error correction more sparingly. … we discuss generalizations of the surface code known as low-density parity check (LDPC) quantum codes. These codes can pack many more logical qubits into a given number of physical qubits such that, as the size of quantum circuits grows, only a constant fraction of physical qubits is devoted to error correction (see Section II A for details). These more efficient codes need long-range connections between qubits embedded in a two-dimensional grid, but the efficiency benefits are expected to outweigh the long-range connectivity costs.
    IBM shifting to a new approach to error correction. And needing greater qubit connectivity to do it. That was one of my big concerns even with the old way of doing error correction — the need for SWAP networks to couple qubits which are not nearest neighbors.
  15. We need classical integration at real-time to enable conditioning quantum circuits on classical computations (dynamic circuits), at near-time to enable error mitigation and eventually error correction, and at compile time to enable circuit knitting and advanced compiling.
    I’m still kind of disappointed that error correction is not being fully handled at the hardware or FPGA level and is still expected to require software, and at only near-time at that.
  16. we introduce a series of schemes — which we denote m, l, c, and t couplers — that give us the amount of flexibility needed for realizing LDPC codes, scaling QPUs, and enabling workflows that take advantage of local operations and classical communication (LOCC) and parallelization.
    The new c coupler — on-chip non-local coupler — is interesting and was not shown or mentioned in the roadmap back in May. It is billed as being focused on error correction, so it’s not clear whether or not it can also be used for two-qubit gates for qubits which are not nearest neighbors. It’s also not clear when it will be introduced — will Osprey, Heron, Condor, or Flamingo have it?
  17. TABLE I. Types of modularity in a long-term scalable quantum system
    p — Real-time classical communication for Classical parallelization of QPUs
    m — Short range, high speed, chip-to-chip for Extend effective size of QPUs
    l — Meter-range, microwave, cryogenic for Escape I/O bottlenecks, enabling multi-QPUs
    c — On-chip non-local couplers for Non-planar error-correcting code
    t — Optical, room-temperature links for Ad-hoc quantum networking
    Again, the c on-chip non-local coupler is new since the roadmap.
  18. we can define a cluster-like architecture that we call quantum-centric supercomputer. It consists of many quantum computation nodes comprised of classical computers, control electronics, and QPUs. A quantum runtime can be executed on a quantum-centric supercomputer, working in the cloud or other classical computers to run many quantum runtimes in parallel. Here we propose that a serverless model should be used so that developers can focus on code and do not have to manage the underlying infrastructure.
    A concise description of their overall architectural view.
  19. Hardware advances will raise the bar of quantum computers’ size and fidelity. Theory and software advances will lower the bar for implementing algorithms and enable new capabilities. As both bars converge in the next few years, we will start seeing the first practical benefits of quantum computation.
    A concise description of the convergence of hardware, software, and algorithms.
  20. As the problem size n grows, the more favorable scaling of the quantum runtime quickly compensates for a relatively high cost and slowness of quantum gates compared with their classical counterparts.
    An important fact to keep in mind. Every algorithm designer should be cognizant of the crossover point where a quantum algorithm does achieve an actual advantage over classical solutions.
  21. These exponential or, formally speaking, super-polynomial speedups are fascinating from a purely theoretical standpoint and provide a compelling practical reason for advancing quantum technologies.
    A necessary reminder of the proper terminology — we’re looking for super-polynomial speedups, not technically exponential speedups.
  22. (We leave aside speedups obtained in the so-called Quantum RAM model, for although it appears to be more powerful than the standard quantum circuit model, it is unclear whether a Quantum RAM can be efficiently implemented in any real physical system.)
    Interesting theoretical point, but asserting that it doesn’t matter.
  23. about 10⁷ CNOT gates (and a comparable number of single-qubit gates) are needed. This exceeds the size of quantum circuits demonstrated experimentally to date by several orders of magnitude. As we move from simple spin chain models to more practically relevant Hamiltonians, the gate count required to achieve quantum advantage increases dramatically. For example, simulating the active space of molecules involved in catalysis problems may require about 10¹¹ Toffoli gates. The only viable path to reliably implementing circuits with 10⁷ gates or more on noisy quantum hardware is quantum error correction.
    A stark reminder that even some relatively simple problems in physics and quantum computational chemistry can require vast numbers of gates — tens of millions, even billions and hundreds of billions, and that only with full quantum error correction can such gate counts be accommodated.
  24. More generally, a code may have k logical qubits encoded into n physical qubits and the code distance d quantifies how many physical qubits need to be corrupted before the logical (encoded) state is destroyed. Thus good codes have a large distance d and a large encoding rate k/n.
    Concise description of what quantum error correction is trying to achieve. (A small encoding-rate sketch appears after this list.)
  25. Stabilizer-type codes are by far the most studied and promising code family. A stabilizer code is defined by a list of commuting multi-qubit Pauli observables called stabilizers such that logical states are +1 eigenvectors of each stabilizer. One can view stabilizers as quantum analogues of classical parity checks. Syndrome measurements aim to identify stabilizers whose eigenvalue has flipped due to errors. The eigenvalue of each stabilizer is repeatedly measured and the result — known as the error syndrome — is sent to a classical decoding algorithm. Assuming that the number of faulty qubits and gates is sufficiently small, the error syndrome provides enough information to identify the error (modulo stabilizers). The decoder can then output the operation that needs to be applied to recover the original logical state.
    Reasonably concise description of stabilizer codes for quantum error correction. (A toy sketch of the classical analogue, the three-bit repetition code, appears after this list.)
  26. The syndrome measurement circuit for a quantum LDPC code requires a qubit connectivity dictated by the structure of stabilizers, i.e., one must be able to couple qubits that participate in the same stabilizer. Known examples of LDPC codes with a single-shot error correction require 3D or 4D geometry.
    Pointing out that additional chip layers are needed to achieve the qubit connectivity needed to implement efficient quantum error correction.
  27. The large overhead associated with logical non-Clifford gates may rule out the near-term implementation of error-corrected quantum circuits, even if fully functioning logical qubits based on the surface code become available soon.
    A gentle reminder that even if we could implement logical qubits, implementing gates to operate on pairs of those logical qubits is extremely challenging — and not currently within our reach.
  28. A recent breakthrough result by Benjamin Brown shows how to realize a logical non-Clifford gate CCZ (controlled-controlled-Z) in the 2D surface code architecture without resorting to state distillation. This approach relies on the observation that a 3D version of the surface code enables an easy (transversal) implementation of a logical CCZ and a clever embedding of the 3D surface code into a 2+1 dimensional space-time. It remains to be seen whether this method is competitive compared with magic state distillation.
    In short, research continues in quantum error correction. We may still be more than a few years from a practical implementation that delivers on the full promise.
  29. Although error correction is vital for realizing large-scale quantum algorithms with great computational power, it may be overkill for small or medium size computations. A limited form of correction for shallow quantum circuits can be achieved by combining the outcomes of multiple noisy quantum experiments in a way that cancels the contribution of noise to the quantity of interest. These methods, collectively known as error mitigation, are well suited for the QPUs available today because they introduce little to no overhead in terms of the number of qubits and only a minor overhead in terms of extra gates.
    Full, transparent, and automatic quantum error correction is the preferred, long-term solution, but manual error mitigation is more practical in the near term.
  30. However, error mitigation comes at the cost of an increased number of circuits (experiments) that need to be executed. In general, this will result in an exponential overhead; however, the base of the exponent can be made close to one with improvements in hardware and control methods, and each experiment can be run in parallel.
    So, despite its near-term practicality, manual error mitigation has its downsides as well. Also, this argues for having a much greater number of quantum processors, so all of these extra experiments can indeed be run in parallel, rather than sequentially. (A small sketch of how quickly that sampling overhead grows appears after this list.)
  31. Furthermore, known error mitigation methods apply only to a restricted class of quantum algorithms that use the output state of a quantum circuit to estimate the expected value of observables.
    And manual error mitigation won’t be so effective in all use cases.
  32. Probabilistic error cancellation (PEC) aims to approximate an ideal quantum circuit via a weighted sum of noisy circuits that can be implemented on a given quantum computer. The weights assigned to each noisy circuit can be computed analytically if the noise in the system is sufficiently well characterized or learned by mitigating errors on a training set of circuits that can be efficiently simulated classically. We expect that the adoption of PEC will grow due to the recent theoretical and experimental advances in quantum noise metrology.
    Another newer approach to dealing with errors. But still at the research stage.
  33. We can also measure the quantity of interest at several different values of the noise rate and perform an extrapolation to the zero-noise limit. This method cancels the leading-order noise contribution as long as the noise is weak and Markovian. Unlike PEC, this method is biased and heuristic but may require fewer circuits for the reconstruction. This method was recently demonstrated to scale up to 27 qubits and still reconstruct observables. Whether this method can be combined with PEC, which gives an unbiased estimation, remains an open question.
    Yet another alternative approach to error mitigation. Again, more research is needed. And it may not work in all use cases. (A toy extrapolation sketch appears after this list.)
  34. More general (non-Markovian) noise can be mitigated using the virtual distillation technique. … virtual distillation can quadratically suppress the contributions of errors. However, this method introduces at least a factor of two overhead in the number of qubits and gates.
    Yet another alternative approach to error mitigation. Again, more research is needed. And it may not work or be optimal in all use cases.
  35. We anticipate error mitigation to continue to be relevant when error-corrected QPUs with a hundred or more logical qubits become available.
    Plenty of caveats. Again, more research is needed.
  36. This leads to the interesting possibility of combining error correction and mitigation. A concrete proposal by Piveteau, et al. leverages the ability to realize noisy logical T-gates with fidelity comparable to or exceeding that of physical (unencoded) gates. Applying error mitigation protocols at the logical level to cancel errors introduced by noisy T-gates enables one to simulate universal logical circuits without resorting to state distillation. This may considerably reduce the hardware requirements for achieving a quantum advantage. However, error mitigation comes at the cost of an increased number of circuit executions.
    Tantalizing prospects, but caveats as well. Again, more research is needed.
  37. We can extend the scope of near-term hardware to compensate for other shortcomings such as a limited number of qubits or qubit connectivity by using circuit knitting techniques. This refers to the process of simulating small quantum circuits on a quantum computer and stitching their results into an estimation of the outcome of a larger quantum circuit. As was the case with error mitigation, known circuit knitting techniques apply to a restricted class of quantum algorithms that aim to estimate the expected value of observables. … circuit cutting … entanglement forging … quantum embedding.
    Circuit knitting has some potential, but caveats as well. Again, more research is needed. And it may not work or be optimal in all use cases. It may be a reasonable tool to have in the toolkit for extreme cases, but it would be a shame if too many quantum algorithm designers and quantum application developers need to exert extra effort to resort to it with any frequency.
  38. Heuristic quantum algorithms can be employed near-term to solve classical optimization, machine learning, and quantum simulation problems. These fall into two categories — algorithms that use kernel methods and variational quantum algorithms (VQA).
    Again, tantalizing prospects, but caveats as well. Again, more research is needed. And again, it may be a reasonable tool to have in the toolkit for extreme cases, but it would be a shame if too many quantum algorithm designers and quantum application developers need to exert extra effort to resort to it with any frequency.
  39. At present, there is no mathematical proof that VQA can outperform classical algorithms in any task. In fact, it is known that VQA based on sufficiently shallow (constant depth) variational circuits with 2D or 3D qubit connectivity can be efficiently simulated on a classical computer. This rules out a quantum advantage. Meanwhile, the performance of VQA based on deep variational circuits is severely degraded by noise.
    VQA is generally a short-term stopgap measure and not a sure path to quantum advantage for the long term.
  40. However, as the error rates of QPUs decrease, we should be able to execute VQA in the intermediate regime where quantum circuits are already hard to simulate classically but the effect of noise can still be mitigated.
    So, VQA might indeed have some value eventually, but only for some use cases, and only once the hardware advances enough to sufficiently reduce the error rate — to a fidelity of around 99.99% (four nines, what I call a near-perfect qubit) — but that’s not today or in the very near term. Once again, tantalizing, but with caveats.
  41. Variational quantum time evolution (VarQTE) algorithms, pioneered by Li and Benjamin, could be an alternative to simulate the time evolution of these classically hard instances given near-term noisy QPUs. … The fact that VarQTE algorithms are heuristics and therefore lack rigorous performance guarantees raises the question of how to validate them. This becomes particularly important for large problem sizes where verifying a solution of the problem on a classical computer becomes impractical.
    Once again, tantalizing, but with caveats. And they may not work or be optimal in all use cases.
  42. although VarQTE lacks a rigorous justification, one may be able to obtain a posteriori bounds on its approximation error for some specific problems of practical interest.
    Once again, tantalizing, but with caveats. And won’t work or be optimal in all use cases.
  43. the only known way to realize large-scale quantum algorithms relies on quantum error-correcting codes. The existing techniques based on the surface code are not satisfactory due to their poor encoding rate and high cost of logical non-Clifford gates. Addressing these shortcomings may require advances in quantum coding theory such as developing high-threshold fault-tolerant protocols based on quantum LDPC codes and improving the qubit connectivity of QPUs beyond the 2D lattice. Supplementing error correction with cheaper alternatives such as error mitigation and circuit knitting may provide a more scalable way of implementing high-fidelity quantum circuits.
    Nice summary of the problem of dealing with errors. And emphasizes the need for further research.
  44. near-term quantum advantage should be possible by exploring less expensive, possibly heuristic versions of the algorithm considered. Those heuristic quantum algorithms lack rigorous performance guarantees, but they may be able to certify the quality of a solution a posteriori and offer a way to tackle problems that cannot be simulated classically.
    Yes, it’s possible, but… it would be a real shame if that’s the best we can do a few years from now.
  45. We believe these general guidelines define the future of quantum computing theory and will guide us to important demonstrations of its benefits for the solution of scientifically important problems in the next few years.
    Some encouraging words, but with a lot of caveats behind them.
  46. We believe there will be near-term advantage using a mixture of error mitigation, circuit knitting and heuristic algorithms. On a longer time frame, partially error-corrected systems will become critical to running more advanced applications and further down the line, fault-tolerant systems running on not-as-yet fully explored LDPC codes with non-local checks will be key.
    Fair summary of IBM’s overall strategy on the algorithm front.
  47. we need hardware with more qubits capable of higher fidelity operations.
    Amen! I place higher emphasis on the need for higher fidelity.
  48. We need tight integration of fast classical computation to handle the high run-rates of circuits needed for error mitigation and circuit knitting, and the classical overhead of the error correction algorithm afterwards. This drives us to identify a hardware path that starts with the early heuristic small quantum circuits and grows until reaching an error-corrected computer.
    A fair summary of IBM’s overall hardware architectural goals? Not really, in my view, but that’s the way they chose to put it.
  49. The first step in this path is to build systems able to demonstrate near-term advantage with error mitigation and limited forms of error correction. Just a few years ago, QPU sizes were limited by control electronics cost and availability, I/O space, quality of control software, and a problem referred to as “breaking the plane”, i.e., routing microwave control and readout lines to qubits in the center of dense arrays. Today, solutions to these direct barriers to scaling have been demonstrated, which has allowed us to lift qubit counts beyond 100 — above the threshold where quantum systems become intractably difficult to simulate classically and examples of quantum advantage become possible. The next major milestones are (1) increasing the fidelity of QPUs enough to allow exploration of quantum circuits for near-term quantum advantage with limited error correction and (2) improving qubit connectivity beyond 2D — either through modified gates, sparse connections with non-trivial topologies, and/or increasing the number of layers for quantum signals in 3D integration — to enable the longer term exploration of efficient non-2D LDPC error-correction codes. These developments are both required for our longer term vision, but can be pursued in parallel.
    Good summary of the steps IBM needs to take — enhancing qubit fidelity and qubit connectivity, although they aren’t clear whether the enhanced qubit connectivity will be general any-to-any connectivity or just focused on error correction for logical qubits.
  50. Scaling to larger systems also involves scaling classical control hardware and the input/output (I/O) chain in and out of the cryostat. This I/O chain, while still needing substantial customization for the exact QPU being controlled, consists of high volumes of somewhat more conventional devices; for example, isolators, amplifiers, scaled signal delivery systems, and more exotic replacements such as non-ferrite isolators and quantum limited amplifiers that may offer performance, cost, or size improvements. These components have enormous potential for being shared between various groups pursuing quantum computing, and in some instances can be purchased commercially already. However, assembling these systems at the scale required today, let alone a few years time, requires a high volume cryogenic test capability that does not currently exist in the quantum ecosystem, creating a short-term need for vertically-integrated manufacturing of quantum systems. The challenge here is establishing a vendor and test ecosystem capable of scaled, low-cost production — a challenge made difficult by the fact that the demand is somewhat speculative.
    The design and manufacturing challenges of scaling up to ever-larger quantum computing systems.
  51. Currently, each group building large QPUs has their own bespoke control hardware. Given the radically different control paradigms and requirements, it is unlikely that the analog front-ends of these systems could ever be shared. However, there is a common need for sequencing logic (branching, local and non-local conditionals, looping) at low-cost and low-power for all types of quantum computers, not just solid-state. These will likely need to be built into a custom processor — an Application Specific Integrated Circuit or ASIC — as we scale to thousands of qubits and beyond. On top of this, the software that translates a quantum circuit into the low-level representation of this control hardware is becoming increasingly complex and expensive to produce. Reducing cost favors a common control platform with customized analog front ends. Open-specification control protocols like OpenQASM3 are already paving the way for this transformation.
    Expressing a desire to reduce the cost of developing new quantum computer systems, especially as size scales up.
  52. Reaching near-term quantum advantage will require taking advantage of techniques like circuit knitting and error mitigation that effectively stretch the capabilities of QPUs — trading off additional circuit executions to emulate more qubits or higher fidelities. These problems can be pleasingly parallel, where individual circuits can execute totally independently on multiple QPUs, or may benefit from the ability to perform classical communication between these circuits that span multiple QPUs.
    Classical parallelization of quantum processors.
  53. control hardware that is able to run multiple QPUs as if they were a single QPU with shared classical logic
    I think this means they could execute multiple shots of the same circuit in parallel, but that’s not completely clear.
  54. split a single QPU into multiple virtual QPUs to allow classical parallelization of quantum workloads
    Sounds too good to be true — will the multiple virtual QPUs really all run in parallel, or will the classical control logic have to bounce between them with only one running at any moment (time-slicing)? Clarification is needed.
  55. I won’t discuss modularity here — it’s important, but was covered extensively in the original roadmap back in May 2022, although there may be some additional detail in this IBM paper that I overlooked.
  56. a very long-range optical “quantum network” t to allow nearby QPUs to work together as a single quantum computational node (QCN) … t type modularity involves microwave-to-optical transduction to link QPUs in different dilution refrigerators.
    In addition to linking multiple QPUs within a single dilution refrigerator (called dense modularity, m), QPUs can also be linked between separate dilution refrigerators (called transduction modularity, t).
  57. on-chip non-local couplers c for LDPC codes
    Although it says that these on-chip non-local couplers are intended to enable LDPC codes (low-density parity check), it’s not clear if that is their only use or whether these non-local couplers work to connect any two qubits on a chip, which would enable true any-to-any connectivity. Clarification is needed on this point.
  58. The technologies that enable both dense modularity and long-range couplers, once developed and optimized, will ultimately be ported back into the qubit chip to enable non-local, non-2D connectivity. These on-chip nonlocal c couplers will ultimately allow implementation of high-rate LDPC codes, bringing our long-term visions to completion.
    Once again, this is a tantalizing question — whether these long-range couplers are intended only for error correction within the chip or also support full any-to-any connectivity within the chip. Clarification is needed on this point.
  59. With these four forms of modularity, we can redefine “scale” for a quantum system by
    n = ([(q × m) × l] × t) × p
    where
    n is the number of qubits in the entire modular and parallelized quantum system. The system is comprised of QPUs made from m chips, each QPU having q × m qubits. The QPUs can be connected with l t quantum channels (quantum parallelization), with l of them being microwave connections and t optical connections. Finally, to enable things like circuit cutting and speeding up error mitigation, each of these multi-chip QPUs can support classical communication, allowing p classical parallelizations.
    The microwave connections are between QPUs within a single dilution refrigerator, while the optical connections are between QPUs in separate dilution refrigerators. (A tiny arithmetic sketch of this scale expression appears after this list.)
  60. A practical quantum computer will likely feature all five types of modularity — classical parallelization, dense chip-to-chip extension of 2D lattices of qubits (m), sparse connections with non-trivial topology within a dilution refrigerator (l), non-local on-chip couplings for error correction (c), and long-range fridge-to-fridge quantum networking (t) (Table I).
    A good summary of the five types of modularity.
  61. TABLE I. Types of modularity in a long-term scalable quantum system
    p — Real-time classical communication for Classical parallelization of QPUs
    m — Short range, high speed, chip-to-chip for Extend effective size of QPUs
    l — Meter-range, microwave, cryogenic for Escape I/O bottlenecks, enabling multi-QPUs
    c — On-chip non-local couplers for Non-planar error-correcting code
    t — Optical, room-temperature links for Ad-hoc quantum networking
    Again, the c on-chip non-local coupler is new since the roadmap.
  62. Performing calculations on a system like this with multiple tiers of connectivity is still a matter of research and development
    A gentle reminder that a lot of research is still needed.
  63. While the jury is still out on module size and other hardware details, what is certain is that the utility of any quantum computer is determined by its ability to solve useful problems with a quantum advantage while its adoption relies on the former plus our ability to separate its use from the intricacies of its hardware and physics-level operation. Ultimately, the power provided by the hardware is accessed through software that must enable flexible, easy, intuitive programming of the machines.
    Reminder that both hardware and software are critical, and that ultimately it is quantum advantage for useful problems that really matters.
  64. Quantum computing is not going to replace classical computing but rather become an essential part of it. We see the future of computing being a quantum-centric supercomputer where QPUs, CPUs, and GPUs all work together to accelerate computations. In integrating classical and quantum computation, it is important to identify (1) latency, (2) parallelism (both quantum and classical), and (3) what instructions should be run on quantum vs. classical processors. These points define different layers of classical and quantum integration.
    Simple description of IBM’s model of integration of classical and quantum computing, which they call a quantum-centric supercomputer.
  65. A quantum circuit is a computational routine consisting of coherent quantum operations on quantum data, such as qubits, and concurrent (or real-time) classical computation. It is an ordered sequence of quantum gates, measurements, and resets, which may be conditioned on and use data from the real-time classical computation. If it contains conditioned operations, we refer to it as a dynamic circuit. It can be represented at different levels of detail, from defining abstract unitary operations down to setting the precise timing and scheduling of physical operations.
    This is IBM’s latest definition for a quantum circuit. Kind of complicated, but they’re trying to capture the notion of dynamic circuits.
  66. With this extended quantum circuit definition, it is possible to define a software stack. Fig. 6 shows a high level view of the stack, where we have defined four important layers: dynamic circuits, quantum runtime, quantum serverless, and software applications. At the lowest level, the software needs to focus on executing the circuit. At this level, the circuit is represented by controller binaries that will be very dependent on the superconducting qubit hardware, supported conditional operations and logic, and the control electronics used. It will require control hardware that can move data with low latency between different components while maintaining tight synchronization.
    Introducing the quantum software stack.
  67. FIG. 6. The quantum software stack is comprised of four layers, each targeting the most efficient execution of jobs at different levels of detail. The bottom layer focuses on the execution of quantum circuits. Above it, the quantum runtime efficiently integrates classical and quantum computations, executes primitive programs, and implements error mitigation or correction. The next layer up (quantum serverless) provides the seamless programming environment that delivers integrated classical and quantum computations through the cloud without burdening developers with infrastructure management. Finally, the top layer allows users to define workflows and develop software applications.
    I won’t go into any further details on IBM’s model of the quantum software stack. The May 2022 roadmap update had a fair amount of that detail already.
  68. For superconducting qubits, real-time classical communication will require a latency of ∼100 nanoseconds. To achieve this latency, the controllers will be located very close to the QPU. Today, the controllers are built using FPGAs to provide the flexibility needed, but as we proceed to larger numbers of qubits and more advanced conditional logic, we will need ASICs or even cold CMOS.
    A peek into the future of the raw hardware for control of the QPU.
  69. The runtime compilation would update the parameters, add error suppression techniques such as dynamical decoupling, perform time-scheduling and gate/operation parallelization, and generate the controller code. It would also process the results with error mitigation techniques, and in the future, error correction.
    Description for the Quantum Runtime layer of the stack.
  70. Fortunately, error mitigation is pleasingly parallel, thus using multiple QPUs to run a primitive will allow the execution to be split and done in parallel.
    Hints of IBM’s future error mitigation efforts.
  71. Each layer of the software stack we just described brings different classical computing requirements to quantum computing and defines a different set of needs for different developers. Quantum computing needs to enable at least three different types of developers: kernel, algorithm, and model developers. Each developer creates the software, tools, and libraries that feed the layers above, thereby increasing the reach of quantum computing.
    The paper goes on to describe these three types of developers in a little more detail than the May 2022 roadmap update.
  72. In putting all of this together and scaling to what we call a quantum-centric supercomputer, we do not see quantum computing integrating with classical computing as a monolithic architecture. Instead, fig. 9 illustrates an architecture for this integration as a cluster of quantum computational nodes coupled to classical computing orchestration.
    IBM’s model of integration as a cluster of quantum computational nodes rather than monolithic integration.
  73. orchestration would be responsible for workflows, serverless, nested programs (libraries of common classical+quantum routines), the circuit knitting toolbox, and circuit compilation.
    A little more detail on IBM’s model of orchestration.
  74. In conclusion, we have charted how we believe that quantum advantage in some scientifically relevant problems can be achieved in the next few years. This milestone will be reached through (1) focusing on problems that admit a super-polynomial quantum speedup and advancing theory to design algorithms — possibly heuristic — based on intermediate depth circuits that can outperform state-of-the-art classical methods, (2) the use of a suite of error mitigation techniques and improvements in hardware-aware software to maximize the quality of the hardware results and extract useful data from the output of noisy quantum circuits, (3) improvements in hardware to increase the fidelity of QPUs to 99.99% or higher, and (4) modular architecture designs that allow parallelization (with classical communication) of circuit execution. Error mitigation techniques with mathematical performance guarantees, like PEC, albeit carrying an exponential classical processing cost, provide a mean to quantify both the expected run time and the quality of processors needed for quantum advantage. This is the near-term future of quantum computing.
    Putting it all together as IBM’s strategy for getting to quantum advantage in a few years.
  75. Progress in the quality and speed of quantum systems will improve the exponential cost of classical processing required for error mitigation schemes, and a combination of error mitigation and error correction will drive a gradual transition toward fault-tolerance.
    Fault tolerance won’t be a single, discrete milestone. To be honest, their fault tolerance story is a bit too much of a mish-mash for my taste. For now, for the near term, the focus needs to be on pushing towards near-perfect qubits — 3.5 to 4 to 4.5 to 5 nines of qubit fidelity — until full error correction becomes feasible. A mish-mash of error suppression and manual error mitigation is just going to be a real nightmare for any non-elite designers of quantum algorithms and developers of quantum applications. But, The ENIAC Moment will likely require super-elite quantum experts, The Lunatic Fringe, using some degree of manual error mitigation — and near-perfect qubits — to achieve some interesting degree of quantum advantage. Either way, we’re not talking about quantum computing for average technical professionals in the next few years.
  76. Classical and quantum computations will be tightly integrated, orchestrated, and managed through a serverless environment that allows developers to focus only on code and not infrastructure. This is the mid-term future of quantum computing.
    The vision of quantum computing a few years out.
  77. Finally, we have seen how realizing large-scale quantum algorithms with polynomial run times to enable the full range of practical applications requires quantum error correction, and how error correction approaches like the surface code fall short of the long term needs owing to their inefficiency in implementing non-Clifford gates and poor encoding rate. We outlined a way forward provided by the development of more efficient LDPC codes with a high error threshold, and the need for modular hardware with non-2D topologies to allow the investigation of these codes. This more efficient error correction is the long-term future of quantum computing.
    The vision even further out. Full quantum error correction will be required, but current approaches probably won’t cut it. The term non-2D topologies appears to be shorthand for achieving at least some degree of any-to-any connectivity, but I’d like to see it expressed more explicitly. It’s starting to get a bit too vague and fuzzy for my taste. And lots more research seems required.
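
To make item 24’s encoding-rate point a bit more concrete, here is a tiny sketch of my own (not numbers from the paper): the rotated surface code encodes k = 1 logical qubit in roughly n = d² data qubits (ignoring ancillas), so its rate k/n shrinks as the code distance d grows, which is exactly the “poor encoding rate” complaint, while the LDPC codes IBM describes aim for a rate that stays roughly constant.

    # Encoding rate k/n of an [[n, k, d]] code, per item 24. My own simplified
    # illustration: the rotated surface code has k = 1 and roughly n = d**2 data
    # qubits (ancilla qubits ignored), so the rate falls off as the distance grows.

    def surface_code_rate(d: int) -> float:
        """k/n with k = 1 and n = d**2 (rotated surface code, data qubits only)."""
        return 1 / (d * d)

    for d in (3, 5, 11, 21):
        print(f"distance {d:2d}: encoding rate k/n = {surface_code_rate(d):.4f}")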
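
Item 25’s stabilizer machinery has a familiar classical analogue, the three-bit repetition code: its two parity checks play the role of stabilizers, and the measured syndrome identifies which single bit flipped. A toy sketch of my own (not code from the paper):

    # Minimal sketch of syndrome decoding for the 3-bit repetition code
    # (the classical analogue of a stabilizer code's parity checks).
    # This is my own illustration, not code from IBM's paper.

    PARITY_CHECKS = [(0, 1), (1, 2)]  # "stabilizers": parity of bits 0,1 and bits 1,2

    def syndrome(bits):
        """Measure each parity check; a 1 means that check was violated."""
        return tuple((bits[i] + bits[j]) % 2 for i, j in PARITY_CHECKS)

    # Decoder: map each possible syndrome to the single-bit flip that explains it.
    DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

    def correct(bits):
        flip = DECODER[syndrome(bits)]
        if flip is not None:
            bits = list(bits)
            bits[flip] ^= 1
        return tuple(bits)

    assert correct((0, 0, 0)) == (0, 0, 0)   # no error
    assert correct((1, 0, 0)) == (0, 0, 0)   # single bit flip corrected
    assert correct((1, 1, 1)) == (1, 1, 1)   # valid logical "1" left alone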
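
The sampling overhead behind items 30 and 32 can also be made concrete. In probabilistic error cancellation, each mitigated noisy gate contributes a factor gamma ≥ 1 to the 1-norm of its quasi-probability decomposition, and the extra shots needed scale roughly as the square of the product of those factors, which is why making “the base of the exponent … close to one” matters so much. The gamma values below are made up for illustration, not IBM’s numbers.

    # Sampling overhead of probabilistic error cancellation (items 30 and 32).
    # My own simplification with made-up gamma values, not numbers from IBM:
    # each mitigated gate contributes a factor gamma >= 1, and the shot-count
    # overhead scales roughly as the square of the product of those factors.

    def pec_sampling_overhead(gamma_per_gate: float, num_gates: int) -> float:
        """Multiplicative increase in shots versus an unmitigated estimator."""
        return (gamma_per_gate ** num_gates) ** 2

    for gamma in (1.01, 1.001, 1.0001):
        print(f"gamma = {gamma}: a 1,000-gate circuit needs "
              f"~{pec_sampling_overhead(gamma, 1000):,.1f}x more shots")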
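
Item 33’s zero-noise extrapolation is, at its core, a small amount of classical post-processing: measure the same observable at several amplified noise levels, fit a curve, and read off the fit at zero noise. The numbers below are made up purely for illustration; this is my own toy sketch, not data or code from the paper.

    # Toy zero-noise extrapolation (item 33). The measured values are made up
    # purely for illustration; this is my own sketch, not data from IBM.

    import numpy as np

    noise_scale = np.array([1.0, 1.5, 2.0, 3.0])     # noise amplification factors
    measured = np.array([0.82, 0.74, 0.67, 0.55])    # noisy expectation values (hypothetical)

    # Linear extrapolation back to the zero-noise limit
    coeffs = np.polyfit(noise_scale, measured, deg=1)
    zero_noise_estimate = np.polyval(coeffs, 0.0)
    print(f"extrapolated zero-noise value: {zero_noise_estimate:.3f}")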
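
And the “scale” expression in item 59 is just nested multiplication. Under my reading of the quoted definitions, and with purely hypothetical numbers (these are not IBM targets):

    # The scale expression from item 59, n = ([(q x m) x l] x t) x p, is nested
    # multiplication. The example numbers are hypothetical, purely for illustration,
    # and the reading of l, t, and p is my own interpretation of the quoted text.

    def total_qubits(q: int, m: int, l: int, t: int, p: int) -> int:
        """q qubits per chip, m chips per QPU, l microwave-linked QPUs per fridge,
        t optically linked fridges, p classically parallelized copies."""
        return q * m * l * t * p

    # e.g. 400-qubit chips, 3 chips per QPU, 3 QPUs per fridge, 2 fridges, 2 copies
    print(total_qubits(q=400, m=3, l=3, t=2, p=2))  # -> 14400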

Some other general comments

  1. No mention or discussion of shot count or circuit repetitions and the desirability of having a large number of quantum processors to reduce the wall clock time for a large number of shots — many tens of thousands, hundreds of thousands, or even millions of repetitions.
  2. No mention of the Quantum Volume (QV) metric, its limitations, or a replacement metric.
  3. No mention of pursuing full any-to-any connectivity for an entire quantum processor.
  4. No mention of achieving non-local connectivity for all qubits across chips for a multi-chip quantum processor. Just the connectivity of designated qubits at the edge of the chip.
  5. No mention of granularity for phase and probability amplitude. Fine granularity is needed.
  6. No mention or setting of expectations of the number of physical qubits per logical qubit. 57 or 65 as in a previous paper? 1,000? More?
  7. For years now IBM has been telling us that the more limited qubit connectivity of their new heavy hex qubit topology was essential to their error correction strategy. Now, all of a sudden, out of the blue, they are touting on-chip non-local couplers as the optimal path to error correction. We’ll have to see what story they are telling next year and the year after as the technology evolves. It would seem that the research on error correction is still not settled, so it is likely premature for IBM or anybody else to be telling an error correction story as if it were a firm commitment rather than a mere placeholder for future research results.
  8. IBM is still not giving us a firm sense or even a rough sense of what the end point for their quantum error correction efforts will be — will logical qubits be absolutely perfect with absolutely no errors, or will there be some residual error? Based on what I know and can gather, there likely will be some residual error — the question is what magnitude it will have. Will the residual error be one in a million, one in a billion, one in a trillion, one in a quadrillion, or what? Will it be a fixed residual error rate, or can users tune the configuration to trade off performance, capacity, and error rate? IBM should endeavor to provide us with an error correction end point story which sets some sort of expectation, even if still somewhat rough. And we need some sense of how that story might be expected to evolve as the hardware and architecture evolves. Even if expectations can’t be set precisely, we at least need an approximate placeholder for expectations.
  9. Technically, this IBM paper focuses only on superconducting qubits, and doesn’t necessarily apply to other qubit technologies. Indeed, what is the future of trapped ion, neutral atom, silicon spin, topological, or other qubit technologies?
  10. Overall, I feel that their architecture has gotten too complicated, with too many options, trying to compensate for near-term hardware issues. Just give us near-perfect qubits, full any-to-any qubit connectivity, and at least a million gradations for phase and probability amplitude — as a starter, so we can do some basic but practical applications, and save any complexity for later, after all of the basic stuff is working robustly.

Physics World interview of Jay Gambetta

A few tidbits of additional color on the IBM roadmap and paper can be found in a recent interview of IBM executive Jay Gambetta by Physics World:

Late-breaking additional work on quantum error correction by IBM

After I finished writing this informal paper and was just about to post it, IBM posted a blog post which detailed some further insight into IBM’s thinking and work on quantum error correction:

Some of it is already covered by the paper from September 14th, but some of it appears to be even newer.

I won’t attempt to detail it all here, but the bottom line is that quantum error correction is a very active area of research. It’s no surprise that IBM’s roadmap update from May did not offer a detailed roadmap of milestones for error correction when so much research is still pending and nobody knows how much additional research will be needed.

This leads me back to my conviction of more than two and a half years ago that near-perfect qubits — four to five nines of qubit fidelity, or maybe 3.5 nines for some quantum algorithms — are the most beneficial area of focus since they are simultaneously useful in their own right and provide a better foundation for whatever error correction approach is ultimately chosen.

As IBM even noted in their September 14th paper, “a quantum processing unit (QPU) with two-qubit gate fidelity of 99.99% can implement circuits with a few thousand gates to a fair degree of reliability without resorting to error correction.” Yes, we need to support quantum algorithms with more than a few thousand gates, eventually, but 3.5 to four nines of qubit fidelity will get us to a very interesting milestone while we patiently wait for the holy grail of full, automatic, and transparent quantum error correction to arrive.
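
As a quick sanity check on that quoted claim, here is a rough back-of-the-envelope sketch of my own (not IBM’s arithmetic): if each two-qubit gate succeeds with probability equal to its fidelity and errors compound independently, a circuit with g such gates succeeds with probability roughly fidelity**g.

    # Rough back-of-the-envelope sketch (my own arithmetic, not from IBM's paper):
    # if each two-qubit gate has fidelity f and errors compound independently,
    # a circuit with g such gates succeeds with probability roughly f**g.

    import math

    def circuit_success(fidelity: float, gate_count: int) -> float:
        """Crude estimate of the probability a circuit runs without a gate error."""
        return fidelity ** gate_count

    for label, f in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
        half_point = math.log(0.5) / math.log(f)  # gates until success drops below ~50%
        print(f"{label} ({f}): a 2,000-gate circuit succeeds ~{circuit_success(f, 2000):.0%} "
              f"of the time; ~{half_point:,.0f} gates before success falls below 50%")

By this crude measure, four nines lands squarely in the “few thousand gates” regime IBM describes, while three nines supports only a few hundred gates, which is why I keep harping on fidelity.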

My review and comments on IBM’s 2022 roadmap update from May

And to reiterate, although I won’t delve into IBM’s 2022 roadmap update here, I have already reviewed it and given my summary and comments on it:
