Enhancing DiVincenzo’s Criteria for Quantum Computing to Enable Post-NISQ Quantum Computing

Jack Krupansky
Jul 26, 2023

DiVincenzo’s criteria for quantum computing were a decent contribution back in 2000, but are in need of an upgrade now that quantum computing has advanced far beyond where it was in 2000 and threatens to show signs of usefulness (although still much more in the way of promise than actuality). The overall goal is to enable post-NISQ quantum computing to achieve at least significant if not dramatic quantum advantage for non-trivial production-scale practical real-world applications and to deliver significant business value relative to classical solutions. This informal paper (a brief note) proposes a modest enhancement to the DiVincenzo criteria, plus some areas for future enhancement.

Caveat: My comments here are all focused only on general-purpose quantum computers. Some may also apply to special-purpose quantum computing devices, but that would be beyond the scope of this informal paper. For more on this caveat, see my informal paper:

First, here’s DiVincenzo’s original paper, vintage 2000:

  • The Physical Implementation of Quantum Computation
  • David P. DiVincenzo, IBM
  • After a brief introduction to the principles and promise of quantum information processing, the requirements for the physical implementation of quantum computation are discussed. These five requirements, plus two relating to the communication of quantum information, are extensively explored and related to the many schemes in atomic physics, quantum optics, nuclear and electron magnetic resonance spectroscopy, superconducting electronics, and quantum-dot physics, for achieving quantum computing.
  • Submitted February 25, 2000
  • https://arxiv.org/abs/quant-ph/0002077

See his paper for more detail, but here’s the abbreviated list of the five criteria (keeping his wording intact) for quantum computation:

  1. A scalable physical system with well characterized qubits.
  2. The ability to initialize the state of the qubits to a simple fiducial state, such as |000…>.
  3. Long relevant decoherence times, much longer than the gate operation time.
  4. A “universal” set of quantum gates.
  5. A qubit-specific measurement capability.

DiVincenzo’s criteria can be seen as describing even the most basic general-purpose quantum computers. Technically, they may describe a somewhat broader class of quantum computers, including at least some, but not necessarily all, special-purpose quantum computing devices.

The enhancements proposed here in this informal paper are intended to raise the bar from merely any general-purpose quantum computer to the level of capabilities for what I call a practical quantum computer, which I describe in my informal paper:

The overall goal of the suggested enhancements is to enable post-NISQ quantum computing, as described in my informal paper:

Oversimplifying, the overall goal is to achieve at least significant if not dramatic quantum advantage for non-trivial production-scale practical real-world applications and to deliver significant business value relative to classical solutions.

I suggest the following important enhancements to DiVincenzo’s criteria:

  1. Relatively low error rate. What I call near-perfect qubits. Especially for 2-qubit gates, and for qubit measurement as well. Not long ago, a 1% error rate (two nines) was a real challenge. Even 0.5% (2.5 nines) is a challenge today. We need to get past a 0.1% error rate (three nines) as soon as possible, and we’re close, but it’s still tantalizingly just beyond our reach, although we might hit it for some devices by the end of the year (2023). But we really need to achieve a 0.01% (four nines) or even a 0.001% (five nines) error rate to achieve practical quantum computing (the first sketch after this list shows how error rate bounds circuit size). Error mitigation helps a little, but not enough. Full quantum error correction promises a lot, but in my view will never be achieved — we need to achieve what I call near-perfect qubits instead.
  2. Full any-to-any (all-to-all) qubit connectivity. For 2-qubit gates. Any two qubits can be used in a single 2-qubit gate, regardless of the qubit topology. Trapped-ion qubits support this easily. Technically, one can simulate this for superconducting transmon qubits using SWAP networks, but I see that as a very poor solution, unlikely to work well for most applications (the connectivity sketch after this list gives a rough sense of the overhead). What’s really needed is some sort of quantum state bus or dynamically routable couplers. In any case, this requirement is essential for more advanced and complex algorithms, such as those using quantum Fourier transforms and quantum phase estimation, especially to support quantum computational chemistry.
  3. Fine granularity of phase and probability amplitude. A handful of gradations, or even 16, 32, or 64, is okay for toy algorithms, but not for production-scale quantum algorithms intending to deliver any dramatic or even merely significant quantum advantage. Thousands, even millions of gradations are needed (the granularity sketch after this list shows why). Especially needed for quantum Fourier transforms and quantum phase estimation, especially to support quantum computational chemistry.
  4. Significantly large maximum circuit size. Raw coherence time is not enough — short gate execution time is needed as well. Coherence time needs to be divided by gate execution time to get maximum circuit size (the first sketch after this list includes this arithmetic). Dozens or hundreds of gates are fine for toy algorithms, but not for production-scale algorithms, where 1,000 to 2,500 gates are a good start. I don’t imagine achieving millions let alone billions of gates, but maybe tens of thousands, and certainly several thousand or even ten thousand as a goal. Maybe this isn’t adding a new criterion so much as clarifying or replacing DiVincenzo’s current but vague coherence time criterion (“much longer than the gate operation time”), where dozens or even a hundred gates qualify as “much”, but much more than that is needed.
  5. Capable of non-trivial quantum Fourier transform and quantum phase estimation. At least 20 to 24 qubits, as a start (20 to 24 in and 20 to 24 out, for a total of 40 to 48 qubits). I’m not sure how much more can realistically be achieved, even ideally, but this is a fair initial goal. This should follow from the other four criteria, but it’s worth emphasizing as a goal in its own right, especially to support quantum computational chemistry. The overall goal is to achieve at least significant if not dramatic quantum advantage for non-trivial production-scale practical real-world applications and to deliver significant business value relative to classical solutions.
  6. Classical simulation for non-trivial circuits. Including configurable qubit topology and noise models. Needed when real quantum hardware is not yet available or is too noisy, as well as to enable debugging of quantum algorithms, since quantum state is not visible during execution of quantum circuits on real hardware.
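
To make the arithmetic behind criteria 1 and 4 concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from DiVincenzo’s paper; it assumes independent, uniform per-gate errors, which real hardware only approximates):

```python
# Back-of-the-envelope sketch: how per-gate error rate and coherence time
# bound circuit size. Assumes independent, uniform per-gate errors.

def circuit_success_probability(error_rate: float, gate_count: int) -> float:
    """Crude estimate: probability that no gate errs."""
    return (1.0 - error_rate) ** gate_count

def max_circuit_size(coherence_time_us: float, gate_time_ns: float) -> int:
    """Coherence time divided by gate execution time, per criterion 4."""
    return int(coherence_time_us * 1000.0 / gate_time_ns)

for label, p in [("two nines (1%)", 0.01),
                 ("three nines (0.1%)", 0.001),
                 ("four nines (0.01%)", 0.0001)]:
    print(f"{label}: a 1,000-gate circuit succeeds "
          f"~{circuit_success_probability(p, 1000):.1%} of the time")

# Example: 100 us coherence and 40 ns gates allow ~2,500 sequential gates.
print("max circuit size:", max_circuit_size(100.0, 40.0))
```

At two nines, a 1,000-gate circuit almost always fails. At three nines, it succeeds only about a third of the time. At four nines, it succeeds about 90% of the time, which is why I insist on four or five nines for practical quantum computing.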
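
For criterion 2, here is a rough sense of why SWAP networks are such a poor substitute for true any-to-any connectivity. This sketch is my own simplification, assuming a linear-chain topology (real transmon topologies differ in detail, but the moral is the same):

```python
# Rough SWAP overhead on a linear chain of qubits (my own simplification).
# To apply a 2-qubit gate between qubits i and j, they must first be made
# adjacent via SWAPs; each SWAP typically costs 3 CNOTs on transmons.

def swap_overhead(i: int, j: int) -> tuple[int, int]:
    """Return (SWAP count, extra CNOT count) to bring qubits i and j
    adjacent and then restore the original layout afterward."""
    distance = abs(i - j)
    swaps = 2 * max(distance - 1, 0)  # route there, then route back
    return swaps, 3 * swaps

for i, j in [(0, 1), (0, 10), (0, 50)]:
    swaps, cnots = swap_overhead(i, j)
    print(f"qubits {i} and {j}: {swaps} SWAPs, {cnots} extra CNOTs")
```

Every extra CNOT compounds the 2-qubit error rate, so a nominally short circuit can blow through the error budget of criterion 1 on a sparsely connected device.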
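
And for criteria 3 and 5: the controlled-phase rotations in an n-qubit quantum Fourier transform go down to an angle of 2π/2^n, so the hardware must resolve roughly 2^n distinct phase gradations. A minimal sketch of that arithmetic (my own illustration):

```python
import math

# Smallest controlled-phase rotation angle in an n-qubit QFT, and the
# number of distinct phase gradations the hardware must resolve.

def qft_granularity(n: int) -> tuple[float, int]:
    return 2.0 * math.pi / 2**n, 2**n

for n in [4, 10, 20, 24]:
    angle, gradations = qft_granularity(n)
    print(f"{n}-qubit QFT: smallest rotation {angle:.2e} rad, "
          f"~{gradations:,} gradations")
```

A 20-qubit QFT already requires on the order of a million gradations, which is where the “thousands, even millions” figure in criterion 3 comes from.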

Some open areas for future enhancement — or at least further research:

  1. Dynamic circuits. Mid-circuit measurement, reset, and classical code execution. Fairly new, but may be well-enough understood to move it to my main list of required enhancements.
  2. Infrastructure software. Not quantum-specific per se, but quantum needs it. Again, may be well-enough understood to move it to my main list of required enhancements.
  3. Scale. Clarify what we are talking about as a required minimum — 48, 64, 80, 96, 128, 256, 512, 1K, 2K, 4K, 8K, 16K, 32K, 64K, 128K qubits, or whatever. Personally, I don’t see anything beyond 256 qubits as being realistic for practical quantum computing. Maybe set a minimum, such as 48, to deter wasting energy on toy algorithms.
  4. Higher-level quantum programming model. The current gate-level programming model leaves a lot to be desired relative to the rich and sophisticated Turing machine model of classical computers, with its rich control structures and rich data types. Ultimately, we need a true universal quantum computer.
  5. Rich collection of high-level quantum algorithmic building blocks. Nobody should be coding quantum algorithms at the individual gate level — except those designing the lowest level quantum algorithmic building blocks.
  6. Rich collection of high-level quantum design patterns. Nobody should have to start their quantum algorithms and quantum applications from scratch.
  7. Quantum-native high-level programming language. Support a higher-level quantum programming model, high-level quantum algorithmic building blocks, and high-level quantum design patterns.
  8. Analysis tools to confirm scalability of a quantum algorithm based on input data and input parameters, and a target qubit topology and noise model. Such as detecting dependence on fine granularity of phase and probability amplitude.
  9. Analysis tools to determine shot count (circuit repetitions) needed for a quantum algorithm based on input data and input parameters, and a target qubit topology and noise model (the sketch after this list shows the basic statistics).
  10. Rich collection of application-level support for interpreting quantum circuit results. Nobody should have to reinvent the wheel for interpretation and statistical analysis of the results of executing quantum circuits, especially for circuit repetitions (shot count).
  11. Rich collection of high-level quantum application frameworks. Nobody should have to start their quantum applications from scratch.
  12. Modularity. Ability to daisy-chain an arbitrary number of modules to produce a larger quantum computer. Modular design of quantum computer systems should be the norm.
  13. Quantum networking and distributed quantum computing. Connecting separate quantum computer systems and enabling transfer of quantum state as well as executing 2-qubit gates between systems. Enable quantum algorithms and quantum applications to communicate and share data between quantum computer systems. At least for local area quantum networks — wide area quantum networks are a much more distant and uncertain vision.
  14. Error correction. Huge area. See below.
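
As a taste of what the shot-count analysis tool of item 9 might compute, here is a minimal sketch based on the standard binomial confidence interval (my own illustration, not an existing tool; a real tool would also fold in the noise model and topology):

```python
import math

# Shots needed to estimate an outcome probability p to within +/- epsilon,
# using the normal approximation to the binomial distribution.

def shots_needed(p: float, epsilon: float, z: float = 1.96) -> int:
    """z = 1.96 corresponds to ~95% confidence."""
    return math.ceil((z / epsilon) ** 2 * p * (1.0 - p))

# Example: resolving a 50/50 outcome to +/- 1% at 95% confidence:
print(shots_needed(0.5, 0.01))  # 9604 shots
```

The quadratic dependence on 1/epsilon is why shot counts balloon so quickly for algorithms that need fine resolution of output probabilities.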

Error correction is a huge area where enhancements are needed, such as:

  1. Better qubits. The better… the better. Never-ending process of improvement.
  2. Longer coherence time and shorter gate execution time. Push out the threshold where decoherence errors begin to overwhelm ability to mitigate them.
  3. Error mitigation. Originally, early and simplistic manual techniques, although more recently some people treat the term as encompassing other, more advanced techniques, with anything short of full quantum error correction (QEC) counting as error mitigation.
  4. Error suppression. General capability to reduce errors at the gate and pulse execution level, as pursued by IBM and Q-CTRL.
  5. Zero-noise extrapolation (ZNE). Run the circuit at deliberately amplified noise levels and extrapolate the measurements back to zero noise (see the sketch after this list).
  6. Probabilistic error cancellation (PEC).
  7. Full quantum error correction (QEC). The holy grail of automatic and transparent error correction which enables fault-free logical qubits and fault-tolerant quantum computing in general. I personally don’t see it as practical (see “Why I’m Rapidly Losing Faith in the Prospects for Quantum Error Correction”), but the topic needs to be raised and researched further. DiVincenzo does discuss the matter at length under the “universal” set of quantum gates requirement, but he should have made it a top-level criterion. I recommend sticking with my proposal for low error rate and near-perfect qubits as a better alternative for the indefinite future.
  8. Other. Plenty of room for further research in dealing with errors at all levels.
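
To illustrate zero-noise extrapolation (item 5) in its simplest linear form, here is a minimal sketch. The measured values are made-up placeholders purely for illustration, and production ZNE implementations use richer extrapolation models:

```python
import numpy as np

# Zero-noise extrapolation (ZNE), simplest linear form.
# Run the same circuit with noise deliberately amplified by factors
# 1x, 3x, 5x (e.g. via gate folding), then extrapolate to 0x.

noise_scales = np.array([1.0, 3.0, 5.0])
measured = np.array([0.81, 0.52, 0.30])  # placeholder expectation values

# Fit E(s) = a*s + b and evaluate at s = 0; the intercept b is the estimate.
a, b = np.polyfit(noise_scales, measured, deg=1)
print(f"zero-noise estimate: {b:.3f}")
```

The catch, of course, is that the extrapolation is only as good as the assumption that the expectation value degrades smoothly and predictably with noise, which is exactly what gets shaky on today’s hardware.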
