Nines of Qubit Fidelity

Jack Krupansky
47 min read · Jun 10, 2021

Qubit fidelity is an urgent matter for quantum computing. Even with a large enough number of qubits, the fidelity of the qubits is a key gating factor in how useful a quantum computer can be. This informal paper will discuss the terminology used to discuss qubit fidelity as well as the many issues which arise around qubit fidelity. Nines are a shorthand and simply refer to orders of magnitude, powers of ten. More precisely, nines are the order of magnitude of the inverse of the error rate. An error rate of one in a thousand — one in ten to the third power — would be three nines of qubit fidelity (99.9%.)

A few quick examples:

  1. 90% error-free operation = 10% error rate (0.10) = one nine of qubit fidelity.
  2. 99% error-free operation = 1% error rate (0.01) = two nines of qubit fidelity.
  3. 99.9% error-free operation = 0.1% error rate (0.001) = three nines of qubit fidelity.
  4. 99.99% error-free operation = 0.01% error rate (0.0001) = four nines of qubit fidelity.
  5. 98% error-free operation = 2% error rate (0.02) = 1.8 nines of qubit fidelity.
  6. 95% error-free operation = 5% error rate (0.05) = 1.5 nines of qubit fidelity.
  7. 99.3% error-free operation = 0.7% error rate (0.007) = 2.3 nines of qubit fidelity.
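
To make this convention concrete, here is a minimal Python sketch of the digit-counting style of nines used throughout this paper: count the leading 9s of the error-free percentage, then treat whatever digits remain as the fractional part. This is just an illustration of the convention, not any standard routine.

```python
def nines(error_free_percent: float) -> float:
    """Digit-counting nines: count the leading 9s of the error-free
    percentage, then treat the remaining digits as the fraction
    (99.3 -> 2.3, 98.6 -> 1.86, 99.96 -> 3.6)."""
    if error_free_percent >= 100.0:
        return float("inf")  # a perfect qubit: infinite nines
    digits = f"{error_free_percent:.6f}".replace(".", "").rstrip("0")
    count = 0
    while count < len(digits) and digits[count] == "9":
        count += 1
    rest = digits[count:]
    return count + (int(rest) / 10 ** len(rest) if rest else 0.0)

print(nines(99.9))   # 3.0 -- three nines
print(nines(98.0))   # 1.8
print(nines(99.3))   # 2.3
print(nines(99.96))  # 3.6
```

Note that a strictly logarithmic definition (minus the base-10 logarithm of the error rate) gives slightly different fractional values; 98% works out to roughly 1.7 nines by that measure. The examples above follow the digit-counting style.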

A small subset of the material here was already posted, embedded in my paper on fault-tolerant quantum computing — although this paper adds quite a bit of additional material:

But that material is worthy of general application beyond fault-tolerant quantum computing and quantum error correction per se. In fact, it is just as useful — if not even more essential — for quantum computing before error correction and logical qubits are available.

Topics discussed in this informal paper:

  1. Qubit fidelity is all about getting correct results and minimizing errors.
  2. Qubit fidelity is freedom from worry about errors in the results of a quantum computation.
  3. Qubit fidelity is the degree of confidence in the results of a quantum computation.
  4. Perfect qubits would be best but they aren’t available.
  5. Quantum error correction (QEC) might achieve perfect qubits but it isn’t yet available.
  6. Types of errors and their sources.
  7. Failure versus error.
  8. Qubit fidelity includes gate fidelity.
  9. Qubit fidelity is gate fidelity.
  10. Fidelity and reliability are approximate synonyms.
  11. Qubit fidelity and qubit reliability are approximate synonyms.
  12. Error rate.
  13. Error rate as an integer.
  14. Error rate as a fraction.
  15. Error rate as a decimal number.
  16. Error rate as a percentage.
  17. Error-free operation.
  18. Error-free operation as a decimal number.
  19. Error-free operation as a percentage.
  20. Nines of error-free operation.
  21. Qubit fidelity (reliability).
  22. Nines of qubit fidelity — the degree of perfection.
  23. Fractional nines of qubit fidelity.
  24. Nines of qubit reliability.
  25. Can error rate ever be zero (or nines ever be infinite)?
  26. Roots of nines in classical hardware availability.
  27. Low, typical, and high error rates.
  28. Per-qubit error rate.
  29. Per-qubit fidelity.
  30. Overall qubit fidelity.
  31. Single versus two qubit gate fidelity.
  32. Measurement fidelity.
  33. Why is measurement so error-prone?
  34. Composite qubit fidelity.
  35. Benchmark test for composite qubit fidelity.
  36. Effective qubit fidelity.
  37. Should bad qubits be ignored when calculating qubit fidelity?
  38. A major fly in the ointment: SWAP networks for connectivity.
  39. Optimizing qubit placement to reduce SWAP networks.
  40. Qubit fidelity for Google Sycamore Weber processor.
  41. Nuances of nines.
  42. Application-specific nines of qubit fidelity.
  43. Application requirements for qubit fidelity.
  44. What will qubit fidelity indicate about accuracy and gradations of probability amplitudes?
  45. What will qubit fidelity indicate about accuracy and gradations of phase?
  46. Might probability and phase have different qubit fidelities?
  47. Noisy qubits.
  48. Perfect qubits.
  49. Logical qubits.
  50. Near-perfect qubits.
  51. How close to perfect is a near-perfect qubit?
  52. Qubit fidelity vs. result accuracy.
  53. Relevance of qubit fidelity to The ENIAC Moment of quantum computing.
  54. Relevance of qubit fidelity to quantum error correction.
  55. Relevance of qubit fidelity to The FORTRAN Moment of quantum computing.
  56. How many nines of qubit fidelity will be needed for quantum Fourier transform and quantum phase estimation?
  57. Relevance of qubit fidelity to achieving dramatic quantum advantage.
  58. Impact of qubit fidelity on shot count (circuit repetitions).
  59. Quantum error correction requires less qubit fidelity, but it’s a tradeoff with capacity due to limited numbers of physical qubits.
  60. Regular calibration needed for qubit fidelity.
  61. Impact of coherence relaxation curves on qubit fidelity for deeper circuits.
  62. Configure simulators to match nines of real machines.
  63. Where should vendors report error rates and qubit fidelity?
  64. Nines for quantum computer system availability.
  65. Nines of reliability.
  66. Need a roadmap for nines.
  67. IBM’s Quantum Volume versus qubit fidelity.
  68. Separating qubit fidelity from Quantum Volume enables greater flexibility in characterizing performance.
  69. Summary and conclusions.

Qubit fidelity is all about getting correct results and minimizing errors

A quantum computer achieves results by executing quantum logic gates on the quantum state of qubits. As long as the correct results are produced and measured, we would say that the computer and its qubits have high fidelity.

Too many errors in the results, and we would say that the quantum computer has low fidelity.

Fidelity is a continuous spectrum from very low fidelity to very high fidelity.

Qubit fidelity is freedom from worry about errors in the results of a quantum computation

More succinctly, qubit fidelity is freedom from worry about errors in the results of a quantum computation.

Qubit fidelity is the degree of confidence in the results of a quantum computation

Another way of putting it is that qubit fidelity is the degree of confidence in the results of a quantum computation.

Perfect qubits would be best but they aren’t available

Classical computer hardware is very close to being perfect (most of the time), but quantum computer hardware is far from that reality. Perfect qubits would be great, but they simply cannot be realized with current quantum computing technology.

Quantum error correction (QEC) might achieve perfect qubits but it isn’t yet available

Quantum error correction (QEC) is a clever scheme to implement so-called logical qubits, which are virtually perfect qubits, by using a coding scheme in which many imperfect physical qubits implement each logical qubit.

Quantum error correction is a great idea and has great promise, but it simply isn’t here today nor will it be in the near future, and likely not for a number of years. Maybe in three to five years or even seven years or longer.

For more information on quantum error correction, logical qubits, and fault-tolerant quantum computing in general, consult this paper:

Types of errors and their sources

There are two broad areas of errors in quantum computations:

  1. Errors which occur within individual qubits, even when completely idle.
  2. Errors which occur when operations are performed on qubits. Qubits in action.

There are many types of errors, the most common being:

  1. Decoherence. Gradual decay of values (quantum state) over time. Even when idle.
  2. Gate errors. Each operation on a qubit introduces another potential degree of error.
  3. Measurement errors. Simply measuring a qubit has some chance of failure.

There are many sources of errors, the most common being:

  1. Environmental interference. Electromagnetic radiation, thermal, acoustical, mechanical (shocks, subtle vibrations, and earth tremors), electrical (power supplies and power sources.) Even despite the best available shielding.
  2. Crosstalk between supposedly isolated devices. Absolute isolation is not assured.
  3. Noise in control circuitry. Noise in the classical digital and analog circuitry which controls execution of gates on qubits.
  4. Imperfections in the manufacture of qubits.

Failure versus error

Generally, failure is an all-or-nothing proposition — an operation is a complete success or a complete failure, with no room for shades of gray or partial failure. But in quantum computing, values are represented as quantum states, which are represented by complex numbers — pairs of real numbers, not discrete integers, and certainly not strict binary 0 or 1. This admits the possibility of a wide range of errors beyond the simple binary possibilities of complete success and complete failure.

Further, operations — quantum logic gates — are expressed as unitary transform matrices — matrices populated with complex numbers, using real numbers of arbitrary precision. Worse, some of those real numbers are approximations of irrational numbers, such as pi, the square root of 2, or exponentials (e^ix). So, there is the prospect or even the inevitability that quantum operations themselves will have errors even before they are executed on the quantum states of actual qubits.

In fact, it’s questionable whether, in theory, even an ideal quantum computer can be expected to ever achieve absolute success or absolute failure.

There is plenty of room for a wide range of errors in quantum computations.

So, it’s important to consider both failure and errors when looking at quantum computing.

Generally, outright failure would be completely unacceptable. Call it gross failure.

Generally, errors would tend to be a bit more subtle than outright failures. Typically a small percentage of errors or an occasional error.

In any case, outright or gross failures and subtle errors are generally lumped together in analysis and specification of errors, error rates, and reliability or fidelity.

So, for the purposes of this informal paper, failure and error should generally be treated as synonyms.

Qubit fidelity includes gate fidelity

Technically, qubit fidelity in the purest sense is the capacity of a qubit to maintain its quantum state over time — nominally referred to as coherence. The separate concept of gate fidelity — the ability to assure that a quantum logic gate is correctly executed — is distinct, as is the concept of measurement error. But this paper views them collectively as qubit fidelity, covering any factor that affects the correctness of the results of a quantum computation.

In short, qubit fidelity is the degree of confidence you can have in the results of your quantum computation — at least in terms of hardware errors, as opposed to any bugs or software errors in your logic or algorithm.

Qubit fidelity is gate fidelity

Just to be clear, when people refer to qubit fidelity they are for all intents and purposes referring to gate fidelity. Sure, the fidelity of the actual qubit alone does matter as well, but it is the fidelity of execution of quantum logic gates that drives overall qubit fidelity for algorithms and applications.

Coherence time of qubits is an important factor and does limit maximum circuit depth, but subject to the limitations of coherence, it is the gate errors which will have the major impact on qubit fidelity.

Fidelity and reliability are approximate synonyms

Fidelity and reliability are approximate synonyms and can generally be used interchangeably.

Qubit fidelity and qubit reliability are approximate synonyms

Qubit fidelity and qubit reliability are approximate synonyms as well. This paper generally refers to qubit fidelity.

Error rate

The error rate for a quantum computer or even a single qubit — or of any system in general — can be conceptualized in one of five ways:

  1. The number of errors which occur per unit of time.
  2. The amount of time before an error can be expected to occur.
  3. The number of operations which can be performed before an error might be expected to occur.
  4. The fraction or percentage of operations which are error-free.
  5. The fraction or percentage of operations which fail or are in some way faulty — in error.

The latter two are the most commonly used, at least in quantum computing.

The error rate for qubit operations is expressed either as a fraction of 1.0 or as a percentage and represents the fraction of operations which are not error-free. Alternatively that same number can be referred to as the probability that a given operation might not be error-free.

This error rate is the complement of qubit reliability or qubit fidelity — the fidelity subtracted from 1.0, or from 100% when expressed as a percentage.

Some examples of error rate and qubit reliability (or qubit fidelity):

  1. 1.0 or 100% — all operations fail (or have errors). 0% qubit fidelity.
  2. 0.50 or 50% — half of operations fail (or have errors). 50% qubit fidelity.
  3. 0.10 or 10% — one in ten operations fail (or have errors). 90% qubit fidelity.
  4. 0.05 or 5% — one in twenty operations fail (or have errors). 95% qubit fidelity.
  5. 0.02 or 2% — one in fifty operations fail (or have errors). 98% qubit fidelity.
  6. 0.01 or 10^-2 or 1% — one in a hundred operations fail (or have errors). 99% qubit fidelity.
  7. 0.001 or 10^-3 or 0.1% — one in a thousand operations fail (or have errors). 99.9% (three nines) qubit fidelity.
  8. 0.0001 or 10^-4 or 0.01% — one in ten thousand operations fail (or have errors). 99.99% (four nines) qubit fidelity.
  9. 0.00001 or 10^-5 or 0.001% — one in a hundred thousand operations fail (or have errors). 99.999% (five nines) qubit fidelity.
  10. 0.000001 or 10^-6 or 0.0001% — one in a million operations fail (or have errors). 99.9999% (six nines) qubit fidelity.
  11. 0.0000001 or 10^-7 or 0.00001% — one in ten million operations fail (or have errors). 99.99999% (seven nines) qubit fidelity.
  12. 0.00000001 or 10^-8 or 0.000001% — one in a hundred million operations fail (or have errors). 99.999999% (eight nines) qubit fidelity.
  13. 0.000000001 or 10^-9 or 0.0000001% — one in a billion operations fail (or have errors). 99.9999999% (nine nines) qubit fidelity.

Forms for expressing fidelity

Qubit fidelity can be expressed in a number of ways:

  1. Error rate as an integer.
  2. Error rate as a fraction.
  3. Error rate as a decimal number.
  4. Error rate as a percentage.
  5. Error-free operation as a decimal number.
  6. Error-free operation as a percentage.
  7. Nines of error-free operation.

Each of these will be discussed in the following sections.

Error rate as an integer

The error rate can be expressed as an integer to indicate how many operations could be expected to occur before an error is likely. Such as:

  1. 1 in 10. An error every 10 operations (on average.)
  2. 1 in 100. An error every 100 operations.
  3. 1 in 1,000. An error every 1,000 operations.
  4. 1 in 1,000,000. An error every one million operations.

Error rate as a fraction

The error rate can be expressed as a fraction with a numerator of 1 and a denominator of the error rate as an integer. Such as:

  1. 1/10. An error every 10 operations.
  2. 1/100. An error every 100 operations.
  3. 1/1,000. An error every 1,000 operations.
  4. 1/1,000,000. An error every one million operations.

Error rate as a decimal number

The error rate as a fraction can be expressed as the decimal equivalent of the fraction:

  1. 0.1. An error every 10 operations.
  2. 0.01. An error every 100 operations.
  3. 0.001. An error every 1,000 operations.
  4. 0.000001. An error every one million operations.

Error rate as a percentage

The decimal error rate can be expressed as a percentage by multiplying by 100. This is the percentage of operations which are expected to fail or otherwise have errors:

  1. 10%. An error every 10 operations.
  2. 1%. An error every 100 operations.
  3. 0.1%. An error every 1,000 operations.
  4. 0.0001%. An error every one million operations.
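
A minimal Python sketch tying these forms together for a single example, one error in a thousand operations (the last two lines preview the error-free forms discussed in the next sections):

```python
one_in_n = 1_000                       # error rate as an integer: 1 in 1,000
fraction = 1 / one_in_n                # error rate as a fraction: 1/1,000
decimal = fraction                     # error rate as a decimal: 0.001
percentage = decimal * 100             # error rate as a percentage: 0.1%

error_free_decimal = 1.0 - decimal     # error-free operation: 0.999
error_free_percent = 100 - percentage  # error-free operation: 99.9%
```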

Error-free operation

The goal is for a quantum computer to be able to execute a large number of operations over an extended period of time with minimal, or even no, errors. So, ultimately, this is the true metric we are interested in.

Error-free operation is the complement of the error rate:

  1. Error-free operation as a decimal number = 1.0 minus the error rate as a decimal number.
  2. Error-free operation as a percentage = 100% minus the error rate as a percentage.

Error-free operation as a decimal number

Error-free operation as a decimal number is 1.0 minus the error rate as a decimal number.

For example:

  1. Error rate of 0.1. Error-free operation = 1.0 minus 0.1 = 0.9.
  2. Error rate of 0.01. Error-free operation = 1.0 minus 0.01 = 0.99.
  3. Error rate of 0.001. Error-free operation = 1.0 minus 0.001 = 0.999.

Error-free operation as a percentage

Error-free operation as a percentage is 100% minus the error rate as a percentage.

For example:

  1. Error rate of 10%. Error-free operation = 100% minus 10% = 90%.
  2. Error rate of 1%. Error-free operation = 100% minus 1% = 99%.
  3. Error rate of 0.1%. Error-free operation = 100% minus 0.1% = 99.9%.

Nines of error-free operation

A rough approximation of fidelity (reliability) is the number of nines (digit “9”) in the error-free operation as a percentage.

For example:

  1. 90% error free = one nine.
  2. 99% error free = two nines.
  3. 99.9% error free = three nines.
  4. 99.96% error free = 3.6 nines. See Fractional nines of qubit fidelity.

Qubit fidelity (reliability)

The reliability (fidelity) of a qubit is characterized as the percentage of error-free operations:

  1. Qubit fidelity for an error rate of 0.1 = 90%.
  2. Qubit fidelity for an error rate of 0.01 = 99%.
  3. Qubit fidelity for an error rate of 0.001 = 99.9%.
  4. Qubit fidelity for an error rate of 0.0001 = 99.99%.

Qubit fidelity as a decimal number

In some contexts, qubit fidelity might be expressed as a decimal number:

  1. Qubit fidelity for an error rate of 0.1 = 0.9.
  2. Qubit fidelity for an error rate of 0.01 = 0.99.
  3. Qubit fidelity for an error rate of 0.001 = 0.999.
  4. Qubit fidelity for an error rate of 0.0001 = 0.9999.

Nines of qubit fidelity — the degree of perfection

The degree of perfection of a qubit can be measured using so-called nines — the count of 9 digits when the qubit fidelity (reliability) is expressed as a percentage of error-free operation, such as:

  1. One nine. Such as 90%, 98%, 97%, or maybe even 95%. One error in 10, 50, 33, or 20 operations.
  2. Two nines. Such as 99%, 99.5%, or even 99.8%. One error in 100 operations.
  3. Three nines. Such as 99.9%, 99.95%, or even 99.98%. One error in 1,000 operations.
  4. Four nines. Such as 99.99%, 99.995%, or even 99.998%. One error in 10,000 operations.
  5. Five nines. Such as 99.999%, 99.9995%, or even 99.9998%. One error in 100,000 operations.
  6. Six nines. Such as 99.9999%, 99.99995%, or even 99.99998%. One error in one million operations.
  7. Seven nines. Such as 99.99999%, 99.999995%, or even 99.999998%. One error in ten million operations.
  8. And so on. As many nines as you wish.

Whether more than seven nines can be achieved or how much further than seven nines can be achieved is unknown at this time.

Fractional nines of qubit fidelity

Error rates are not always as clean and tidy as 1 in N operations where N is an integer power of ten. In such cases we can have fractional nines of qubit fidelity, where we have some number of nines followed by one or a few decimal digits which are less than 9 (1 to 8), such as:

  1. 98%, 97%, 95% — 1.8, 1.7, or 1.5 nines. One error in 50, 33, or 20 operations, in contrast to 90% (1 nine) which is one error in 10 operations or 99% (2 nines) which is one error in 100 operations.
  2. 99.8%, 99.7%, 99.5% — 2.8, 2.7, or 2.5 nines. One error in 500, 333, or 200 operations, in contrast to 99% (2 nines) which is one error in 100 operations or 99.9% (3 nines) which is one error in 1,000 operations.
  3. 99.98%, 99.97%, 99.95% — 3.8, 3.7, or 3.5 nines. One error in 5,000, 3,333, or 2,000 operations, in contrast to 99.9% (3 nines) which is one error in 1,000 operations or 99.99% (4 nines) which is one error in 10,000 operations.
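
Going the other direction, here is a minimal sketch (again, just an illustration) that converts a single-fractional-digit nines value back to an error rate and an operations-per-error count:

```python
def nines_to_error_rate(n: float) -> float:
    """Invert the digit convention for one fractional digit:
    2.7 nines -> 99.7% error-free -> 0.003 error rate."""
    whole = int(n)                    # count of leading 9s
    digit = round((n - whole) * 10)   # the single trailing digit, 0-9
    return (10 - digit) / 10 * 10 ** (-whole)

print(round(nines_to_error_rate(2.7), 6))   # 0.003 -- one error in ~333 operations
print(round(1 / nines_to_error_rate(3.5)))  # 2000 -- 99.95% is one error in 2,000
```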

Nines of qubit reliability

Nines of qubit reliability is simply a reference to nines of qubit fidelity.

Can error rate ever be zero (or nines ever be infinite)?

In keeping with the principles of quantum mechanics, I’m wondering if the error rate for a qubit can ever, even theoretically, be absolutely 0.0, and similarly whether the nines of qubit fidelity can ever be infinite. Or, as I suspect, there is some minimum error rate, call it epsilon, below which the error rate can never fall, since it is really just the quantum uncertainty of performing any operation or any measurement whenever a quantum effect is involved.

If so, the maximum number of nines of qubit fidelity would be roughly the negation of the base-10 logarithm of that minimum epsilon error rate expressed as a percentage, plus 2. Equivalently, it is the negation of the base-10 logarithm of that minimum epsilon error rate expressed as a decimal.

So, if the minimum error rate (epsilon) as a percentage was ten to the minus 20, the maximum number of nines would be 22 (20 plus 2.)

Just to test that math: if epsilon were 0.001, one in a thousand, or 0.1%, then the corresponding number of nines would be 3. The base-10 logarithm of 0.1 (the percentage form) is minus 1, which negates to plus 1; adding 2 gives 3 as the number of nines.
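
That arithmetic is easy to check in Python (the function name is mine, and the minimum error rate epsilon is purely hypothetical):

```python
import math

def max_nines_from_epsilon(epsilon_decimal: float) -> float:
    """Maximum nines if the error rate can never fall below epsilon,
    where epsilon is expressed as a decimal fraction, not a percentage."""
    return -math.log10(epsilon_decimal)

print(max_nines_from_epsilon(1e-22))  # 22.0 -- epsilon of 10^-20 as a percentage
print(max_nines_from_epsilon(0.001))  # 3.0  -- one error in a thousand
```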

Whether there is indeed such a minimum error rate is unknown, at least to me, at this time.

But it does seem inescapable to me, based on the nature of quantum mechanics and observations of quantum effects.

This also leaves me wondering whether even the best quantum error correction (QEC) can ever achieve absolutely perfect qubits. Maybe QEC has some clever trick to avoid the issue — or maybe not. Whether a quantum error correction coding scheme has its own epsilon for logical qubits, which may or may not differ from any possible epsilon of the underlying physical qubits, is unknown, to me, at this time.

Roots of nines in classical hardware availability

I first encountered this concept of nines in the context of availability of classical computing systems, particularly uptime — where service without interruption for 99.999% of the time is considered 5 nines reliability.

Low, typical, and high error rates

Alas, there is no single observable which can be measured to get a single overall error rate for all qubits of an entire quantum computer or even a single qubit.

Instead, a large number of measurements must be taken and statistically analyzed, then reported as three numbers:

  1. Low. The lowest error rate.
  2. High. The highest error rate.
  3. Typical. The average or most typical error rate.

The typical error rate is the closest you can get to a single metric for error rate, but it has to be reported with the caveat that it really is only typical or average but not the error rate which will be seen in all cases or all situations. In other words, no algorithm or application can absolutely rely on it in all situations.

Measurements must be taken in two ways:

  1. Per-qubit. Each qubit can potentially have a different error rate.
  2. Multiple runs. Some number of runs of the measurement must be performed for each qubit. There may be variability between runs.
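
As a minimal illustration, with entirely made-up numbers, the low, typical, and high values can be extracted from per-qubit, per-run measurements like this:

```python
import statistics

# Hypothetical measured error rates: one list per qubit, one entry per run.
runs_by_qubit = {
    "q0": [0.0061, 0.0049, 0.0058],
    "q1": [0.0122, 0.0104, 0.0131],
    "q2": [0.0038, 0.0047, 0.0035],
}

all_rates = [r for runs in runs_by_qubit.values() for r in runs]
low = min(all_rates)                     # best observed error rate
high = max(all_rates)                    # worst observed error rate
typical = statistics.median(all_rates)   # one plausible choice; mean also works
print(low, typical, high)                # 0.0035 0.0058 0.0131
```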

Per-qubit error rate

Technically, each qubit can have its own error rate, a per-qubit error rate.

How much of this detail will be reported for a particular quantum computer will vary from vendor to vendor. Eventually, there should be a standard, but at present there is not.

Per-qubit fidelity

Since each qubit can have its own error rate, each qubit should have its own qubit fidelity. So we should have two distinct measures of qubit fidelity:

  1. Per-qubit fidelity. One measure for each qubit, measuring only that qubit.
  2. Overall qubit fidelity. An overall measure which averages (in some unspecified fashion) the fidelities of all qubits to come up with a single overall measure of qubit fidelity. Technically, it should be a distribution with standard statistical characteristics — low, typical, high, etc., and maybe 50% and 90% measures as well.

Overall qubit fidelity

As just mentioned in the preceding section, we will have both per-qubit measures of qubit fidelity and some overall average of qubit fidelity.

Unfortunately, there isn’t great clarity as to how exactly the overall measure of qubit quality should be measured or computed.

Single versus two qubit gate fidelity

Unfortunately not all quantum logic gates have the same qubit fidelity. In particular, single-qubit gates are usually significantly higher fidelity than two-qubit gates.

Both qubit fidelities are important.

Is one more important than the other? Maybe, in some situations. Some algorithms and some applications may be much more sensitive to one or the other.

The bottom line is that both single-qubit fidelity and two-qubit fidelity must be reported.

Still, it would be nice to settle on some single metric that gives an overall, rough sense of qubit fidelity. The possibilities:

  1. The best case.
  2. The worst case.
  3. The average of the best and worst cases.
  4. The range of best to worst case.

Measurement fidelity

For whatever reasons, measuring a qubit, sometimes called readout, tends to have a significantly lower fidelity than even two-qubit gates.

So, a separate metric for measurement (readout) of qubits is needed.

Even worse, it is common that the measurement error for a 1 state tends to be significantly higher than for a 0 state.

Still, it would be nice to settle on some single metric that gives an overall, rough sense of qubit fidelity for measurement. The possibilities:

  1. The best case.
  2. The worst case.
  3. The average of the best and worst cases.
  4. The range of best to worst case.

That said, I’m reluctant to pick an approach right now that is based on current machines since hardware advances are progressing at a healthy clip, so measurement error is likely to see reductions in the future.

Why is measurement so error-prone?

Just a placeholder to reemphasize how disturbing it is that measurement is so prone to error compared to the execution of quantum logic gates.

Of course, just because this is true for current machines and current architectures doesn’t mean it will still be true for future machines and future architectures.

Composite qubit fidelity

Given the significant number of disparate metrics for qubit fidelity, it would still be useful to derive a single, rough metric that summarizes qubit fidelity in a single number.

The possibilities:

  1. The best case.
  2. The worst case.
  3. Some sort of blended or weighted average of the full range of cases.
  4. The range of best to worst case.

The really tough call is measurement error. If measurement error commonly dwarfs single and two-qubit gate error rates — at least for fairly shallow quantum circuits — it’s hard to argue vigorously against using that worst case error rate as the limiting factor for a quantum computer.

But, it all depends on circuit depth and circuit composition as well. Sure, for a shallow circuit the measurement error is likely to dominate, but for a deep circuit the per-gate errors add up quickly so that they could very well be the dominant source of error.

For now, I’m torn between two choices:

  1. Typical two-qubit error rate (fidelity). Any interesting circuit will have plenty of two-qubit gates. Presume sufficient depth that measurement error is the lesser issue.
  2. Measurement error rate. For shallow circuits.

I lean towards the former since I lean towards focusing on production-scale applications which presumably will have fairly deep circuits.

Ultimately we have two competing approaches:

  1. Select one of the many metrics as the preferred metric.
  2. Some magic formula to combine all of the many metrics into a single, composite metric.

That said, I’m still hopeful that some interesting and useful alternative for composite fidelity will be discovered.

No, IBM’s Quantum Volume metric is not a viable alternative. See the two sections, IBM’s Quantum Volume versus qubit fidelity and Separating qubit fidelity from Quantum Volume enables greater flexibility in characterizing performance near the end of this paper.

Sorry, but this discussion of composite qubit fidelity was a little simplistic — it ignored the issue of SWAP networks needed to overcome limited qubit connectivity, which can add substantially to gate errors. See the discussion in a subsequent section, A major fly in the ointment: SWAP networks for connectivity.

Benchmark test for composite qubit fidelity

Rather than some simplistic formula for calculating composite qubit fidelity from all of the raw qubit fidelities, maybe a simple benchmark test could be designed which is used to calculate the effective qubit fidelity in terms of how close the final results from the test match expected results.

The elements of the benchmark would be:

  1. A modest number of qubits. Possibly even only three, or maybe five.
  2. A circuit of modest depth. Possibly five to ten gates deep. Enough to accumulate errors.
  3. A mix of both single and two-qubit gates. Maybe three quarters single-qubit gates.
  4. Measurement of a fair fraction of the qubits. Possibly as few as three to five of the qubits, possibly half of them, possibly three quarters of them, or maybe all of them. Enough to see measurement errors.
  5. A moderate number of circuit repetitions (shot count). Enough to achieve a reasonable statistical distribution of results.
  6. Calculate the overall composite error rate and qubit fidelity. Compare actual measured results to expected results. Takes into account single-qubit fidelity, two-qubit fidelity, and measurements.

That final calculation would represent the composite qubit fidelity, also known as effective qubit fidelity.
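
One plausible way to score that final step is to compare the measured distribution against the ideal distribution using total variation distance. This is only a sketch of the idea; the function, the Bell-state example, and the counts are all hypothetical, and other scoring choices are possible:

```python
def composite_fidelity(ideal_probs: dict, counts: dict, shots: int) -> float:
    """Score a benchmark run as 1 minus the total variation distance
    between the ideal outcome distribution and the measured one."""
    outcomes = set(ideal_probs) | set(counts)
    tvd = 0.5 * sum(
        abs(ideal_probs.get(o, 0.0) - counts.get(o, 0) / shots)
        for o in outcomes
    )
    return 1.0 - tvd

# A Bell-state circuit should yield 00 and 11 with probability 0.5 each.
ideal = {"00": 0.5, "11": 0.5}
measured = {"00": 487, "11": 469, "01": 26, "10": 18}  # hypothetical counts
print(composite_fidelity(ideal, measured, shots=1000))  # ~0.956
```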

It might make sense to have multiple sizes for the benchmark:

  1. Small. 5–8 qubits.
  2. Medium. 12 to 28 qubits.
  3. Large. 32 to 40 qubits.
  4. Extra large. 50 to 80 qubits.

Sorry, but this discussion of benchmarking qubit fidelity was a little simplistic — it ignored the issue of SWAP networks needed to overcome limited qubit connectivity, which can add substantially to gate errors. See the discussion in a subsequent section, A major fly in the ointment: SWAP networks for connectivity.

No, IBM’s Quantum Volume metric is not a viable alternative since it requires classical simulation of the quantum circuit, which is not practical for quantum computers with 50 or so qubits, and it gives an odd composite metric (roughly qubit count times circuit depth) which actually masks rather than highlights any specific measure of qubit fidelity.

Effective qubit fidelity

Effective qubit fidelity is simply a synonym for the composite qubit fidelity described in the preceding sections, either:

  1. Select one of the many metrics as the preferred metric.
  2. Some magic formula to combine all of the many metrics into a single, composite metric.
  3. A benchmark test to empirically derive effective average qubit fidelity across all of the individual metrics.

Should bad qubits be ignored when calculating qubit fidelity?

Some qubits simply don’t work well at all. It would be a shame to drag down the nines of the entire machine just due to a few bad qubits. So it would seem to make sense to discount, ignore, and block out the bad qubits. In fact, preferably, flat out don’t use them at all.

Or, maybe give two metrics — one with the bad qubits and one without them.

Maybe individual applications could set a required fidelity threshold so that only the best qubits are used and counted in overall composite qubit fidelity.
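
A minimal sketch of that idea, with a hypothetical per-qubit fidelity map and an arbitrary threshold:

```python
# Hypothetical per-qubit fidelities; 0.99 is an arbitrary application threshold.
per_qubit_fidelity = {0: 0.994, 1: 0.982, 2: 0.991, 3: 0.945, 4: 0.996}
threshold = 0.99

usable = {q: f for q, f in per_qubit_fidelity.items() if f >= threshold}
overall = sum(usable.values()) / len(usable)  # composite over good qubits only
print(sorted(usable), round(overall, 4))      # [0, 2, 4] 0.9937
```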

But there are three distinct use cases:

  1. Some applications require higher qubit fidelity.
  2. Some have lower required fidelity.
  3. Some may not have a threshold at all.

So what default threshold should be used?

Maybe the threshold should be a percentage of atypical fidelity to permit or discard.

What should be the general threshold for atypical qubit fidelity? Some possibilities:

  1. Maybe look at nines for the best 20% of the overall machine. Anything more than two nines from that 20% would be atypical.
  2. Or maybe 50% — or allow the system operator to configure the minimum number of qubits to set the atypical threshold.

A major fly in the ointment: SWAP networks for connectivity

My analysis above concerning calculation or derivation of composite qubit fidelity ignored a major factor: the need for SWAP networks to compensate for very limited qubit connectivity.

Except for trapped-ion quantum computers, which support true any-to-any connectivity (any two qubits anywhere in the qubit lattice can directly be used in a 2-qubit quantum logic gate), the execution of a 2-qubit quantum logic gate on two qubits which are not physically adjacent requires the use of a so-called SWAP network. The SWAP network moves the quantum state of one or both of the non-adjacent qubits so that the quantum states of the two original qubits end up residing in two physically-adjacent qubits, where the 2-qubit quantum logic gate can be performed directly.

The execution of a SWAP network occurs one qubit pair at a time. Each SWAP operation, to swap the quantum states of two physically-adjacent qubits, typically requires execution of three CNOT quantum logic gates, and each CNOT gate incurs yet another 2-qubit gate error. The gate errors can accumulate rapidly, so significant qubit fidelity is needed.

The exact sequence of SWAP operations will need to be determined by a so-called routing algorithm. That routing can be performed manually, but automated tools are preferable. Routing may require swapping just a few qubits or maybe a dozen or more qubits.

And all of this is done merely as a precursor to performing a desired 2-qubit gate.

I won’t try to derive a complete model of the effective composite qubit fidelity when quantum state must be moved across n qubits (a minimal sketch follows the list below), but it involves the following elements:

  1. n steps are needed — quantum state must be moved a distance of n qubits.
  2. Each step, a SWAP operation, is actually three CNOT gates.
  3. Each CNOT gate is a 2-qubit gate which has a typical error rate.
  4. Finally, the desired 2-qubit gate can be executed.
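
Here is the minimal sketch promised above. It assumes each routing step costs exactly three CNOTs and that gate fidelities compound multiplicatively, both simplifications:

```python
def effective_gate_fidelity(f_2q: float, distance: int) -> float:
    """Fidelity of a 2-qubit gate between qubits `distance` steps apart:
    each step is a SWAP built from three CNOTs, plus the desired gate."""
    total_gates = 3 * distance + 1
    return f_2q ** total_gates

# Two nines per CNOT and ten routing steps leaves barely one nine.
print(round(effective_gate_fidelity(0.99, 10), 3))  # 0.732
```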

The thing to keep in mind is that in addition to the fact that a single SWAP network can be rather large and introduce a large error, there may be a relatively large number of non-adjacent 2-qubit gates in a complex quantum circuit.

Net-net, the SWAP networks needed for a large quantum circuit might reduce the effective nines of qubit fidelity by two or three nines. So, you may need physical qubits with five nines of qubit fidelity to achieve three or even only two nines of net, effective qubit fidelity.

SWAP networks can be complicated, so although they can be constructed by hand, it is preferable to use automated compilers or optimizers to translate logical qubit references into a SWAP network and a final physical qubit reference.

Optimizing qubit placement to reduce SWAP networks

Another important design consideration for quantum circuits is placement of qubits, so that the need for SWAP networks can be reduced or even eliminated. Optimal placement can dramatically reduce the size of each SWAP network.

Placement optimization can be performed by hand, but automated compilers or optimizers are more attractive.

How to factor all of this into estimating the effective qubit fidelity of a machine is an interesting challenge. Especially since some applications may incur relatively minimal SWAP networks even as other applications incur dramatic SWAP networks.

Qubit fidelity for Google Sycamore Weber processor

Google recently published the technical datasheet for its Sycamore Weber processor:

The Performance section of that datasheet provides quite a few error rates, including:

  1. Low, typical, and high error rates. Given as percentages. For each of the following categories.
  2. Single-qubit gate error rates. Isolated.
  3. Two-qubit gate error rates. Both isolated and parallel.
  4. Readout (measurement) error for the 0 state. Both isolated and parallel.
  5. Readout (measurement) error for the 1 state. Both isolated and parallel. Roughly three times greater than readout error for the 0 state.

Unfortunately, they don’t give an overall summary measure of qubit fidelity.

Here are a few of the error metrics, as reported by Google as error percentages, along with the corresponding error-free percentages and nines — the latter two calculated by me:

  1. Typical isolated single-qubit error rate: 0.1% = 99.9% = three nines.
  2. Typical isolated two-qubit error rate: 0.9% = 99.1% = 2.1 nines.
  3. Typical parallel two-qubit error rate: 1.4% = 98.6% = 1.86 nines.
  4. Typical readout 0 isolated error rate: 1.1% = 98.9% = 1.89 nines.
  5. Typical readout 0 simultaneous error rate: 2.0% = 98.0% = 1.8 nines.
  6. Typical readout 1 isolated error rate: 5.0% = 95.0% = 1.5 nines.
  7. Typical readout 1 simultaneous error rate: 7.0% = 93.0% = 1.3 nines.

Wow, readout (measurement) has a fairly low fidelity.

Google also provides heatmaps detailing the error rate for each qubit — per-qubit fidelity.

Google does not present qubit fidelity as nines, just error rate as percentages. I calculated nines myself.
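
Those conversions can be reproduced with the digit-counting nines() helper sketched near the top of this paper (the error percentages are Google’s; the calculation is mine):

```python
# Reuses the nines() digit-counting helper sketched near the top of this paper.
for err_pct in (0.1, 0.9, 1.4, 1.1, 2.0, 5.0, 7.0):
    print(f"{err_pct}% error -> {100 - err_pct}% error-free -> {nines(100 - err_pct)} nines")
# 0.1 -> 3.0, 0.9 -> 2.1, 1.4 -> 1.86, 1.1 -> 1.89,
# 2.0 -> 1.8, 5.0 -> 1.5, 7.0 -> 1.3
```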

Overall, I would say that Google offers 1.3 nines of qubit fidelity since that is the worst case (actually, the typical case for the worst metric), even though single-qubit operations do in fact offer three nines of qubit fidelity.

Nuances of nines

As illustrated in the preceding section on qubit fidelity for the Google Sycamore Weber processor, there can be quite a few nuances of qubit fidelity.

On the one hand, it’s great to have all of these nuances available, but it’s unfortunate that the overall performance of the machine cannot be summarized with a single metric — or at least a relatively small number of summary metrics.

Application-specific nines of qubit fidelity

Every application and application category will tend to have its own pattern of usage of qubits and quantum logic gates. The nines of qubit fidelity will be the same for all applications, but each application’s particular pattern of qubit and gate usage will give it its own overall application error rate. Although it won’t be terribly useful to compare the aggregate error rate between disparate applications, it will be useful to compare the aggregate error rates for similar applications, multiple runs of the same application, or runs of the same application on new versions of the quantum hardware.

The overall application-specific nines would characterize the fidelity of the overall application results. Technically this is really the accuracy of the results, but it can conveniently be expressed in nines form.

One can also work backwards, starting with the desired overall aggregate error rate for the application to get an estimate of the raw qubit fidelity which will likely be needed to achieve the desired result accuracy.

Application requirements for qubit fidelity

Part of designing any new quantum algorithm or quantum application should be consideration for estimating at least the general ballpark of qubit fidelity which will be required for the algorithm or application to deliver acceptable results. Such application requirements for qubit fidelity can then be compared against the specifications of candidate hardware to determine if it is even worth trying to run the algorithm or application on a particular quantum computer system.

Application developers and users should be able to answer this simple question:

  • How many nines of qubit fidelity does your algorithm or application require?

Granted, it could be difficult to accurately estimate qubit fidelity requirements, but the effort is needed.

Simple, brute force trial and error to see if an algorithm or application can run on a particular quantum computer is a really bad and downright unprofessional approach to software development.

What will qubit fidelity indicate about accuracy and gradations of probability amplitudes?

The two basis states of each qubit have probabilities which are continuous values such that the sum of the two probabilities is exactly 1.0 — the probability for each basis state being the square of its probability amplitude. Presumably the fidelity of a qubit will determine how accurately these probabilities are maintained.

It is an open question how many gradations of probability are supported, either in theory or for a particular qubit technology. Similarly, it is an open question how many gradations of probability are supported for a given qubit technology at a given qubit fidelity.

One would hope or expect that more nines of qubit fidelity would mean more gradations of probabilities. Whether or to what extent that is true is… unknown.

The number of gradations of probability is important for algorithms which perform amplitude amplification and probability amplitude estimation.

Hopefully, further research will provide some indication of gradations of probability based on the number of nines of qubit fidelity.

Gradations and accuracy of probability can presumably be determined empirically for a given qubit technology, but it would be better to have a concise pair of formulas based on nines of qubit fidelity for a particular qubit technology. Or better yet, a universal pair of formulas which apply to all qubit technologies given their nines of qubit fidelity.

What will qubit fidelity indicate about accuracy and gradations of phase?

Similar to the preceding question about the accuracy and gradations of probability for the basis states, a similar question arises for the accuracy and gradations of the phase of a qubit based on the qubit fidelity.

Presumably the fidelity of a qubit will determine how accurately the phase of a qubit is maintained.

It is an open question how many gradations of phase are supported, either in theory or for a particular qubit technology. Similarly, it is an open question how many gradations of phase are supported for a given qubit technology at a given qubit fidelity.

One would hope or expect that more nines of qubit fidelity would mean more gradations of phase. Whether or to what extent that is true is… unknown.

The number of gradations of phase is important for algorithms which perform quantum Fourier transforms and quantum phase estimation.

Hopefully, further research will provide some indication of gradations of phase based on the number of nines of qubit fidelity.

Gradations and accuracy of phase can presumably be determined empirically for a given qubit technology, but it would be better to have a concise pair of formulas based on nines of qubit fidelity for a particular qubit technology. Or better yet, a universal pair of formulas which apply to all qubit technologies given their nines of qubit fidelity.

Even better, it would be convenient if the accuracy and gradations of probabilities and phase were identical based on nines.

For more discussion of issues related to gradations of phase, consult this paper:

Might probability and phase have different qubit fidelities?

It’s unclear whether the fidelities of qubit basis state probabilities and phase are similar, identical, or relatively different.

It sure would be nice if they were identical or reasonably similar.

But there is the possibility that they could be independent and only somewhat similar or possibly even very dissimilar.

More research is needed.

Noisy qubits

A noisy qubit, characteristic of noisy intermediate-scale quantum (NISQ) computers, has a relatively low fidelity (reliability.) Certainly not a high number of nines. Maybe not even two or three nines. A qubit with four or more nines of qubit fidelity is likely to no longer be considered a noisy qubit.

Some noisy qubit fidelities:

  1. 70% — not even a single nine.
  2. 85% — still not even a single nine.
  3. 90% — 1 nine.
  4. 95% — still only a single nine, or 1.5 nines.
  5. 99% — 2 nines. Still fairly noisy for many applications.
  6. 99.9% — 3 nines. Not terribly noisy, but still too noisy for some applications.

Perfect qubits

The ideal qubit, the perfect qubit, which currently does not exist and may never exist, at least not in the next 10 years, would have absolutely no errors for 100% fidelity (reliability.) The concept of nines would no longer be relevant, but you could say that a perfect qubit has infinite nines of fidelity.

Logical qubits

Although truly perfect qubits will almost certainly not be feasible, ever, the advent of quantum error correction (QEC) will enable logical qubits which for all intents and purposes will be considered perfect qubits. Or at least they will have a very large number of nines of qubit fidelity (very low error rate.)

A simplistic view on logical qubits is that with logical qubits nines are effectively infinite (or at least very large), or in fact, nines are no longer relevant since the error rate will be either absolutely zero or close enough to zero that few will notice.

For more on logical qubits, read:

Near-perfect qubits

We can’t really expect to achieve a perfect qubit, but we can come close, maybe even close enough that some, many, or even most applications can make do with such a near-perfect qubit.

How many nines of qubit fidelity would a near-perfect qubit have? There’s no definitive answer — it’s whatever an application needs to return results which meet the accuracy requirements of the application.

Near-perfect qubits are likely to have qubit fidelity in the range of three to five nines. Whether two nines might be sufficient for some applications is debatable. Whether some applications require more than six to nine nines is also quite debatable.

How close to perfect is a near-perfect qubit?

There are two distinct purposes for near-perfect qubits:

  1. To enable quantum error correction for logical qubits.
  2. To enable applications using raw physical qubits on NISQ devices.

Not every application will need the same number of nines of qubit fidelity.

The degree of perfection needed for an application on a NISQ device will vary greatly from application to application:

  1. Shallow depth circuits will require fewer nines.
  2. Deeper circuits will require more nines.

Granted, generalization is risky, but generally, I would say that near-perfect qubit reliability will lie between three and five nines — 99.9% to 99.99% to 99.999%. Greater reliability would be highly desirable, but much harder to achieve.

Again, some applications will require greater accuracy, more nines. It’s conceivable that some applications may require six to nine or even twelve nines of fidelity.

Qubit fidelity vs. result accuracy

Qubit fidelity is not the same as application result accuracy. Every application category, every application, and every user of every application will have their own requirements for the accuracy of the results of a quantum computation. But that doesn’t tell you anything about what fidelity a qubit or gate will require to achieve the desired result accuracy. For example, very deep circuits will quickly add up errors so that an incredibly high fidelity will be required to achieve any kind of accuracy. A relatively shallow circuit may not require much qubit fidelity at all to achieve modest to moderate result accuracy.

At present there is no single magic formula to calculate implied result accuracy from qubit fidelity — or vice versa. I suspect that each application category or even each application will need its own formula.

Worst case, the algorithm designer or application developer will be required to perform brute force tests to determine what actual result accuracy is actually achieved for qubits of a specified fidelity.

Hopefully advanced simulators will make it easy to run repeated tests with a range of qubit fidelities to check the result accuracy for each qubit fidelity in the range.

Relevance of qubit fidelity to The ENIAC Moment of quantum computing

The ENIAC Moment of quantum computing would mark the milestone of the first quantum computer capable of running a production-scale application and achieving a dramatic quantum advantage over classical computers. It is expected that quantum error correction (QEC) will not yet be available, at least not with sufficient capacity of logical qubits needed for a production-scale application. This means that the algorithm designers and application developers will have to make do with less than perfect qubits. Outright noisy qubits are unlikely to be of sufficient fidelity to support production-scale applications, so near-perfect qubits will be needed.

How close to perfect will near-perfect qubits need to be to enable The ENIAC Moment of quantum computing? That’s a great unknown which is quite debatable — and will vary between applications. Since we’re talking about production-scale and quantum advantage, something in the range of four to seven nines of qubit fidelity may be needed.

Since automatic and transparent quantum error correction (QEC) will not yet be available (by definition, since its availability would be associated with The FORTRAN Moment), some combination of manual error mitigation and high nines of qubit fidelity will be required.

An elite technical staff will likely be needed to cope with error mitigation and to otherwise work around limitations related to manual error mitigation and difficulties achieving required application result accuracy with limited qubit fidelity.

The bottom line is that a fairly high number of nines of qubit fidelity will be required to achieve The ENIAC Moment.

Relevance of qubit fidelity to quantum error correction

Although The ENIAC Moment will require relatively high nines of qubit fidelity, it is expected that various quantum error correction schemes will be able to utilize relatively noisy qubits — a low number of nines — to achieve the perfect fidelity of logical qubits.

Whether one or two nines of qubit fidelity will be sufficient for quantum error correction is questionable, though theoretically possible; more likely three or four nines will be required.

Relevance of qubit fidelity to The FORTRAN Moment of quantum computing

The FORTRAN Moment of quantum computing is predicated on full support for quantum error correction — so that non-elite technical staff can develop relatively sophisticated quantum algorithms and applications without the need to worry about manual error mitigation or even how many nines of qubit fidelity are needed to achieve required application result accuracy.

Qubit fidelity will still be relevant since it will determine how many physical qubits will be needed for each logical qubit, which in turn determines the capacity or number of logical qubits which can be supported for a given number of physical qubits.

How many nines of qubit fidelity will be needed for quantum Fourier transform and quantum phase estimation?

Quantum Fourier transform (QFT) and quantum phase estimation (QPE) are two of the most powerful algorithmic building blocks for quantum algorithms, but unfortunately they are not practical at present due to the very low qubit fidelity of current and near-term NISQ quantum computers. The question is how many nines of qubit fidelity would be needed for QFT and QPE to become practical for the kind of production-scale applications needed to achieve dramatic quantum advantage. Quick answer: unknown, at present.

More research is needed. Researchers are allowing themselves to get distracted by variational methods, which unfortunately will likely never achieve dramatic quantum advantage.

Most likely, QFT and QPE will be absolute requirements to achieve dramatic quantum advantage.

The number of nines needed will depend on the number of qubits being used in a single QFT or QPE. I personally haven’t tried to work out a formula, but I suspect that the minimum number of nines is likely to be at least four or five — 99.99% or 99.999%. Maybe some applications could get by with three nines — 99.9%, but it’s very unlikely that any production-scale application could get by with only two nines — 99% for QFT or QPE.

For much larger numbers of qubits in a single QFT or QPE, six to eight nines may be required — 99.9999% to 99.999999%.

Whether near-perfect qubits could ever deliver the nine to twelve nines of qubit fidelity that some uses of QFT or QPE might require is rather questionable. For anything beyond six nines, full, true quantum error correction (QEC) with logical qubits is likely to be required. Physical qubits with three to five nines of qubit fidelity are likely to be sufficient to enable QEC even if insufficient to enable QFT and QPE without QEC.

These are all very rough, speculative estimates — as I said at the outset, much more research is needed. And without that research, along with research to achieve much higher nines of qubit fidelity, dramatic quantum advantage will likely remain elusive for the indefinite future.

Relevance of qubit fidelity to achieving dramatic quantum advantage

The real bottom line for qubit fidelity is whether it is sufficient to enable a quantum computer to achieve a dramatic quantum advantage over classical computing.

Qubit fidelity will need to be sufficient to reach these milestones of quantum computing in order to achieve dramatic quantum advantage:

  1. The ENIAC Moment. The first significant production-scale application with a dramatic quantum advantage. But super-elite technical staff will be required to cope with the technical challenges.
  2. The FORTRAN Moment. Sufficient to enable quantum error correction (QEC) and logical qubits for production-scale applications. Advanced hardware and sophisticated algorithm libraries will enable non-elite technical staff to make dramatic progress and easily achieve dramatic quantum advantage for a wide range of applications.
  3. Quantum Fourier transform (QFT) and quantum phase estimation (QPE) for production-scale applications. Many applications will need QFT and QPE to achieve sufficient accuracy of results — and to achieve dramatic quantum advantage.

Impact of qubit fidelity on shot count (circuit repetitions)

One of the key parameters for execution of a quantum circuit is shot count or circuit repetitions, which is the number of times the execution of a quantum circuit must be repeated. The application can then perform a statistical analysis of the distribution of the quantum results, in order to determine which particular result is the most likely result, the so-called expected value.

Repetition of the circuit is needed for two reasons:

  1. Low qubit fidelity. Errors which affect and corrupt the results.
  2. Probabilistic nature of most interesting quantum computations. Even if qubits and gates were ideal and perfect, superposition causes probabilistic results.

This topic is discussed in great detail in this paper:

The only point here is how different levels of qubit fidelity will impact how many circuit repetitions will be necessary.

  1. Low qubit fidelity (few nines). Higher shot count needed.
  2. High qubit fidelity (more nines). Lower shot count needed.
  3. Very high qubit fidelity (many nines). Few circuit repetitions needed.
  4. Perfect qubits (logical qubits). Only a single execution of a circuit is needed.

But those are only the repetitions needed due to qubit fidelity. Additional circuit repetitions may be needed depending on the probabilities associated with the possible valid outcomes of an error-free execution of the circuit.

In short, shot count has two factors, one due to qubit fidelity and the other due to the probabilistic nature of the circuit even with perfect (logical) qubits.

At present, there is no magic, one-size-fits-all formula to calculate shot count based on qubit fidelity (nines). Each application category or application will have its own rules of thumb for mapping qubit fidelity to shot count.

The best I can do here now is to raise the issue and indicate that algorithm designers and application developers should be aware of the issue and plan to spend some significant amount of time planning for and experimenting with shot count.

A first step would be to assess how close to perfection qubits would have to be to have such high confidence in the measured results of the quantum circuit that only a single execution would be needed, or at most a handful of repetitions just to double-check. If that many nines of qubit fidelity are not available, then multiplying by a factor of ten for each nine of qubit fidelity which is not available might give a good ballpark estimate of how many shots will be needed. Whether it should be a full factor of ten for each missing nine or a multiple of ten will depend on the algorithm and the application.

For example, if it is estimated that six nines are needed (99.9999% fidelity) but the quantum computer provides only two nines (99% fidelity), then four nines of fidelity are missing, and four factors of ten (10⁴, or 10,000) are needed for the shot count just to compensate for low qubit fidelity. Again, whether the base factor is ten, five, twenty, or something else will depend on the algorithm and application.
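To make that rule of thumb concrete, here is a minimal sketch in Python; the function name and the default factor of ten are my own illustrative assumptions, not an established formula:

```python
def estimated_shots(needed_nines: float, available_nines: float,
                    base_shots: int = 1, factor: float = 10.0) -> int:
    """Ballpark shot count: multiply by `factor` for each missing nine
    of qubit fidelity. The right factor is algorithm-dependent; ten is
    only a starting guess."""
    missing_nines = max(0.0, needed_nines - available_nines)
    return int(base_shots * factor ** missing_nines)

# Six nines needed, two nines available: 10**4 = 10,000 shots.
print(estimated_shots(6, 2))
# Fractional nines work too: 2.5 missing nines gives a factor of ~316.
print(estimated_shots(4.5, 2))
```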

Quantum error correction requires less qubit fidelity, but it’s a tradeoff with capacity due to limited numbers of physical qubits

Part of the appeal of quantum error correction (QEC) is that it is theoretically possible to construct perfect logical qubits using physical qubits which are of relatively low fidelity. But there is a tradeoff between lower physical qubit fidelity and logical qubit capacity since it will be quite some time before quantum computers have more than fairly limited capacities of physical qubits, let alone an interesting capacity of logical qubits.

To put it simply, lower qubit fidelity means more physical qubits per logical qubit, but more physical qubits per logical qubit means fewer total logical qubits for a given limit of total physical qubits.

Initial experimental logical qubits are likely to have a fairly high number of physical qubits per logical qubit due to the relatively low fidelity of physical qubits. This means that initial experimental quantum computers with logical qubits are likely to have a very low number of logical qubits. Even getting to 5 or 8 or 12 or 16 logical qubits will be a monumental undertaking. With 65 physical qubits per logical qubit, 5, 8, 12, and 16 logical qubits would require 325, 520, 780, and 1,040 physical qubits, respectively. 128 logical qubits would require 8,320 physical qubits.
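The capacity arithmetic above is easy to reproduce; here is a minimal sketch using the same illustrative 65:1 overhead (real overheads will vary with physical qubit fidelity and the error correction code used):

```python
PHYSICAL_PER_LOGICAL = 65  # illustrative overhead from the text, not a fixed number

def physical_qubits_needed(logical_qubits: int) -> int:
    """Physical qubits required for a given number of logical qubits."""
    return logical_qubits * PHYSICAL_PER_LOGICAL

def logical_qubit_capacity(total_physical_qubits: int) -> int:
    """Logical qubits obtainable from a given physical qubit budget."""
    return total_physical_qubits // PHYSICAL_PER_LOGICAL

for n in (5, 8, 12, 16, 128):
    print(n, "logical ->", physical_qubits_needed(n), "physical")
print("1,000 physical ->", logical_qubit_capacity(1000), "logical")  # 15
```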

As hardware research and engineering progress, qubit fidelity will increase, as will the capacity of physical qubits. QEC will get a double benefit from that hardware advancement: fewer physical qubits will be needed for each logical qubit due to the higher fidelity of the physical qubits, and logical qubit capacity will grow substantially, since fewer physical qubits are needed per logical qubit just as substantially more physical qubits become available.

It will be interesting to see how these fidelity and capacity trends evolve as quantum hardware advances. Improvements in qubit fidelity may slow dramatically after easy early gains are achieved, but even relatively small gains in qubit fidelity provide a multiplier effect on top of total physical qubit capacity gains.

In summary, yes, logical qubits can be constructed using low-fidelity physical qubits, but you won’t be able to get a useful capacity of logical qubits — sufficient to achieve dramatic quantum advantage — for the foreseeable future.

Regular calibration needed for qubit fidelity

Unfortunately, you can’t just assemble a quantum computer, turn on the power, and, presto, qubits magically have the expected fidelity. Even under normal, optimal operation, qubit fidelity can drift or fluctuate over the course of a day. This necessitates periodic calibration testing and adjustment to assure that qubits are able to achieve their best fidelity.

How often should calibration occur? Great question, but there is no definitive answer. My recollection from a couple of years ago is that IBM was calibrating their machines twice a day.

I suspect that it will all depend on the specific hardware and technology being used. Some possibilities:

  1. Once a day.
  2. Twice a day.
  3. Every eight hours.
  4. Every six hours.
  5. Every four hours.
  6. Every two hours.
  7. Every hour. Probably too frequent, especially if calibration is expensive.
  8. Set a threshold for results of a test application, and recalibrate whenever a regular, frequent run of that test application (hourly?) fails to deliver correct results some percentage of the time over some reasonable number of circuit repetitions. A sketch of this policy appears after this list.
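Here is a minimal sketch of that last, threshold-based policy; the threshold, shot count, and names are illustrative assumptions, not any vendor’s actual API:

```python
SUCCESS_THRESHOLD = 0.95  # assumed acceptable fraction of correct results
SHOTS = 1000              # assumed repetitions of the test circuit

def needs_recalibration(run_test_shot) -> bool:
    """Run a known-answer test circuit SHOTS times; flag recalibration
    if the fraction of correct results falls below the threshold.
    `run_test_shot` is a callable returning True when a single shot
    yields the known-correct result."""
    correct = sum(1 for _ in range(SHOTS) if run_test_shot())
    return correct / SHOTS < SUCCESS_THRESHOLD
```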

Impact of coherence relaxation curve on qubit fidelity for deeper circuits

Coherence is not all-or-nothing: a qubit does not maintain perfect coherence until time expires and then fall off a cliff. Rather, coherence decays along a curve, degrading gradually until it is so diminished that it is not worth continuing to execute the circuit. Exactly how far out on that curve you can go before gate execution becomes problematic is unclear, and will vary across qubit technologies, implementations, algorithms, and applications.

This informal paper won’t go into the gory details of energy relaxation and dephasing time. The only important issue here is that the coherence of the qubit quantum state will decay over time, and that somehow that decay should be taken into account when measuring or estimating the qubit fidelity of a quantum computer.

Qubit coherence is usually characterized by three measurements:

  1. T1 — energy relaxation time. The time constant for a qubit in the 1 state to decay toward the 0 state.
  2. T2 — dephasing time, measured using the Hahn echo experiment. The time constant for a qubit’s phase to become randomized.
  3. T2* — dephasing time, measured using the Ramsey experiment. Similar to T2, but it also captures slow, low-frequency noise, so it is typically shorter than T2.

By the usual exponential-decay convention, T1, T2, and T2* are 1/e time constants: at those times, only about 37% (1/e) of the initial population or phase coherence remains, implying an error rate of roughly 0.63. Such a high error rate is unlikely to be acceptable for a deep production-scale quantum circuit, implying that circuit designers and application developers need to endeavor to keep their total circuit execution time well short of T1, T2, and T2*.

How much shorter than T1, T2, and T2*? Unclear and unknown, and it will vary from algorithm to algorithm and application to application.

The decay is indeed a curve, not a straight line. And the shape and slope of that curve will vary from machine to machine.
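For intuition, here is a minimal sketch assuming a simple exponential decay model, which real relaxation curves only approximate:

```python
import math

def remaining_coherence(elapsed_us: float, time_constant_us: float) -> float:
    """Fraction of coherence remaining after `elapsed_us` microseconds,
    assuming pure exponential decay with the given time constant
    (T1, T2, or T2*)."""
    return math.exp(-elapsed_us / time_constant_us)

print(remaining_coherence(100, 100))  # at t = T1: ~0.37 (1/e)
print(remaining_coherence(10, 100))   # at t = T1/10: ~0.90
```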

The best that the designers and implementers of a machine can do is to attempt to minimize decoherence by maximizing coherence, and to clearly document the coherence relaxation curves for their hardware. Somebody, at some level, besides the machine’s designers and implementers, will have to decide how far out on those curves, even well short of the official T1, T2, and T2* coherence times, a given algorithm or application can afford to go before net effective qubit fidelity becomes too low to deliver correct results.

Unfortunately, this makes it difficult to derive a single measure of qubit fidelity, since fidelity will be dependent on circuit depth. Very shallow circuits will come much closer to the maximum qubit fidelity of the machine. Circuits of moderate depth will have a somewhat lower qubit fidelity. And circuits of much greater depth will have significantly diminished qubit fidelity.

I suspect that the best approach to documenting this will be to pick a standard, nominal circuit depth (10? 20?), measure qubit fidelity at that depth, and then publish a table of percentages for discounting qubit fidelity as circuit depth increases. Even if the curve is not linear overall, short sections of it are likely to be close to linear and amenable to linear interpolation, so a handful of measured circuit depths can provide most of the information an algorithm designer or application developer needs to get a decent handle on qubit fidelity at a particular depth.
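For example, a vendor might publish measured composite fidelity at a handful of benchmark depths and let users interpolate between them; the numbers here are purely hypothetical:

```python
import numpy as np

# Hypothetical measured composite qubit fidelity at benchmark circuit depths.
depths = [10, 20, 50, 100]
fidelities = [0.995, 0.99, 0.97, 0.94]

# Linear interpolation between measured points for an intermediate depth.
print(np.interp(35, depths, fidelities))  # 0.98
```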

Alternatively, or in addition, provide a table of nines of qubit fidelity along with the corresponding maximum circuit depth that can maintain each number of nines for the time needed to execute to that depth.
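Under the simplifying assumption of independent, uniform per-gate errors, that maximum depth can be estimated directly; this is a sketch, not a vendor formula:

```python
import math

def max_depth_for_nines(per_gate_nines: float, target_nines: float) -> int:
    """Largest depth d such that f**d >= 1 - 10**-target_nines, where
    f is the per-gate fidelity implied by per_gate_nines. Assumes
    independent, uniform per-gate errors, which is a simplification."""
    f = 1.0 - 10.0 ** -per_gate_nines       # per-gate fidelity
    target = 1.0 - 10.0 ** -target_nines    # required overall fidelity
    return int(math.log(target) / math.log(f))

# Four-nines gates can hold two nines overall for roughly 100 gates.
print(max_depth_for_nines(4.0, 2.0))
```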

One caveat to all of this: this discussion focuses on uncorrected errors. The game changes completely when utilizing quantum error correction (QEC) and logical qubits. In theory, the difficulties with coherence relaxation go away entirely; in practice, that remains an open question, since we don’t yet have implementations of full quantum error correction.

Configure simulators to match nines of real machines

It would be very helpful if classical quantum simulators could be easily configured with noise models that closely match the nines of qubit fidelity of real quantum computers. This would allow classical quantum simulators to be used as debugging aids even when real quantum computers are available.

Also, this would allow testing of algorithms in advance of the construction of proposed new quantum computers with their expected nines of qubit fidelity.

It would also be helpful to be able to modestly adjust the nines of the simulator configuration to determine the impact on algorithm results. First, to reduce the nines to see what less-capable machines an algorithm might successfully be run on. Second, to increase the nines to see the impact on algorithm result accuracy, to assess whether a failing algorithm (poor result accuracy) could be run successfully on a modestly more capable machine if and when it should become available.
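As one possible approach, here is a minimal sketch using Qiskit Aer’s noise-model facilities. The mapping from nines to depolarizing and readout error rates is my own simplifying assumption (real device noise is far richer), and the import path varies by Qiskit version:

```python
from qiskit_aer.noise import NoiseModel, ReadoutError, depolarizing_error

def noise_model_for_nines(n1q: float, n2q: float, nmeas: float) -> NoiseModel:
    """Build a simple noise model whose error rates match the given
    nines for single-qubit gates, two-qubit gates, and measurement."""
    model = NoiseModel()
    model.add_all_qubit_quantum_error(
        depolarizing_error(10.0 ** -n1q, 1), ["x", "sx", "h"])
    model.add_all_qubit_quantum_error(
        depolarizing_error(10.0 ** -n2q, 2), ["cx"])
    p = 10.0 ** -nmeas
    model.add_all_qubit_readout_error(ReadoutError([[1 - p, p], [p, 1 - p]]))
    return model

# Three nines on single-qubit gates, two nines on two-qubit gates and readout.
model = noise_model_for_nines(3.0, 2.0, 2.0)
```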

Where should vendors report error rates and qubit fidelity?

In my own model, each machine produced by a vendor would have two key documents:

  1. Principles of operation. Everything algorithm designers and application developers need to know about how a machine works so that they can develop functioning algorithms and applications. But not performance or implementation details.
  2. Implementation specification. All details about implementation of the machine, particularly limitations and performance.

The principles of operation may cover a family of machines organized around common principles and common architectural elements, while the implementation specification is likely to be machine-specific.

Generally, details about error rates probably belong in the implementation specification.

Qubit fidelity is a gray area. The specific details of qubit fidelity certainly belong in the implementation specification. But some sense of the rough overall qubit fidelity, such as the average nines for qubits and gates, probably needs to be stated in the principles of operation as well, so that algorithm designers and application developers have a sense of how limited or capable the machine is: not the specific details, but at least a general statement of how noisy or how near to perfect the qubits and gates are.

In my view, a single composite measure of nines of qubit fidelity belongs in the principles of operation. Detailed specifics of the nines of qubits and gates, including per-qubit nines, belong in the implementation specification.

There are no standards at present for documenting either the principles of operation or the implementation specification of quantum computers.

I have proposed a framework for documenting principles of operation:

Some brief comments on implementation specifications are included in that proposal.

As an example of current practice, Google recently published a “datasheet” for their Sycamore Weber quantum processor which includes qubit and gate error rates:

The Performance section of that datasheet provides quite a few error rates, including per-qubit error rates.

Nines for quantum computer system availability

It’s not related to qubit fidelity, but the notion of quantum computer system availability would seem to make sense, comparable to system availability for classical computer systems.

We could speak of the uptime of a quantum computer system, such as a server in the cloud. Service without interruption 99.999% of the time would be five nines of availability.
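The arithmetic is the same as for classical system availability; a trivial sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(nines: float) -> float:
    """Allowed downtime per year for a given number of nines of uptime."""
    return MINUTES_PER_YEAR * 10.0 ** -nines

print(downtime_minutes_per_year(5))  # five nines: ~5.3 minutes per year
```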

For more on system availability for classical computer systems:

Nines of reliability

Nines of reliability in the context of quantum computing is simply a reference to either:

  1. Nines of qubit fidelity.
  2. Nines of quantum computer system availability.

Presume the former, especially when discussing algorithms and applications, unless context seems to be referring to the reliability or availability of the overall quantum computer system.

Need a roadmap for nines

It sure would be nice to have a roadmap of milestones for achieving each increment of nines for qubit fidelity, but that’s not practical at this time.

Instead, I would simply request that vendors include milestones for increments of nines in their own roadmaps.

Just to be clear, I’m referring to composite qubit fidelity, some sort of weighted average across single-qubit gates, two-qubit gates, and measurements, not just the easy single-qubit gates. Single-qubit gates are much easier, but show only a small portion of the overall picture of qubit fidelity. A sketch of such a weighted average follows.
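Here is a minimal sketch of one way to compose such a metric: average the error rates, weighted by an assumed gate mix, and convert back to nines. The weights are illustrative assumptions; the right mix depends on the circuit:

```python
import math

def composite_nines(n1q: float, n2q: float, nmeas: float,
                    weights=(0.25, 0.55, 0.20)) -> float:
    """Weighted-average error rate across single-qubit gates, two-qubit
    gates, and measurement, expressed as nines."""
    error_rates = [10.0 ** -n for n in (n1q, n2q, nmeas)]
    avg_error = sum(w * e for w, e in zip(weights, error_rates))
    return -math.log10(avg_error)

# Four nines (1Q), 2.5 nines (2Q), two nines (measurement) -> ~2.42 composite.
print(round(composite_nines(4.0, 2.5, 2.0), 2))
```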

The important milestones:

  1. 1 nine — 90% — we may actually be there.
  2. 1.5 nines — 95% — possibly within a year.
  3. 2 nines — 99% — probably reachable within a year or two.
  4. 2.5 nines — 99.5% — maybe 2–3 years.
  5. 3 nines — 99.9% — more of a 3 to 4-year goal.
  6. 3.5 nines — 99.95% — 4-year goal.
  7. 4 nines — 99.99% — more of a 4 to 5-year goal.
  8. 5 nines — 99.999% — a 5-year goal.
  9. 6 nines — 99.9999% — a 5 to 7-year goal.
  10. 8 nines — a pipe dream for now.
  11. 9 nines — ditto.
  12. 12 nines — will this be possible? Maybe, but some serious redesign would be needed.

IBM’s Quantum Volume versus qubit fidelity

The IBM Quantum Volume metric measures overall performance for the largest square circuit which meets some threshold of accuracy of the final result, while the focus of this paper is simply qubit fidelity itself. These are two distinct metrics.

Granted, qubit fidelity does impact overall performance, which is part of why it matters, but qubit fidelity is a much more focused and specific metric, while Quantum Volume targets a larger, overall system performance goal.

Quantum Volume is also limited to strictly square circuits, where the count of qubits equals the circuit depth, not allowing for algorithms which may be deeper but use fewer qubits, or shallower but use more qubits. Measuring qubit fidelity separately from qubit count allows for greater diversity of circuit depth.

Also, Quantum Volume is only valid up to about 50 qubits, and possibly fewer (maybe 45, 40, 38, 35, or even less), since it requires a full classical simulation of the quantum circuit being tested, while the qubit fidelity metric proposed here does not require any simulation.

For more on the 50-qubit limitation, read this paper:

Separating qubit fidelity from Quantum Volume enables greater flexibility in characterizing performance

The single-metric approach of IBM’s Quantum Volume masks much of the underlying complexity of adequately characterizing a wide variety of quantum algorithms. Providing the user with both qubit count and qubit fidelity enables the user to characterize a wider variety of algorithm topologies, especially those which are much shallower or much deeper than strictly square circuits.

Algorithm designers have a wide variety of techniques and tricks that they can use for laying out the qubit topologies of their algorithms, including layouts that can minimize the need for SWAP networks.

In some cases it may be useful to separately characterize qubit measurement fidelity, or to keep single-qubit, two-qubit, and qubit measurement fidelity as distinct metrics. Hopefully, for many algorithms the three distinct metrics can be composed into a single, composite qubit fidelity metric, but having them available separately as well provides much greater flexibility than the single-metric approach of IBM’s Quantum Volume metric.

Having SWAP network fidelity as a separate metric also enables greater flexibility for characterizing the performance of algorithms where the algorithm designer is able to pay greater attention to carefully laying out the qubit topology to minimize SWAPs needed for connectivity.

Summary and conclusions

  1. Quantum computers are very error prone.
  2. This isn’t going to change much anytime soon.
  3. Near-perfect and logical qubits are coming, but not so soon.
  4. There are many sources of errors and different types of errors.
  5. Many metrics of fidelity (error rates) are needed.
  6. All of these metrics should be clearly documented.
  7. It is difficult to come up with a magic formula to reduce all of these metrics into a single, composite metric of qubit fidelity.
  8. But there is value in reducing all of the metrics into a single metric.
  9. Potential for a benchmark test to deduce effective average qubit fidelity.
  10. SWAP networks to overcome connectivity limitations can greatly complicate modeling of average gate fidelity.
  11. Coherence relaxation curves drive qubit fidelity for deeper circuits.
  12. Different applications may have a different focus on which single metric matters most.
  13. Nines are a convenient and easy to use summary of qubit fidelity.
  14. Vendors need to do a much better job of documenting the qubit fidelity of both their current hardware and each milestone in their roadmaps for future hardware.
  15. Algorithm designers and application developers should endeavor to characterize the qubit fidelity requirements of their algorithms and applications.
  16. IBM’s Quantum Volume metric is insufficient — it doesn’t expose qubit fidelity as a separate metric which is a critical need for algorithm design and application development.
  17. Much more research is required, both to improve the fidelity of future qubit hardware and to better characterize the qubit hardware we have today and in the near future.
  18. Much more research is needed for tools and techniques for algorithm designers and application developers as they grapple with qubit fidelity issues, including for advanced techniques such as Quantum Fourier transform (QFT) and quantum phase estimation (QPE).
