# What Is a Near-perfect Qubit?

*Near-perfect qubit* is a vague and ambiguous but useful term in quantum computing for describing *qubit fidelity*. It is somewhere between a *noisy qubit* and a *perfect logical qubit* or *fault-tolerant qubit*: no longer so noisy, but still not quite as perfect as a qubit under full quantum error correction (QEC). In short, a near-perfect qubit is a poor man’s perfect logical qubit. This informal paper defines and explores the concept of near-perfect qubits.

Near-perfect qubit is not a standard or widely-used term, but is instead my own *proposal*. I’m not claiming to have invented the term or the concept, but I am certainly promoting the use of the term.

A fair amount of the material in this paper has been excerpted, adapted, and expanded from my longer paper on fault-tolerant quantum computing, quantum error correction (QEC), and logical qubits:

**Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits** - https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71

**Topics discussed in this informal paper:**

- In a nutshell — noisy qubits, perfect logical qubits, and near-perfect qubits
- Noisy qubits — in the short term
- Perfect logical qubits — for the long run
- Near-perfect qubits — for the medium term
- Near-perfect qubits — to accelerate the path to perfect logical qubits
- Logical qubits require near-perfect qubits
- Achieving near-perfect qubits would obviate some not insignificant fraction of the need for full quantum error correction
- Near-perfect qubits will be good enough — true fault tolerance will not be needed, generally
- Perfect logical qubits are the preferred solution, but they simply aren’t available now or any time soon
- Near-perfect qubits are more practical — quantum error correction is too hard
- There are two distinct purposes for near-perfect qubits
- Near-perfect qubits are of value in their own right, even without quantum error correction
- So, it’s a win-win to keep pushing towards more-perfect (near-perfect) qubits
- Very limited initial capacities of logical qubits will greatly limit their use
- Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits
- Even with logical qubits, larger applications may need to operate directly on near-perfect physical qubits
- Even with near-perfect qubits, the nuances of subtle remaining errors may make it a game of Russian Roulette
- Some algorithms and applications will simply need the clarity and certainty of perfect logical qubits
- My own preference is for near-perfect qubits over overly-complex quantum error correction
- It’s a real race — quantum error correction vs. near-perfect qubits — the outcome is unclear
- Are all logical qubits perfect logical qubits? Yes!
- Error rates and qubit fidelity
- Nines of qubit fidelity
- Probabilities and cumulative effects, shot counts and circuit repetitions
- Terms and definitions
- Informal levels of qubit fidelity
- How close to perfect is a near-perfect qubit?
- Near-perfect logical qubits and near-perfect physical qubits
- Error-free qubits — either perfect logical qubits or near-perfect qubits
- Effectively error-free qubits — near-perfect qubits with a very high qubit fidelity (very low error rate)
- Fault-free qubits — synonym for error-free qubits — either perfect logical qubits or near-perfect qubits
- Stable qubits — synonym for error-free qubits — either perfect logical qubits or near-perfect qubits
- NISQ devices (NISQ quantum computers, NISQ quantum processors)
- Actually, most quantum computers have been NSSQ devices — small-scale, not NISQ intermediate-scale
- NISQ has always been just a stepping stone, not a platform for real applications
- Beyond NISQ — NPISQ and FTISQ
- NPSSQ and NPLSQ for near-perfect small-scale and large-scale quantum devices
- Post-NISQ quantum computers — near-perfect qubits or perfect logical qubits
- Post-noisy quantum computers
- Near-perfect qubits are an essential requirement for achieving practical quantum computing
- Background for how I arrived at near-perfect qubits
- Prior art?
- How noisy is noisy for a noisy qubit?
- How close to perfect is a near-perfect qubit?
- Are three nines good enough for a near-perfect qubit?
- Are 3.5 nines good enough for a near-perfect qubit?
- Are six nines good enough for a perfect logical qubit?
- Doesn’t the tiny residual error of even the best error-corrected qubits make them near-perfect qubits?
- Even near-perfect qubits crumble near the limits of coherence times
- Caveat: Near-perfect qubits are still limited by coherence time
- Even with perfect logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits
- Near-perfect qubits are likely needed to achieve quantum advantage
- Whether quantum advantage can be achieved with only near-perfect qubits remains an open question
- Near-perfect physical qubits may be sufficient to achieve The ENIAC Moment for niche applications
- Likely need logical qubits to achieve The FORTRAN Moment for widespread adoption of quantum computing
- Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction
- Do we really need quantum error correction if we can achieve near-perfect qubits? Probably, eventually
- Maybe we don’t really need quantum error correction if we can engineer and mass produce near-perfect qubits
- Unclear if non-elite staff can solve production-scale problems with near-perfect qubits
- Prospects for near-perfect qubits
- When can we expect near-perfect qubits?
- Vendors need to publish roadmaps for near-perfect qubits
- Nobody expects near-perfect qubits imminently, so we should maintain a focus on pursuing quantum error correction
- Which is likely to come first, full quantum error correction or near-perfect qubits?
- Are trapped-ion qubits closer to near-perfect? Not quite
- Are neutral-atom qubits in the same ballpark as trapped-ion qubits for qubit fidelity? Unclear
- Real applications should remain based on simulation until near-perfect qubits are available
- How close to perfect must near-perfect qubits be to enable perfect logical qubits?
- How many nines will become the gold standard for near-perfect qubits to enable logical qubits? It remains to be seen
- Let application developers decide between near-perfect qubits and perfect logical qubits
- Near-perfect qubits are a poor man’s perfect logical qubits, but for many applications that will be good enough
- Possibility of a Quantum Winter if we don’t get near-perfect qubits within two years
- More on perfect logical qubits
- Summary and conclusions

# In a nutshell — noisy qubits, perfect logical qubits, and near-perfect qubits

Grossly over-simplifying, but capturing the essence of the distinctions, *qubit fidelity* is a spectrum from *terrible* to *great*, with three interesting broad general buckets for the spectrum:

- **Noisy qubits.** Generally no more than a few dozen quantum operations can be performed before an error occurs. At best, somewhere between a hundred and a thousand operations before an error.
- **Near-perfect qubits.** Thousands of operations can be performed before an error occurs. At best, up to ten thousand operations before an error occurs.
- **Perfect logical qubits.** No limit to the number of operations which can be performed — without *any* errors. At least a million operations without error. Possibly many millions, a billion, even a trillion, or more.

Near-perfect qubits are an essential requirement for achieving *practical quantum computing*, both necessary and sufficient. Full quantum error correction (QEC) is not needed or available in the relatively near and medium-term, and noisy NISQ qubits won’t cut it.

Near-perfect qubit is not a standard or widely-used term, but is instead my own *proposal*. I’m not claiming to have invented the term or the concept, but I am certainly promoting the use of the term.

For a discussion of how I arrived at my focus on near-perfect qubits, see the section *Background for how I arrived at near-perfect qubits*.

# Noisy qubits — in the short term

Noisy qubits suck. We only use them because we have no other choice. Actually, we do have a choice — classical quantum simulation — but when it comes to real quantum hardware, we have no choice, at present.

The qubits of current and near-term quantum computers are inherently *noisy*, meaning that you can’t perform many operations on them before you start getting errors.

That’s what everybody is stuck with in the near term.

Or, you can use a simulator to get either ideal, perfect qubits, or use whatever noise model you wish.

# Perfect logical qubits — for the long run

Physicists and computer scientists have a proposed solution called *quantum error correction* (QEC) which completely eliminates the errors, but… implementation of that proposal is going to take a number of years — three to five years or even longer.

It’s very complex and requires a large number of qubits — many physical qubits for each perfect logical qubit, at a time when qubits are a very scarce commodity.

At least dozens of noisy physical qubits are harnessed to implement each fully-corrected, perfect *logical qubit*.

For greater detail on logical qubits, see my longer paper on fault-tolerant quantum computing, quantum error correction (QEC), and logical qubits:

**Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits** - https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71

# Near-perfect qubits — for the medium term

*Near-perfect qubits* are an admittedly *stopgap measure* to achieve a large fraction of the benefit of full quantum error correction at a small fraction of the cost.

Qubit fidelity continues to improve as each year ticks by.

Physical qubits are never going to be absolutely perfect, but they are on track to be close enough within another one to three years to permit a much larger number of quantum operations to be performed before any errors might occur.

# Near-perfect qubits — to accelerate the path to perfect logical qubits

As an added benefit, as the qubit fidelity of near-perfect qubits continues to rise, that will reduce the number of physical qubits required to implement each logical qubit under quantum error correction. So, efforts to pursue near-perfect qubits will not only not detract from pursuit of quantum error correction, but will enhance and accelerate progress towards full, automatic, and transparent quantum error correction and the resultant perfect logical qubits.
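To see why higher qubit fidelity shrinks the error-correction overhead, here is a rough back-of-the-envelope sketch using the widely cited surface-code scaling rule. The ~1% threshold value and the roughly 2·d² physical qubits per logical qubit are illustrative assumptions, not figures from any particular vendor:

```python
import math

def surface_code_overhead(p_phys: float, p_target: float, p_th: float = 1e-2):
    """Rough surface-code distance d and physical-qubit count per logical qubit.

    Assumes the common rule of thumb p_logical ~ (p_phys / p_th) ** ((d + 1) / 2),
    with an assumed ~1% threshold p_th and roughly 2 * d**2 physical qubits per
    logical qubit. Illustrative only; real overheads vary by code and decoder.
    """
    # Smallest d satisfying (p_phys / p_th) ** ((d + 1) / 2) <= p_target
    d = math.ceil(2 * math.log(p_target) / math.log(p_phys / p_th) - 1 - 1e-9)
    if d % 2 == 0:
        d += 1  # code distance must be odd
    return d, 2 * d * d

# Better physical qubits (error rate 1e-3 -> 1e-4) shrink the overhead sharply:
print(surface_code_overhead(1e-3, 1e-12))  # distance 23, ~1058 physical qubits
print(surface_code_overhead(1e-4, 1e-12))  # distance 11, ~242 physical qubits
```

Under these assumptions, one extra "nine" of physical fidelity cuts the physical-qubit cost of each logical qubit by roughly a factor of four.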

Granted, the pace of improvements of raw physical qubits can be painfully slow, but the future of quantum error correction is predicated on near-perfect physical qubits anyway. And sometimes, occasionally, a quantum leap of progress occurs.

# Logical qubits require near-perfect qubits

Just to reemphasize the point from the previous section, logical qubits do effectively require near-perfect qubits.

Sure, technically, theoretically, you could produce logical qubits using only *noisy physical qubits*, but there would be three severe downsides to doing so:

- **More physical qubits would be needed for each logical qubit.** Too many.
- **Given that physical qubits will continue to be relatively scarce, this means that for a given number of physical qubits you would get fewer logical qubits.** Too few.
- **The residual error even after full quantum error correction (QEC) would be greater.** Too great.

In short, a lose-lose-lose proposition. Not very appealing.

So, when it comes to pursuing logical qubits using quantum error correction, don’t bother with noisy physical qubits — focus on near-perfect qubits.

# Achieving near-perfect qubits would obviate some not insignificant fraction of the need for full quantum error correction

Some algorithms and applications will inherently require full quantum error correction (QEC), but achieving near-perfect qubits would obviate some not insignificant fraction of the need for full quantum error correction. More-perfect qubits are a win-win — better for near-term quantum computers and result in more efficient quantum error correction.

Put simply, once near-perfect qubits are readily available, most algorithms and applications won’t actually need true fault-*tolerant* qubits since there won’t be many or maybe even any faults to tolerate.

# Near-perfect qubits will be good enough — true fault tolerance will not be needed, generally

As a gross generality, it will likely be true that near-perfect qubits will be good enough, and that true fault tolerance will not be needed. But, as with all generalities, there will likely be exceptions.

# Perfect logical qubits are the preferred solution, but they simply aren’t available now or any time soon

The life of a quantum algorithm designer or quantum application developer will be much easier using perfect logical qubits, but they simply aren’t available now or any time soon. It may be three to five or even seven years before perfect logical qubits are readily available in any reasonable quantity.

# Near-perfect qubits are more practical — quantum error correction is too hard

Achieving perfect logical qubits using quantum error correction (QEC) will be a real watershed moment for quantum computing, opening up the floodgates for development of quantum algorithms and quantum applications by non-elite technical staff, but that’s still an event way off in the distant future, far over the near-term horizon. Quantum error correction is simply way too hard. But in the meantime, incrementally advancing towards greater degrees of near-perfect qubits will soon be within reach.

Three to 3.5 nines of qubit fidelity should be available over the coming year or two, approaching four nines within the next two years after that.

# There are two distinct purposes for near-perfect qubits

Just to summarize the previous sections, there are two distinct purposes for near-perfect qubits:

- **To enable quantum error correction (QEC) for logical qubits.** In the long run.
- **To enable complex applications using raw physical qubits until quantum error correction becomes available.** In the near to medium term.

Those two distinct purposes overlap and the latter facilitates the former, but they will continue to remain distinct for at least some more advanced algorithms and applications even once quantum error correction does become available.

# Near-perfect qubits are of value in their own right, even without quantum error correction

Since not all algorithms and applications will require the full perfection of perfect logical qubits, near-perfect qubits are of value in their own right, even without quantum error correction.

# So, it’s a win-win to keep pushing towards more-perfect (near-perfect) qubits

Since near-perfect qubits are usable as-is for some applications and also enable more-efficient quantum error correction, it’s a win-win to keep pushing towards more-perfect (near-perfect) qubits — even after quantum error correction is actually achieved.

# Very limited initial capacities of logical qubits will greatly limit their use

Logical qubits will greatly facilitate many applications, but very limited initial capacities of logical qubits will mean that any application needing a significant number of qubits will have to make do with physical qubits.

The good news is that the level of quality needed to enable logical qubits will assure that physical qubits will have near-perfect quality. Still, working with physical qubits will be limited to the most sophisticated, most elite quantum algorithm designers and quantum application developers.

# Even with logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits

Even once logical qubits do become available and in sufficient quantities, some applications may benefit from the higher performance of near-perfect physical qubits.

It will be a difficult tradeoff, but the availability of this tradeoff will facilitate a richer landscape of quantum algorithms and quantum applications.

# Even with logical qubits, larger applications may need to operate directly on near-perfect physical qubits

Even once logical qubits become commonplace, there may still be a need or desire for higher performance and larger applications which operate directly on near-perfect physical qubits, without the performance overhead or more limited capacity of logical qubits.

Again, it will be a difficult tradeoff, but one whose availability will facilitate a richer landscape of quantum algorithms and quantum applications.

# Even with near-perfect qubits, the nuances of subtle remaining errors may make it a game of Russian Roulette

There may well be projects which can achieve success with raw physical near-perfect qubits, but the nuances of subtle remaining errors may make it a game of Russian Roulette, with some teams succeeding spectacularly even while some teams fail spectacularly, and no way to know in advance which is more likely.

Logical qubits will eliminate this intense level of uncertainty and anxiety.

Alternatively, ever-higher levels of qubit fidelity for near-perfect qubits will tend to minimize these risks, though never quite eliminate them; at some stage the remaining residual error rate may be so small that almost nobody notices. But *almost* nobody is still not the same as absolutely nobody.

# Some algorithms and applications will simply need the clarity and certainty of perfect logical qubits

As close to perfect as near-perfect physical qubits might be, some quantum algorithms, quantum applications, quantum algorithm designers, quantum application developers, or even organizations will simply need the absolute clarity and absolute certainty of perfect logical qubits. Where *good enough* simply isn’t good enough.

There may be financial regulations, human life, or national security at risk. Arguments can be made, but maybe there are absolute red lines which cannot be crossed — or which must be crossed.

Or it may simply be that the people and organization are just *risk averse*. Whatever. In any case, perfect logical qubits will satisfy the need even if near-perfect qubits do not.

# My own preference is for near-perfect qubits over overly-complex quantum error correction

Personally, I prefer more simple system designs. Full quantum error correction is in fact more general and less problematic, but at a cost of being very complex. My own preference is for the simplicity and lower cost of near-perfect qubits over overly-complex quantum error correction.

Despite my current belief that full, automatic, and transparent quantum error correction is the only way to go in the long run and is coming sooner than many expect, I still prefer near-perfect qubits, both for the near and medium term before full quantum error correction becomes widely available, and even in the long run for very high-end applications which can significantly benefit from working with raw physical qubits.

# It’s a real race — quantum error correction vs. near-perfect qubits — the outcome is unclear

There’s absolutely no clarity which will happen first, near-perfect qubits sufficient for most quantum algorithms and quantum applications, or physical qubits with a low-enough error rate — and in sufficient quantity — to enable full quantum error correction sufficient for most quantum algorithms and quantum applications.

It’s a real race — quantum error correction vs. near-perfect qubits, and the outcome is unclear.

# Are all logical qubits perfect logical qubits? Yes!

There’s no real need to redundantly refer to *perfect logical qubits*. They really are simply *logical qubits*. I personally use the redundant term *perfect logical qubit* to emphasize the point that with logical qubits algorithm designers and application developers no longer have to worry about qubit errors.

# Error rates and qubit fidelity

An error rate is the fraction of operations which are likely to be in error for a given number of operations. Equivalently, its reciprocal is the number of operations you can expect to perform before an error is likely.

*Error rates* and *qubit fidelity* are commonly described as *nines of qubit fidelity* — the number of 9’s in the reliability of the qubits.

An error rate of 1% or one in a hundred is a reliability of 99% or *two nines*.

An error rate of 0.01% or one in ten thousand is a reliability of 99.99% or *four nines*.

Nines can be fractional as well, with 99.95% midway between three and four nines — 3.5 nines.

For more on nines of qubit fidelity, see the next section, *Nines of qubit fidelity*.

# Nines of qubit fidelity

*Nines of qubit fidelity* are a convenient form for expressing qubit reliability and error rates. Basically, count the nines in the reliability, with fractional nines as well:

- 90% = one nine. One error in ten operations.
- 95% = 1.5 nines. One error in twenty operations.
- 96% = 1.6 nines. One error in twenty-five operations.
- 98% = 1.8 nines. One error in fifty operations.
- 99% = two nines. One error in a hundred operations.
- 99.5% = 2.5 nines. One error in two hundred operations.
- 99.9% = three nines. One error in a thousand operations.
- 99.95% = 3.5 nines. One error in two thousand operations.
- 99.99% = four nines. One error in ten thousand operations.
- 99.999% = five nines. One error in a hundred thousand operations.
- 99.9999% = six nines. One error in a million operations.
- 99.9999999% = nine nines. One error in a billion operations.
- 99.9999999999% = twelve nines. One error in a trillion operations.
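The table above can be sanity-checked with a couple of lines of Python (a minimal sketch; `ops_before_error` is just an illustrative helper name):

```python
def ops_before_error(reliability_pct: float) -> int:
    """Expected number of operations per error at a given reliability (percent)."""
    error_rate = 1 - reliability_pct / 100
    return round(1 / error_rate)

print(ops_before_error(99.0))     # two nines: one error in 100 operations
print(ops_before_error(99.99))    # four nines: one error in 10,000 operations
print(ops_before_error(99.9999))  # six nines: one error in 1,000,000 operations
```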

For more details of *nines of qubit fidelity*, qubit reliability, and error rates, see my separate paper on the topic.

# Probabilities and cumulative effects, shot counts and circuit repetitions

No single quantum algorithm (circuit) is going to have millions or billions of gates, but these are probabilities of errors, so they also take into account how many times an application might repeat a given circuit before an error might occur. Many thousands or even millions of circuit repetitions are possible for a given circuit or collection of circuits used by an application.

As an example, suppose your quantum circuit had 50 gates and your quantum computer had an error rate of one in a hundred — two nines, 99% reliability. You might presume that all fifty gates could always be executed before hitting that one in a hundred error, but it’s a probability, so it could happen more frequently or less frequently on occasion.

Further, you may have a *shot count* or *circuit repetitions* of 1,000 or even 10,000, so that the total number of gates executed is 50,000 or 500,000. That error rate of 1% means you could see 500 or even 5,000 errors.
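The arithmetic in this example can be verified directly (a sketch using the 50-gate, 1% error rate, 10,000-shot numbers from the text):

```python
p_error = 0.01   # two nines: 1% per-gate error rate
gates = 50       # gates in the example circuit
shots = 10_000   # shot count (circuit repetitions)

# Chance that a single 50-gate shot runs completely error-free:
p_shot_clean = (1 - p_error) ** gates   # ~0.605

# Expected total errors across all shots:
expected_errors = shots * gates * p_error

print(round(p_shot_clean, 3), expected_errors)
```

Note that even at "only" 50 gates and two nines, roughly four shots in ten will contain at least one error.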

In theory, Shor’s factoring algorithm might have millions or even tens of millions of gates, but that’s not a typical quantum algorithm. And it may never be feasible, at least at the scale of factoring 4096-bit encryption keys.

For more on shot counts and circuit repetitions, see my paper:

**Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer** - https://jackkrupansky.medium.com/shots-and-circuit-repetitions-developing-the-expectation-value-for-results-from-a-quantum-computer-3d1f8eed398

# Terms and definitions

We’ve already discussed these terms informally in this paper, but here are their more formal definitions:

- **Noisy qubits.** Qubits which have relatively low fidelity. Errors are relatively common. Significant effort is needed to cope with or to mitigate the errors. Only the most elite quantum algorithm designers and quantum application developers can work well with noisy qubits. Essentially *all* current and near-term quantum computers. The qubits used in a *NISQ device* — noisy intermediate-scale quantum device. For more detail, see the *NISQ devices* section later in this paper.
- **Near-perfect qubits.** Qubits which are close enough to perfect for some or even most quantum algorithms and quantum applications. Little effort is needed to cope with or to mitigate the errors. Less-elite quantum algorithm designers and quantum application developers can work well with near-perfect qubits. Some or even most quantum algorithms and quantum applications can accept quantum results as if there were no errors. Not currently available, but expected within the next two to three years, sooner for some applications. Technically, this would no longer be a NISQ device but an *NPISQ device* — NP = near-perfect. For more detail, see the *Beyond NISQ — NPISQ and FTISQ* section later in this paper.
- **Perfect logical qubits.** Qubits which utilize quantum error correction (QEC) to achieve virtually perfect reliability. May not be absolutely perfect, but few if any quantum applications would be able to detect any errors. No longer requires elite skills for quantum algorithm design and quantum application development. Suitable for widespread adoption of quantum computing. This would no longer be a NISQ device, but an *FTISQ device* — FT = fault-tolerant.

Some additional, more specialized terms:

- **Corrected qubits.** A synonym for *perfect logical qubits*, to indicate that *quantum error correction* is in effect.
- **Stabilized qubits.** A synonym for *perfect logical qubits*, to indicate that *quantum error correction* is in effect.
- **Error-free qubits.** Either *perfect logical qubits* or *near-perfect qubits*, either suitable for execution of a reasonably deep and complex quantum circuit without any significant chance of encountering errors.
- **Fault-free qubits.** Synonym for *error-free qubits*.
- **Stable qubits.** Synonym for *error-free qubits*. They could be explicitly *stabilized* — under quantum error correction (*stabilized qubits*) — or actually be inherently *stable* due to being *near-perfect*.
- **Algorithmic qubits.** Term introduced by IonQ — “*we introduce Algorithmic Qubits (AQ), which is defined as the largest number of effectively perfect qubits you can deploy for a typical quantum program*.” Seems reasonably close to my notion of a *near-perfect qubit*.

General quantitative characterization:

- **Noisy qubits.** Reliability from 65% to 99%. Up to two nines.
- **Near-perfect qubits.** Reliability from 99.99% to 99.999%. Four to five nines.
- **Perfect logical qubits.** Reliability of 99.99999% or better. Minimum of seven nines.

The boundaries are rather ambiguous and subjective. For example:

- **Noisy qubits.** Could go as high as 99.9% or even 99.95%. Up to three or even 3.5 nines.
- **Near-perfect qubits.** Could go as low as 99.9% or 99.95% for some applications — three to 3.5 nines. Could go as high as 99.9999% — six nines. In the extreme, it could go as high as nine or even twelve nines.
- **Perfect logical qubits.** Could go as low as 99.9999% — six nines. Could stay above 99.9999999% or even 99.9999999999% — nine or even twelve nines.
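As a hedged sketch, the baseline bands above can be captured in a trivial classifier. The thresholds used here are the rough, subjective ones from this section, not any industry standard:

```python
def fidelity_bucket(reliability_pct: float) -> str:
    """Rough bucket for a qubit's reliability (in percent), using the
    baseline bands from this section; the boundaries are subjective."""
    if reliability_pct >= 99.99999:   # seven nines or better
        return "perfect logical qubit"
    if reliability_pct >= 99.99:      # four to five nines (and a bit beyond)
        return "near-perfect qubit"
    return "noisy qubit"

print(fidelity_bucket(99.0))        # two nines -> noisy qubit
print(fidelity_bucket(99.999))      # five nines -> near-perfect qubit
print(fidelity_bucket(99.9999999))  # nine nines -> perfect logical qubit
```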

Who can use them:

- **Noisy qubits.** Significant effort is needed to cope with or to mitigate the errors — or a very high tolerance for errors. Only the most elite quantum algorithm designers and quantum application developers can work well with noisy qubits. Or individuals merely wishing to experiment with the technology, doing no more than *toy* algorithms.
- **Near-perfect qubits.** Little effort is needed to cope with or to mitigate the errors. Less-elite quantum algorithm designers and quantum application developers can work reasonably well with near-perfect qubits, at least sometimes, at least for simpler quantum algorithms and quantum applications.
- **Perfect logical qubits.** No longer requires elite skills for quantum algorithm design and quantum application development. Suitable for widespread adoption of quantum computing.

For much more detail on perfect logical qubits, corrected qubits, and stabilized qubits, see my paper:

**Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits** - https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71

# Informal levels of qubit fidelity

These are simply some informal categories so that we have some common terminology or language to talk about rough scenarios for qubit error rates (adapted from one of my earlier papers):

- **Extremely noisy.** Not really usable. But possible during the earliest stages of developing a new qubit technology. May be partially usable for testing and development, but not generally usable. Less than 60% reliable.
- **Very noisy.** Not very reliable. Need significant shot count to develop a statistical average for results. Barely usable. Less than 70% reliable.
- **Moderately noisy.** Okay for experimentation and okay if rerunning multiple times is needed. Not preferred, but workable. 75% to 80% reliable.
- **Modestly noisy.** Frequently computes correctly. Occasionally needs to be rerun. Reasonably usable for NISQ prototyping, but not for production-scale real-world applications. 85% to 96% reliable.
- **Slightly noisy.** Usually gives correct results. Very occasionally needs to be rerun. 97.5% to 99.5% reliable. Generally, *two nines of qubit fidelity*, but could be up to 2.5 nines, three nines, or even 3.5 nines.
- **Near-perfect qubit.** Just short of a perfect qubit. Rare failures, but enough to spoil perfection. Generally, four to five nines of fidelity — 99.99% to 99.999% reliability. Maybe three to 3.5 nines for some applications. Could be either a *near-perfect physical qubit* or a *near-perfect logical qubit*, but generally it will be the former unless it is clear from the context.
- **Near-perfect logical qubit.** An error-corrected qubit which has some non-trivial residual error rate.
- **Near-perfect physical qubit.** Generally redundant, but may also be trying to distinguish from a near-perfect logical qubit which has some non-trivial residual error rate.
- **Virtually-perfect qubit.** No detectable errors, or so infrequent as to be unnoticeable by the vast majority of applications. Comparable to current classical computing, including ECC memory. Possibly as low as six nines, but generally nine to twelve nines or more, even fifteen nines.
- **Logical qubit.** An ensemble of noisy or near-perfect qubits which utilize quantum error correction (QEC) to achieve an extremely low or even nonexistent error rate. I sometimes use the term *perfect logical qubit*, which is redundant.
- **Perfect physical qubit.** The mythical ideal. No errors. Ever. No error correction required. Not likely to be practical or even theoretically achievable — under quantum mechanics there is always some non-zero uncertainty.

# How close to perfect is a near-perfect qubit?

As the discussion so far should have made clear, there is no definitive answer to this question.

To some extent it’s subjective. Different quantum algorithms and quantum applications can have different requirements for qubit fidelity. What’s close enough to perfect for one may not be close enough for another.

The only real goal is to be close enough to perfect for *many* if not *most* quantum algorithms and quantum applications.

The notion of being close to perfect for *all* quantum algorithms and quantum applications is more of an ideal rather than a realistic and achievable goal.

# Near-perfect logical qubits and near-perfect physical qubits

Generally, the term *near-perfect qubit* refers to a single *physical qubit* without any error correction (QEC). To be more explicit, this is a *near-perfect physical qubit*.

But even an *error-corrected qubit* — a so-called *perfect logical qubit* — can have at least a tiny residual error rate. It may feel natural to consider this a *near-perfect logical qubit*.

Personally, I would only use the term *near-perfect logical qubit* if the residual error rate was not so tiny, such as six nines, or maybe even somewhat less, such as 5.5 nines, or maybe just a little higher, such as seven nines. But I wouldn’t be inclined to use the term for very tiny residual error rates such as nine to twelve nines. Still, some perfectionists might insist on absolute technical accuracy.

But even if one chooses to refer to near-perfect logical qubits, I would generally refrain from referring to error-corrected qubits as near-perfect qubits.

# Error-free qubits — either perfect logical qubits or near-perfect qubits

In truth, all that most quantum algorithm designers and quantum application developers really care about is that their quantum circuits execute without any significant chance of encountering errors. If that can occur with near-perfect qubits most of the time, that’s fine. If it requires full quantum error correction and perfect logical qubits, so be it. Whichever approach works can be referred to as *error-free qubits* (or *fault-free qubits* or *stable qubits*) — quantum circuits of some significant depth and some significant complexity can be reliably executed without encountering errors.

Note that *fault-free qubits* and *stable qubits* are exact synonyms for *error-free qubits*.

Over time, near-perfect qubits will get increasingly closer to perfect, with decreasing error rates.

The term *error-free qubits* is also intended to refer to *corrected qubits* when quantum error correction is being used.

# Effectively error-free qubits — near-perfect qubits with a very high qubit fidelity (very low error rate)

At some point, the error rate of near-perfect qubits will be so small (e.g., five nines — one error in 100,000 operations) that most quantum algorithm designers and quantum application developers won’t even notice. At that stage, you could say that near-perfect qubits are *effectively error-free qubits*.

# Fault-free qubits — synonym for error-free qubits — either perfect logical qubits or near-perfect qubits

If the error rate for near-perfect qubits is low enough, such as five nines (one error in 100,000 operations), they can be considered *error-free qubits*, also known as *fault-free qubits*, an exact synonym.

The term *fault-free qubit* is also intended to refer to *corrected qubits* when quantum error correction is being used.

# Stable qubits — synonym for error-free qubits — either perfect logical qubits or near-perfect qubits

If the error rate for near-perfect qubits is low enough, such as five nines (one error in 100,000 operations), they can be considered *error-free qubits*, also known as *stable qubits*, an exact synonym.

The term *stable qubit* is also intended to refer to *stabilized qubits* or *corrected qubits* when quantum error correction is being used.

# NISQ devices (NISQ quantum computers, NISQ quantum processors)

*NISQ quantum computers* or *NISQ devices* or *NISQ quantum processors* are quantum computers based on *noisy qubits*. The term was proposed by Caltech Prof. John Preskill:

*For this talk, I needed a name to describe this impending new era, so I made up a word: **NISQ**. This stands for **Noisy Intermediate-Scale Quantum**. Here “intermediate scale” refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred.*

**Quantum Computing in the NISQ era and beyond**
- https://arxiv.org/pdf/1801.00862.pdf

Nominally, *IS* or *intermediate-scale* is supposed to mean 50 to a few hundred qubits, but NISQ is commonly used to refer to all current and near-term quantum computers, even those with fewer than 50 qubits. In my own terminology, I would call most of them *NSSQ devices* — *Noisy Small-Scale Quantum devices*.

There’s no definitive quantitative definition for noisiness of NISQ qubits. Generally, noisy enough to make life rather unpleasant for quantum algorithm designers and quantum application developers. Or even more generally, *any current qubit* — they’re *all* noisy at present.

Personally, from what I have seen, a 1% error rate (99% reliability or two nines of qubit fidelity) seems to be the best that can be achieved with current and near-term NISQ quantum computers. But there is no general consensus on this, as far as I can tell.

# Actually, most quantum computers have been NSSQ devices — small-scale, not NISQ intermediate-scale

Technically, as shown by Preskill’s definition above, *intermediate-scale* starts at 50 qubits, while most existing quantum computers have had substantially fewer than 50 qubits. I personally refer to these as NSSQ devices — noisy *small-scale* quantum devices.

# NISQ has always been just a stepping stone, not a platform for real applications

From the get-go, NISQ (and NSSQ!) should have been couched as an evolutionary hardware development path towards the *near-perfect qubits* needed to support perfect logical qubits using quantum error correction, rather than a platform for directly implementing quantum applications with noisy qubits. And, of course, now it looks as if near-perfect qubits themselves should be sufficient to support many or even most quantum applications, without the need to wait for the perfect logical qubits at all.

People certainly have performed a lot of prototyping and experimentation using so-called NISQ devices (actually most have been NSSQ since they have been *small-scale* — under 50 qubits), but nothing even remotely approaching production-scale practical real-world quantum applications.

# Beyond NISQ — NPISQ and FTISQ

What terms should we use for quantum computers based on *near-perfect qubits* if NISQ refers to noisy qubits? My proposal is *NPISQ* for *near-perfect intermediate-scale quantum* device. I have a comparable term for fully fault-tolerant quantum computers: *FTISQ*, for *fault-tolerant intermediate-scale quantum* device.

And this raises the question of whether *beyond NISQ* also means *beyond intermediate scale*, meaning more than a few hundred qubits. That’s unclear — some would insist yes, others would not.

My response is to propose specific terms for beyond an intermediate-scale of qubits — *large-scale*, which I abbreviate as *LS*. So we have *NPLSQ* and *FTLSQ* for *near-perfect large-scale quantum* devices and *fault-tolerant large-scale quantum* devices respectively, for more than a few hundred to thousands of qubits. And you can have *NLSQ devices* if you really do want to build quantum computers with more than a few hundred to thousands of *noisy* qubits.

See my full proposal:

**Beyond NISQ — Terms for Quantum Computers Based on Noisy, Near-perfect, and Fault-tolerant Qubits**
- https://jackkrupansky.medium.com/beyond-nisq-terms-for-quantum-computers-based-on-noisy-near-perfect-and-fault-tolerant-qubits-d02049fa4c93

# NPSSQ and NPLSQ for near-perfect small-scale and large-scale quantum devices

My proposal from the preceding section also proposes new terms for other ranges of qubit capacity beyond *intermediate-scale*, which is supposed to be reserved for 50 to hundreds of qubits:

- **NPSSQ.** Near-perfect small-scale quantum device. Under 50 qubits.
- **NPISQ.** Near-perfect intermediate-scale quantum device. 50 to a few hundred qubits.
- **NPLSQ.** Near-perfect large-scale quantum device. More than a few hundred to thousands of qubits.

There are a variety of other more specialized terms in my proposal.

# Post-NISQ quantum computers — near-perfect qubits or perfect logical qubits

We’re in the so-called *NISQ era* of quantum computing since *all* qubits are currently *noisy qubits*. The preferred and presumed proper path out of NISQ devices is via *perfect logical qubits* enabled by quantum error correction (QEC).

But near-perfect qubits are also a perfectly legitimate path out of the NISQ era since most quantum algorithm designers and quantum application developers will no longer need to obsess at every turn over qubit fidelity and errors.

And this raises the question of whether *Post-NISQ* also means *beyond intermediate scale*, meaning more than a few hundred qubits. That’s unclear — some would insist yes, others would not.

# Post-noisy quantum computers

As we have seen, *post-NISQ* is still a somewhat vague and ambiguous term. For most uses, the term *post-noisy* would probably be more accurate than post-NISQ since it explicitly refers to getting past noisy qubits, to fault-tolerant or near-perfect qubits, rather than focusing on how many qubits a machine has.

Some people would be perfectly happy to have 128 near-perfect qubits even though that is still *intermediate scale*.

Others would actually be happy with 48 near-perfect qubits even though that isn’t even enough to be considered intermediate scale.

# Near-perfect qubits are an essential requirement for achieving practical quantum computing

There is growing chatter about the coming of *practical quantum computing*. Granted, it’s a vague marketing and hype buzz term, but does have some value, indicating a more advanced level of sophistication where quantum computers can actually solve *production-scale practical real-world problems*.

I posit that near-perfect qubits are an *essential requirement* for achieving practical quantum computing. Some might prefer full quantum error correction (QEC), which is years away, while many others still persist in the misguided belief that it can be achieved using noisy NISQ qubits, but I assert that near-perfect qubits are both necessary and sufficient for achieving the goal of practical quantum computing.

Again, it’s a hype term, so it can mean whatever anyone wants it to mean, but I will promote the term as indicated here:

*A practical quantum computer is capable of solving a wide range of production-scale practical real-world problems through the use of near-perfect qubits.*

# Background for how I arrived at near-perfect qubits

So, how did I arrive at my current conception of *near-perfect qubits*?

First, near-perfect qubit is not a standard or widely-used term, but is instead my own *proposal*. I’m not claiming to have invented the term or the concept, but I am certainly promoting the use of the term.

I first formulated my own personal conception of a *near-perfect qubit* as I was researching and writing up my informal paper on fault-tolerant quantum computing, quantum error correction (QEC), and logical qubits, cited at the start of this paper.

I won’t go into any of the deeper details of quantum error correction here, such as *surface codes* or the *quantum threshold theorem* — more detail and even deeper references can be found in that paper.

Until that moment, I was convinced that full-blown quantum error correction (QEC) and perfect logical qubits were the only solution to the difficulties inherent in *noisy qubits*. It was a black and white model, admitting no shades of gray, no middle ground. Or so I thought, back then.

But a number of factors conspired to force me to gravitate towards what I began calling *near-perfect qubits*:

- Quantum error correction was an open research endeavor, not having a clear path or even a clear endpoint, and apparently doomed to taking some indeterminate number of years before it would come to fruition.
- Although quantum error correction could be accomplished with even relatively noisy qubits, the less noisy, the better.
- With very noisy qubits a very large number of noisy physical qubits would be needed to construct even a single perfect logical qubit.
- With much less noisy qubits far fewer noisy physical qubits would be needed to construct each logical qubit.
- So, even if perfect logical qubits were the intended and only acceptable goal, much less noisy qubits were the best path to get there.
- How much less noisy would be acceptable to pursue perfect logical qubits? No obvious or inherent limit, and the less noisy the fewer physical qubits required and hence the more logical qubits you could construct from a given number of physical qubits.
- Physical qubits continue to be expensive and a relatively scarce resource, even years into the future.
- So, we need to focus on making the best we can of a relatively limited number of physical qubits. Even 10,000 physical qubits would yield only 50 logical qubits if it took 200 physical qubits to construct each logical qubit.
- So, clearly there needed to be a high priority on less-noisy qubits even if perfect logical qubits were going to become the norm for fault-tolerant quantum computing.
- So, we needed to focus on less-noisy qubits. How much less noisy? A *lot* less noisy.
- Then it finally dawned on me that if our noisy qubits really were a *lot* less noisy, maybe we actually could do a fair amount of quantum computation before we ran into errors.
- And it also dawned on me that less-noisy qubits were clearly not as good as perfect logical qubits, but since we’re not on a path that will get us to perfect logical qubits any time soon — not in the next few years, if not longer — maybe less-noisy qubits could be a plausible stopgap measure for the years while we await the arrival of perfect logical qubits, at least for some applications which have less demanding qubit fidelity requirements (e.g., relatively shallow circuits or greater tolerance of occasional errors).
- The term *less-noisy qubit* seemed insufficient to express the degree of noise reduction that was required.
- I decided that I wanted a term which was a lot closer to the intended target — perfect logical qubits.
- The term *near-perfect qubit* seemed to fit the bill — an attempt to be as far from noisy as possible, but still not quite to the perfection of a logical qubit.
- Further, there was a fivefold benefit to such near-perfect qubits: 1) they accelerate the path to quantum error correction and logical qubits, 2) they reduce the number of physical qubits needed to construct each logical qubit, 3) they increase the logical qubit capacity achieved for a given number of physical qubits, 4) they enable an interesting fraction of quantum algorithms and quantum applications to run correctly without the need for full quantum error correction, and, last but not least, 5) pursuing near-perfect qubits in the short and medium term is not a tradeoff at the expense of quantum error correction and logical qubits, which are in fact accelerated with every advance for near-perfect qubits. Such a deal! Who could resist?!
- So, as much as I was determined to continue pursuing fault-tolerant quantum computing, quantum error correction, and perfect logical qubits, I was irresistibly driven to conclude that pursuit of near-perfect qubits was a serious win-win situation for both the near term and the long term, and the medium term as well.
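The capacity arithmetic running through this list (e.g., 10,000 physical qubits at 200 physical qubits per logical qubit yielding only 50 logical qubits) can be sketched in a few lines; the overhead figures below are purely illustrative assumptions, not hardware data:

```python
def logical_capacity(physical_qubits: int, physical_per_logical: int) -> int:
    """Logical qubits obtainable from a fixed pool of physical qubits."""
    return physical_qubits // physical_per_logical

# The figure from the list above: 200 physical qubits per logical qubit.
assert logical_capacity(10_000, 200) == 50

# Less noisy (near-perfect) qubits shrink the overhead, raising capacity.
# These overhead values are illustrative, not measured.
for overhead in (1_000, 200, 57):
    print(f"{overhead} physical per logical -> "
          f"{logical_capacity(10_000, overhead)} logical qubits")
```

The point of the list in miniature: driving down qubit noise drives down the per-logical-qubit overhead, which multiplies the logical qubit capacity of a fixed-size machine.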

That conclusion forced me to change the structure of my paper on fault-tolerant quantum computing, quantum error correction, and perfect logical qubits.

I’ve been using the term ever since.

It has become my go-to term.

It has turned out to be a very reliable workhorse.

# Prior art?

It is very possible that the term *near-perfect qubit* may have been in use before I came along, but I certainly had not seen any references to it, let alone common usage of it, before I began using it in my paper.

I’m not claiming that I *invented* the term, but simply indicating that I independently arrived at it based on my experience as outlined above.

# How noisy is noisy for a noisy qubit?

First, the obvious worst cases:

- **Under 50% reliability.** Too noisy to be useful for anything.
- **51–59% reliability.** Not quite as bad, but still unlikely to be useful for much of anything.
- **60–65% reliability.** Very marginal, but still…
- **66–69% reliability.** One in three error rate is getting closer to minimal utility.
- **70–74% reliability.** Getting warmer, some minimal utility, but still…

Getting closer:

- **75–79% reliability.** One in four error rate could have some marginal utility, but still…
- **80–89% reliability.** One in five error rate is definitely getting warmer.

Semi-useful:

- **90–94% reliability.** Useful for some minimal applications.
- **95–96.9% reliability.** Useful for even more applications.

Useful for an interesting subset of applications:

- **97–97.5% reliability.** Fairly useful for a minority of applications.
- **97.6–98.4% reliability.** Not so noisy.
- **98.5–98.9% reliability.** Even better.

The outer fringe of noisiness:

- **99% reliability.** Two nines. Not so noisy.
- **99.1–99.4% reliability.** Getting better. Useful for more applications. Whether to still consider this noisy is debatable and a fielder’s choice.
- **99.5–99.75% reliability.** Even more useful.
- **99.9% reliability.** Three nines of qubit fidelity is a clear fielder’s choice for being a near-perfect qubit.

# How close to perfect is a near-perfect qubit?

There’s no absolute definitive answer as to how close to perfect a near-perfect qubit must be.

It can vary.

It’s subjective.

It could be anywhere from three nines to six nines.

Nominally, I say four to five nines.

3.5 nines is a good working model for a practical minimum fidelity to be considered near-perfect.

Five nines is probably a good working model for a practical maximum fidelity to still be considered only near-perfect.

Generally, I would say that six nines is getting up there in the range to be considered the fidelity of a *perfect logical qubit* — one error in a million operations.
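The “nines” arithmetic in this section can be made concrete with a small sketch. It follows this paper’s convention that a half nine appends a 5 (so 3.5 nines means 99.95% reliability), and the circuit-success estimate assumes errors are independent per operation, a simplification rather than a hardware model:

```python
def error_rate(nines: float) -> float:
    """Error rate for n nines of qubit fidelity, using this paper's
    convention that a half nine appends a 5 (3.5 nines -> 99.95%)."""
    rate = 10.0 ** (-int(nines))
    if nines - int(nines) >= 0.5:
        rate /= 2  # e.g., 0.1% becomes 0.05%
    return rate

def success_probability(nines: float, operations: int) -> float:
    """Chance a circuit of the given size runs error-free, assuming
    independent errors per operation (a simplification)."""
    return (1.0 - error_rate(nines)) ** operations

# Three nines: one error in a thousand operations.
assert error_rate(3.0) == 1e-3
# A 1,000-operation circuit at three nines fails more often than not...
print(f"3 nines, 1000 ops: {success_probability(3.0, 1000):.1%}")
# ...while five nines makes the same circuit quite reliable.
print(f"5 nines, 1000 ops: {success_probability(5.0, 1000):.1%}")
```

This is one way to see why I nominate four to five nines: under these assumptions, a 1,000-operation circuit at three nines succeeds only about a third of the time, while at five nines it succeeds about 99% of the time.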

# Are three nines good enough for a near-perfect qubit?

It’s a fielder’s choice and depends on the particular application whether three nines of qubit fidelity (an error rate of 0.1% for a reliability of 99.9% — one error in a thousand operations) should be considered a near-perfect qubit.

Overall, I’d say flip a coin.

But I’ll stick with *four nines* as the proper lower bound for a near-perfect qubit.

# Are 3.5 nines good enough for a near-perfect qubit?

It’s a fielder’s choice and depends on the particular application whether 3.5 nines of qubit fidelity (an error rate of 0.05% for a reliability of 99.95% — one error in two thousand operations) should be considered a near-perfect qubit.

Overall, I’d say this is better than a coin flip in favor of being a near-perfect qubit.

But I’ll stick with *four nines* as the proper lower bound for a near-perfect qubit, although I won’t complain if anybody accepts 3.5 nines as being *good enough*.

# Are six nines good enough for a perfect logical qubit?

I suspect that six nines of qubit fidelity — one error in a million operations — should be quite good enough for many if not most applications requiring *error-free operation*.

Whether an error rate of one in a billion (nine nines) or even one in a trillion (twelve nines) is needed for a particular application is going to require a very careful analysis of the algorithm and application. But such error rates are very likely going to be quite acceptable for the vast majority of applications.

And if somebody wants to classify six nines as near-perfect rather than perfect, that’s fine. It’s probably another fielder’s choice rather than a definitive boundary.

# Doesn’t the tiny residual error of even the best error-corrected qubits make them near-perfect qubits?

Okay, technically, yes — even the best quantum error correction (QEC) will still leave at least some tiny residual error, so in that sense even the best error-corrected qubits are still not quite perfect, and hence are technically near-perfect qubits. Or, more properly, *near-perfect logical qubits*.

That said, I would say that such a tiny residual error, something less than one error in a billion operations, is hardly worth treating as *merely* a near-perfect qubit, which is why I added the informal category of a *virtually-perfect qubit*.

The whole point of a near-perfect qubit is that there are still noticeable errors on rare occasions.

If the average user never notices errors, then as far as they are concerned the qubits are *as good as perfect*.

# Even near-perfect qubits crumble near the limits of coherence times

One unresolved difficulty with near-term quantum computers is what happens for deeper quantum circuits that begin to approach the coherence time for qubits.

I haven’t examined much data, but it appears to be less of a cliff and more of a gradual decline, so that shorter circuits have fewer errors and longer circuits have more errors — even halfway, or even only a third of the way, into the coherence time.

The only point here is that it is something to be concerned with.

Or at least to not be surprised when longer circuits seem to get more errors than the nominal qubit fidelity would suggest.

# Caveat: Near-perfect qubits are still limited by coherence time

As wonderful as near-perfect qubits may seem, they have a key limitation that quantum error correction is able to surmount: coherence time.

Designers of quantum algorithms and developers of quantum applications must remain ever-vigilant to circuit depth and coherence time.

This can present a very severe limitation on the *scalability* of a quantum algorithm. The algorithm may work fine for a smaller number of qubits and relatively shallow circuit depth, and may even scale reasonably well for modest increments of size, but scaling could easily hit a wall as scaling continues. In short, *scale with care*.

Quantum error correction and logical qubits are able to extend the life of quantum state beyond that of an individual qubit. So even if a logical qubit is constructed from a collection of near-perfect physical qubits, each with their own limited coherence time, the logical qubit can persist much longer, if not indefinitely.
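As a rough aid to staying vigilant about circuit depth, a depth budget can be estimated from coherence time and gate time. The numbers below (100 microsecond coherence, 300 nanosecond two-qubit gates, and using only a third of the coherence window, per the gradual decline noted above) are assumed, illustrative values, not vendor specifications:

```python
def max_depth(coherence_time_us: float, gate_time_ns: float,
              usable_fraction: float = 0.33) -> int:
    """Rough circuit-depth budget before decoherence dominates.

    usable_fraction reflects the gradual decline discussed above:
    error rates climb well before the nominal coherence time is reached.
    """
    budget_ns = coherence_time_us * 1_000.0 * usable_fraction
    return int(budget_ns // gate_time_ns)

# Illustrative transmon-like numbers (assumptions, not measurements):
print(max_depth(100.0, 300.0))  # a depth budget on the order of ~100 gates
```

The sketch also makes the scaling wall visible: doubling an algorithm’s depth consumes the budget twice as fast, regardless of how near-perfect the individual gates are.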

# Even with perfect logical qubits, some applications may benefit from the higher performance of near-perfect physical qubits

Even once perfect logical qubits become commonplace, there may still be a need or desire for higher performance and larger applications which operate directly on *near-perfect physical qubits*, without the performance overhead or more limited capacity of logical qubits.

It’s unclear at this time how much of a performance penalty might be required to implement logical qubits.

Granted, performance can be expected to improve over time, but initially it may be problematic, at least for some applications.

Similarly, the extreme number of physical qubits needed to implement each logical qubit will initially greatly limit the number of available logical qubits.

Granted, the number of available logical qubits can be expected to improve dramatically over time, but initially it is likely to be problematic for many applications.

# Near-perfect qubits are likely needed to achieve quantum advantage

Without near-perfect qubits, algorithms won’t likely be able to exploit a sufficient number of qubits and a sufficient depth of quantum circuits to actually achieve any dramatic quantum advantage or even substantial quantum advantage over classical computing.

So it’s fairly safe to say that near-perfect qubits will be needed to *enable quantum advantage*, whether it be via near-perfect qubits alone or by enabling full quantum error correction.

# Whether quantum advantage can be achieved with only near-perfect qubits remains an open question

The jury is out. Some may presume that only full quantum error correction will be enough to enable most algorithms and applications to achieve dramatic quantum advantage, or at least substantial quantum advantage, over classical computing solutions, while others will be content to rely on only near-perfect physical qubits.

Whether quantum advantage can be achieved with only near-perfect qubits is an interesting and open question.

Capacity will be a significant limiting factor in transitioning to logical qubits. Initial systems supporting logical qubits will have very limited capacities — lots of physical qubits, but supporting only a few logical qubits. It may take a significant number of iterations before systems have a sufficient logical qubit capacity to enable full quantum advantage. But until then, it may be possible for a significant range of applications to achieve quantum advantage using near-perfect physical qubits.

In short, for some applications, yes, near-perfect qubits alone will enable quantum advantage. For other applications, no, near-perfect qubits alone will *not be sufficient* to enable quantum advantage.

# Near-perfect physical qubits may be sufficient to achieve The ENIAC Moment for niche applications

Logical qubits will greatly facilitate many applications, but very limited initial capacities of logical qubits will mean that any application needing a significant number of qubits will have to make do with physical qubits. The good news is that the level of quality needed to enable logical qubits will assure that physical qubits will have near-perfect quality. Still, working with physical qubits will be limited to the most sophisticated, most elite quantum algorithm designers and quantum application developers.

I suspect that larger numbers of near-perfect physical qubits may make it possible for such sophisticated, elite teams to finally achieve what I call *The ENIAC Moment* for quantum computing — quantum advantage for a production-scale practical real-world application.

Not many teams will have the aptitudes, skills, or talents to achieve The ENIAC Moment with raw physical qubits, but a few may well be able to do it.

The ENIAC moment will be a real breakthrough, but won’t herald an opening of the floodgates for quantum applications — that will require *The FORTRAN Moment*, which will probably require logical qubits.

Personally, I suspect that near-perfect qubits will be sufficient for small numbers of the most sophisticated, elite quantum algorithm designers and quantum application developers to utilize manual error mitigation techniques with near-perfect physical qubits to reach The ENIAC Moment for quantum computing, where quantum advantage for a production-scale practical real-world application can be demonstrated. Quantum error correction may also have reached the finish line for small numbers of logical qubits, but the vast numbers of physical qubits needed to support moderate numbers of logical qubits may simply not yet be available in time to help reach The ENIAC Moment.

# Likely need logical qubits to achieve The FORTRAN Moment for widespread adoption of quantum computing

Although a few sophisticated, elite teams may well be able to achieve The ENIAC Moment for quantum computing — quantum advantage for a production-scale practical real-world application — that won’t help the more-average non-elite organization or development team. Most organizations and teams will require the greater convenience and greater reliability of logical qubits, as well as more advanced and approachable programming models, programming languages, and application frameworks. The confluence of all of these capabilities, underpinned by logical qubits, will enable what I call *The FORTRAN Moment* of quantum computing — where average, non-elite teams and organizations can tap into the power of quantum computing without requiring the higher level of sophistication needed to work with less than perfect physical qubits. This will finally enable the widespread adoption of quantum computing.

It is my view that logical qubits will indeed be required for *The FORTRAN Moment*. Sure, some more adventurous teams will continue to achieve quantum advantage for applications without logical qubits, but only at great cost and great risk. Many ambitious projects will be started, but ultimately fail as the complexity of dealing with the subtle remaining errors of near-perfect physical qubits eats away at projects like termites.

There may well be projects which can achieve success with raw physical near-perfect qubits, but the nuances of subtle remaining errors may make it a game of *Russian Roulette*, with some teams succeeding and some failing, and no way to know in advance which is more likely. Logical qubits will eliminate this intense level of uncertainty and anxiety.

# Irony: By the time qubits get good enough for efficient error correction, they may be good enough for many applications without the need for error correction

In truth, qubits can have a fairly high error rate and still be suitable for quantum error correction to achieve logical qubits, but that would require a dramatic number of noisy physical qubits to achieve each logical qubit, which limits the number of logical qubits for a machine of a given capacity of physical qubits. The twin goals are:

- **Achieve logical qubits as quickly as possible.**
- **Maximize logical qubits for a given number of physical qubits.** Achieve a low enough error rate for physical qubits so that only a modest number of physical qubits are needed for each logical qubit.

It’s a balancing act. We could achieve a very small number of logical qubits sooner with noisy qubits, but we would prefer a larger number of logical qubits — so that dramatic quantum advantage can be achieved (more than 50 logical qubits) — which means we would need a much smaller number of physical qubits per logical qubit, which means much less noisy qubits.
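The balancing act can be made semi-quantitative with the widely cited surface-code rule of thumb, where the logical error rate scales roughly as p_L ≈ A · (p/p_th)^((d+1)/2) for code distance d, and each logical qubit costs on the order of 2d² physical qubits. The threshold (1%), prefactor (0.1), and overhead formula below are rough textbook-style assumptions, not measured values:

```python
import math

def distance_needed(p_phys: float, p_target: float,
                    p_th: float = 1e-2, prefactor: float = 0.1) -> int:
    """Smallest odd surface-code distance d whose estimated logical error
    rate p_L ~ prefactor * (p_phys / p_th) ** ((d + 1) / 2) is at or
    below p_target. All constants are rough assumptions."""
    k = math.ceil(math.log(p_target / prefactor) / math.log(p_phys / p_th))
    return max(2 * k - 1, 3)

def physical_per_logical(d: int) -> int:
    """Rough surface-code overhead: on the order of 2 * d^2 physical qubits."""
    return 2 * d * d

# Noisier physical qubits need a much larger code distance (and overhead)
# to hit the same logical error target (here, one error in a billion).
for p_phys in (5e-3, 2e-3, 2e-4):
    d = distance_needed(p_phys, 1e-9)
    print(f"p_phys={p_phys:g}: distance {d}, "
          f"~{physical_per_logical(d)} physical qubits per logical qubit")
```

Under these assumptions, improving physical error rates from 0.5% to 0.02% cuts the per-logical-qubit overhead by more than an order of magnitude, which is exactly the irony this section describes.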

The net result is that, beyond demonstrating a small number of logical qubits as a mere *laboratory curiosity*, applications which don’t need the 100% perfection of logical qubits could run reasonably well on the much larger number of raw physical qubits which would otherwise be used to implement a much smaller number of logical qubits — a number too small to achieve any dramatic quantum advantage.

Granted, this won’t be true for all or necessarily most quantum applications, but maybe enough to enable some organizations to address production-scale applications well before machines have a large enough logical qubit capacity to achieve production-scale using logical qubits.

I’m not suggesting that people should bet on this outcome, but it is an intriguing possibility.

# Do we really need quantum error correction if we can achieve near-perfect qubits? Probably, eventually

Many algorithms and quantum applications will probably do fine with only near-perfect qubits, but ultimately some will still need or want perfect logical qubits.

It may in fact become true for some interesting subset of applications — that near-perfect qubits are *good enough* and that quantum error correction (QEC) is not strictly needed, but there will probably be plenty of applications — as well as quantum algorithm designers and quantum application developers — who will still require and benefit greatly from full, automatic, and transparent quantum error correction.

It will likely be true that many or most quantum algorithms and quantum applications won’t need perfect logical qubits and quantum error correction once near-perfect qubits are achieved, but as with all generalities there will likely be exceptions.

It will be an interesting split, and will likely evolve over time. Further, near-perfect qubits confuse the issue even more. A significant fraction of applications will be able to make do with near-perfect qubits, while others still require full quantum error correction, but the split, the dividing line, between the two will remain unclear for some time to come.

As near-perfect qubits continue to rise in qubit fidelity, undoubtedly more quantum algorithms and quantum applications will incrementally become feasible using only near-perfect qubits.

We can focus on asking and answering this question at each stage of improvement of qubits. The answer will remain “no” (most applications don’t need quantum error correction) for the indefinite future, but maybe someday, before we actually achieve quantum error correction and logical qubits, the answer might become “yes”, that there are some functional benefits to the absolute certainty of perfect logical qubits, at least for some algorithms and applications.

By the time quantum error correction is actually here and actually living up to expectations of perfection — and available in sufficient capacity for production-scale applications — commodity qubits may already be near-perfect, close enough to perfection that true quantum error correction doesn’t add that much additional value, while adding a lot of cost and reducing qubit capacity. That said, there will probably still be some quantum algorithms and quantum applications which really do need the extra certainty of perfect logical qubits.

# Maybe we don’t really need quantum error correction if we can engineer and mass produce near-perfect qubits

Although I agree with the analysis of the preceding section that there will always be quantum algorithms and quantum applications which really do need the higher and more reliable fidelity of quantum error correction, I also think that it will be debatable whether we need quantum error correction if we can engineer and mass produce near-perfect qubits.

Debate is good. And it doesn’t imply that the outcome of debate is one-sided or one size fits all.

Every organization and every team can have its own debate and arrive at its own answer.

# Unclear if non-elite staff can solve production-scale problems with near-perfect qubits

Although it is very likely that The ENIAC Moment can be achieved using near-perfect qubits, that will likely require *elite technical staff*. Quantum algorithms and quantum applications of any significant complexity — production-scale for practical real-world applications — will likely remain beyond the reach of non-elite technical staff until The FORTRAN Moment is reached.

Non-elite technical staff will likely be able to master smaller-scale and maybe even medium-scale projects using near-perfect qubits, but larger-scale and production-scale projects will likely require the significantly higher level of technical skill of more-elite technical staff to achieve successful results at that scale. For example, some significant degree of manual error mitigation and general technical cleverness may still be required for larger circuits and larger qubit counts.

# Prospects for near-perfect qubits

Although the goal of four to five nines of qubit fidelity for near-perfect qubits is clear, the timing is anything but clear.

Generally, qubit fidelity is taken to refer to the fidelity of 2-qubit quantum logic gates, such as the CNOT gate.

My discussion here is focused primarily on superconducting transmon qubits. Trapped-ion qubits should have somewhat higher fidelity — or so it is loosely claimed.

IBM quantum VP Jay Gambetta did recently tweet about achieving *three nines of qubit fidelity* on a 27-qubit Falcon:

*Our Falcon R10 quantum processor is looking good. The **team just hit three nines** two-qubit gate fidelity on a large quantum system.*

- October 4, 2021
- https://twitter.com/jaygambetta/status/1445098187799375875

A second tweet adds:

*Adding this to our best two-qubit fidelity definitely makes the future look good. Cant wait til we get this falcon out for our clients to use*

- October 4, 2021
- https://twitter.com/jaygambetta/status/1445115380616335373

The only reservation I would express is that the tweeted chart is for the *Best* error rate, not the *average* error rate.

Eyeballing the chart as best I can, it seems as if **Falcon_r10** achieved a CNOT error rate of approximately 0.000825, which is 99.9175% reliability or three nines. Or more precisely 3.175 nines — a little better than three nines.

How to progress from 3.175 to four and then to five nines is unclear.
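As an aside, "nines" get counted in more than one way. Reading the literal digits of 99.9175% gives the "3.175 nines" figure used above, while a common logarithmic convention — nines = -log10(error rate) — gives a slightly lower value for the same error rate. A minimal sketch of the logarithmic convention (the 0.000825 error rate is my own eyeballed estimate from the tweeted chart, not an official figure):

```python
import math

def nines(error_rate: float) -> float:
    """Nines of gate fidelity under the logarithmic convention:
    nines = -log10(error rate), so an error rate of 0.001 is exactly 3.0 nines."""
    return -math.log10(error_rate)

# Eyeballed Falcon_r10 CNOT error rate of 0.000825 (99.9175% fidelity):
print(round(nines(0.000825), 2))  # roughly 3.08 -- a bit over three nines
```

Either way of counting, the qualitative story is the same: a little better than three nines, with four and five nines still to come.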

So, for now, the roadmap is:

- **Short-term.** The next 18 months or so. 2.5, 2.75, three, 3.25, 3.5, and 3.75 nines.
- **Medium term.** Two years. Four nines.
- **Longer term.** Three to four or maybe five years. 4.25, 4.5, 4.75, and then five nines, maybe even 5.5 nines.

I look forward to being pleasantly surprised. But I also have the expectation that difficult achievements can face innumerable obstacles.

# When can we expect near-perfect qubits?

Read the preceding section for more details, but it’s relatively safe to presume that we will see at least a preliminary form of near-perfect qubits over the next 18 months.

# Vendors need to publish roadmaps for near-perfect qubits

Quantum computer hardware vendors haven’t yet even acknowledged this concept of *near-perfect qubits*. In addition to doing so, they need to transparently disclose a *roadmap* to achieving various levels of near-perfect qubit fidelity — how many nines at what milestones.

Near-perfect qubits are both necessary for production-scale quantum error correction and useful in their own right. But at present, no vendor of quantum computers has published a roadmap or timeline for how they expect to progress to achieving either perfect logical qubits or near-perfect qubits.

In particular, I want to see explicit milestones for:

- **2.5 nines.**
- **2.75 nines.**
- **Three nines.**
- **3.25 nines.**
- **3.5 nines.**
- **3.75 nines.**
- **Four nines.**
- **4.25 nines.**
- **4.50 nines.**
- **4.75 nines.**
- **Five nines.**
- **Anything above five nines.**
- **Milestones for perfect logical qubits.** Including capacities.

I’ve only seen two instances of vendors talking about nines of qubit fidelity recently. Both from IBM….

First, IBM quantum VP Jay Gambetta did recently tweet about achieving *three nines of qubit fidelity* on a 27-qubit Falcon:

*Our Falcon R10 quantum processor is looking good. The **team just hit three nines** two-qubit gate fidelity on a large quantum system.*

- October 4, 2021
- https://twitter.com/jaygambetta/status/1445098187799375875

Second, in their recent Quantum Summit (2021) they mentioned:

*With gate fidelity at **four nines by 2024***

**The IBM Quantum State of the Union**

- https://www.youtube.com/watch?v=-qBrLqvESNM

# Nobody expects near-perfect qubits imminently, so we should maintain a focus on pursuing quantum error correction

Even if near-perfect qubits do in fact become available in 18 months or so, we’re not there yet, so we have to maintain the focus on pursuing quantum error correction.

Okay, in actuality, we need to pursue both, especially since better near-perfect physical qubits make it easier to achieve better quantum error correction.

# Which is likely to come first, full quantum error correction or near-perfect qubits?

We really do need true, perfect logical qubits with full quantum error correction, but since that outcome is still far beyond the distant horizon, it’s reasonable to pin substantial hope on near-perfect qubits which might in fact be good enough to serve most needs of many quantum applications — or at least that’s the conjecture.

So, given that possibility, which is likely to come first, full quantum error correction or near-perfect qubits?

The position of this paper is that near-perfect qubits will very likely come first, in 18 months to two years. And depending on what degree of fidelity you need, even sooner, possibly over the next year or so for three nines of qubit fidelity.

# Are trapped-ion qubits closer to near-perfect? Not quite

The verbal claim is that *trapped-ion* qubits are significantly higher fidelity than superconducting transmon qubits, but I haven’t seen definitive data, yet.

Here’s a typical statement from Honeywell:

*The average single-qubit gate fidelity for this milestone was 99.991(8)%, the average two-qubit gate fidelity was 99.76(3)% with fully-connected qubits, and measurement fidelity was 99.75(3)%.*

**Honeywell Sets New Record For Quantum Computing Performance**

- System Model H1 becomes first commercial system to achieve a quantum volume of 512 benchmark
- March 2021
- https://www.honeywell.com/us/en/news/2021/03/honeywell-sets-new-record-for-quantum-computing-performance

More recently, also from Honeywell:

*The average single-qubit gate fidelity for this milestone was 99.99(1)%, the average two-qubit gate fidelity was 99.72(6)%, and state preparation and measurement (SPAM) fidelity was 99.70(5)%.*

**Honeywell Sets Another Record For Quantum Computing Performance**

- The System Model H1 becomes the first to achieve a demonstrated quantum volume of 1024
- July 2021
- https://www.honeywell.com/us/en/news/2021/07/honeywell-sets-another-record-for-quantum-computing-performance

The qubit fidelity was fairly comparable between the two experiments.

Roughly four nines of qubit fidelity for *single-qubit* gates sounds good, but it is the two-qubit gate fidelity that really matters the most, and that is a little under three nines — 2.76 and 2.72 nines respectively for the two experiments.

That’s not so bad, but not spectacularly better than superconducting transmon qubits.

And not as good as the 3.175 nines that IBM just recently achieved in one experiment.

In short, while it may have been true that trapped-ion qubit fidelity was substantially better than superconducting transmon qubits a few years ago, superconducting transmon qubits have been catching up.

Still, this is only sketchy, anecdotal data, so I don’t have a firm view on this matter. And I do expect improvements in qubit fidelity over the next two years and beyond.

Note: These two reports do highlight that trapped-ion qubits *are* substantially further ahead on the *Quantum Volume* benchmark, due in large part not to qubit fidelity per se, but to *full any-to-any connectivity*, which is a key advantage for trapped-ion qubits. Superconducting transmon qubits have only nearest-neighbor connectivity, and are forced to rely on SWAP networks to tediously shuffle qubit state around, which takes a big toll on net qubit fidelity.
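The SWAP-network penalty can be illustrated with simple arithmetic: each SWAP gate decomposes into three CNOT gates, so routing a qubit across a chain of nearest neighbors multiplies in three two-qubit gate error opportunities per hop. A rough sketch, under the simplifying assumption that gate errors compound independently:

```python
def net_fidelity(gate_fidelity: float, swaps: int) -> float:
    """Rough net fidelity after a chain of SWAP gates, assuming each SWAP
    decomposes into 3 CNOTs and that gate errors compound independently
    (a simplification -- real error processes are messier)."""
    return gate_fidelity ** (3 * swaps)

# Even at three nines (99.9%) per CNOT, routing across 10 SWAPs
# compounds to roughly 97% net fidelity -- a big hit for deep circuits.
print(net_fidelity(0.999, 10))
```

This is why full any-to-any connectivity, which needs no SWAP networks at all, translates into higher Quantum Volume even at comparable raw gate fidelity.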

# Are neutral-atom qubits in the same ballpark as trapped-ion qubits for qubit fidelity? Unclear

*Neutral-atom qubits* are too new for us to know much about them. I presume that they are roughly in the same ballpark as trapped-ion qubits for qubit fidelity, but that is purely speculation on my part.

There are three new vendors for neutral-atom quantum computers that I know of, but none have published detailed qubit fidelity data so far.

Hopefully some level of detail will be forthcoming in the months ahead.

# Real applications should remain based on simulation until near-perfect qubits are available

So, where do we go from here? Researchers and vendors should continue NISQ hardware development, but make it clear that the real goal is *near-perfect qubits* to enable automatic quantum error correction to enable perfect logical qubits — and for use on their own even before perfect logical qubits become available in reasonably large quantities. But *noisy qubits* are mere *stepping stones* towards near-perfect qubits, and *not* intended for actual application development, not now, not ever.

And quantum algorithm designers and quantum application developers should focus on the use of classical quantum simulators, not constantly trying to shoehorn and otherwise distort quantum algorithms to misguidedly fit into inappropriate NISQ devices.

Keep in mind, NISQ is just a developmental stepping stone to near-perfect quantum devices, not a destination suitable for serious application development.

# How close to perfect must near-perfect qubits be to enable perfect logical qubits?

Uhhh… it gets complicated. And you have to get into the *quantum threshold theorem*, so if you really do want to get into this level of detail, consult my longer paper on quantum error correction and logical qubits:

**Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits**

- https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71

# How many nines will become the gold standard for near-perfect qubits to enable logical qubits? It remains to be seen

There’s no definitive answer on this yet. Maybe four nines. Maybe five nines. Maybe 3.5 nines is good enough.

I would go with a rule of thumb that whatever qubit fidelity for near-perfect qubits is good enough for most applications is probably a sufficient fidelity to be used as the basis for constructing logical qubits using quantum error correction (QEC).

Don’t hold me to that in an absolute sense, but it’s probably a good starting point as a rule of thumb for constructing perfect logical qubits.
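To give a feel for why physical qubit fidelity matters so much for logical qubits, the quantum threshold theorem is often summarized by the surface-code scaling heuristic: below a threshold error rate, the logical error rate falls exponentially in the code distance. A minimal sketch, where the prefactor `A` and the threshold `p_th` are illustrative placeholder values, not measured constants for any particular hardware:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Illustrative surface-code scaling heuristic:
    logical error rate ~ A * (p / p_th) ** ((d + 1) // 2)
    for physical error rate p below the threshold p_th, at code distance d.
    A and p_th are rough placeholders for illustration only."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Each extra nine of physical fidelity pays off dramatically at fixed distance:
for p in (1e-3, 1e-4, 1e-5):  # three, four, five nines
    print(p, logical_error_rate(p, d=9))
```

The point of the sketch: the further the physical error rate sits below threshold, the fewer physical qubits (smaller distance) are needed per logical qubit — which is exactly why near-perfect qubits are the enabler for practical quantum error correction.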

# Let application developers decide between near-perfect qubits and perfect logical qubits

Hopefully, one day, we will have quantum computers that both support full quantum error correction with perfect logical qubits and provide near-perfect qubits. Then the individual application developer can decide which they need for their particular circumstances.

And maybe even allow them to be mixed and matched for different quantum circuits in the same application.

I suspect that it would be too much to ask to mix and match within the same quantum algorithm (quantum circuit). But who knows.

I would hope that each quantum computer would permit a mix of quantum circuit types — sometimes near-perfect qubits and sometimes perfect logical qubits. In other words, permit the application or quantum algorithm designer to decide which mode a particular quantum algorithm (quantum circuit) should use, near-perfect physical qubits or perfect logical qubits enabled by quantum error correction (QEC). But, that’s only my own personal hope since no quantum computer hardware vendor has yet publicly committed to do so.

# Near-perfect qubits are a poor man’s perfect logical qubits, but for many applications that will be good enough

That’s about it in a nutshell. Perfect logical qubits are the ultimate goal, the prize, but getting there will be a long and winding road, beyond the patience of many. Near-perfect qubits are a shortcut. An imperfect shortcut to be sure, but a fulfilling consolation prize nonetheless.

Sure, in the end, near-perfect qubits may indeed be a second-rate solution, but for many applications that may indeed be *good enough*.

Besides, near-perfect qubits are a stepping stone to perfect logical qubits anyways. They are a win-win in any case. A no-lose proposition.

# Possibility of a Quantum Winter if we don’t get near-perfect qubits within two years

Although I’m generally optimistic that we can achieve near-perfect qubits within two years, or at least 3.5 nines of qubit fidelity, it’s quite possible that we might not, in which case it is very possible that deep disillusionment may set in, causing projects to stall, commitment to waver, investment to dry up, and progress, at least on the practical application front, to grind to a near-halt as people retreat and wait until near-perfect qubits or perfect logical qubits based on quantum error correction finally do arrive. In other words, for a *Quantum Winter* to set in.

Is such a Quantum Winter likely? I don’t think so, but it could still happen, easily.

If a Quantum Winter does come it will be likely caused by excessive *premature commercialization* — attempting to commercialize quantum computing long before all of the necessary research has been completed. At present, we’re still deep in *pre-commercialization* — focus on research, prototyping, and experimentation, with many technical questions, issues, and obstacles to be resolved before commercialization can even begin. But, there are plenty of emergent signs of premature commercialization. Rather than pursuing commercialization at this juncture we need to be doubling down on research to enable near-perfect qubits.

For more details on commercialization, pre-commercialization, and premature commercialization, see my paper:

*Prescription for Advancing Quantum Computing Much More Rapidly: Hold Off on Commercialization but Double Down on Pre-commercialization*- https://medium.com/@jackkrupansky/prescription-for-advancing-quantum-computing-much-more-rapidly-hold-off-on-commercialization-but-28d1128166a

How long might a Quantum Winter last? Indeterminate, but until the reason it started goes away — until near-perfect qubits or perfect logical qubits do arrive, when quantum algorithm designers and quantum application developers can indeed depend on qubits with fidelity of at least 3.5 nines in the typical quantum system.

Technically, such a Quantum Winter shouldn’t be all that bad since it really just means that people will have to rely on classical simulators rather than real hardware. In other words, focus on design of algorithms and applications rather than on deployment on real quantum hardware.

To be clear, a Quantum Winter based on the unavailability of near-perfect qubits is a very real possibility even if not a certainty or high likelihood.

# More on perfect logical qubits

*Perfect logical qubits* are a fascinating topic, but beyond the scope of this paper, which is focused on *near-perfect qubits*. For much more on perfect logical qubits, see my paper on quantum error correction and logical qubits:

**Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits**

- https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71

# Summary and conclusions

- Noisy qubits suck. We only use them because we have no other choice. Actually, we do have a choice — classical quantum simulation — but when it comes to real quantum hardware, we have no choice, at present.
- Perfect logical qubits using quantum error correction (QEC) are the Holy Grail of quantum computing, but they simply are not within reach any time soon. Three to five years, maybe. Maybe seven years or even longer.
- Near-perfect qubits are a happy medium, much better than noisy qubits and they will be available much sooner and in higher capacity than perfect logical qubits.
- Near-perfect qubit is not a standard or widely-used term, but is instead my own *proposal*. I’m not claiming to have invented the term or the concept, but I am certainly promoting the use of the term.
- Near-perfect qubits are coming fairly soon. Over the next two years or so.
- Near-perfect qubits would generally have four to five nines of qubit fidelity. Maybe as few as 3.5 nines or even as few as three nines for some applications.
- Near-perfect qubits are not as good as perfect logical qubits, but they will be good enough for most quantum applications.
- Even when perfect logical qubits do become available, they will be in very limited capacities — many physical qubits for even a single logical qubit. Most complex algorithms will have to rely on near-perfect qubits — for years.
- Even when perfect logical qubits are available in larger capacities, there will still be algorithms which require even larger capacities or higher performance which are available only using raw physical near-perfect qubits.
- Near-perfect qubits will probably be sufficient to reach The ENIAC Moment of demonstrating a production-scale practical real-world quantum application.
- But perfect logical qubits will probably be required to reach The FORTRAN Moment where widespread adoption of quantum computing is possible for non-elite quantum algorithm designers and quantum application developers.
- Near-perfect qubits are likely needed to achieve any substantial degree of quantum advantage.
- Whether quantum advantage can be achieved with only near-perfect qubits alone remains an open question. For some applications, yes. For other applications, no.
- Near-perfect qubits are a poor man’s perfect logical qubits, but for many applications that will be good enough.
- Near-perfect qubits are an essential requirement for achieving *practical quantum computing*, both necessary and sufficient. Full quantum error correction (QEC) is not needed or available in the relatively near and medium-term, and noisy NISQ qubits won’t cut it.
- There is in fact a fair possibility of a Quantum Winter if we don’t get near-perfect qubits within two years. Not a certainty, but simply a possibility. If a Quantum Winter does occur, it will likely be the result of excessive premature commercialization, a diversion of resources and attention away from the critical research needed to bring near-perfect qubits to fruition.

For more of my writing: **List of My Papers on Quantum Computing**.