# Thoughts on the IBM Quantum Hardware Roadmap

This informal paper documents my thoughts in recent months on IBM’s quantum hardware roadmap, published on September 15, 2020, as well as their quantum software development and ecosystem roadmap, published on February 4, 2021. This paper focuses primarily on the former — the hardware — although a future paper may delve into the software roadmap, and a brief section on it appears at the end of this paper.

Overall, we can all be grateful that IBM is giving us a view into their future, but unfortunately it raises more questions than it answers. The roadmap makes clear that we’re still in the early innings for quantum computing — not close to being ready for prime time with production-scale practical applications, or even close to achieving any significant degree of quantum advantage, the very purpose of even considering quantum computing.

Most significantly, it fails to provide any insight into the two most crucial questions about quantum computing:

- When will quantum computers support production-scale applications?
- When will quantum computers achieve quantum advantage (or quantum supremacy) for production-scale applications?

**Topics covered in this paper:**

- Positive highlights.
- Negative highlights.
- My own interests.
- The IBM roadmap itself.
- Graphic for the IBM quantum hardware roadmap.
- Earlier hint of a roadmap.
- I’m not so interested in support software and tools.
- Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
- Too brief — need more detail on each milestone.
- Limited transparency — I’m sure IBM has the desired detail in their internal plans.
- When will quantum error correction (QEC) be achieved?
- Need roadmap milestones for nines of qubit fidelity.
- Need roadmap milestones for qubit measurement fidelity.
- When might IBM get to near-perfect qubits?
- What will the actual functional transition milestones be on the path to logical qubits?
- Will there be any residual error for logical qubits or will they be as perfect as classical bits?
- Will future machines support only logical qubits or will physical qubit circuits still be supported?
- What functional advantages might come from larger numbers of qubits?
- Need milestones for granularity of phase and probability amplitude.
- Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform.
- When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?
- When or will IBM support a higher-level programming model?
- When will larger algorithms — like using 40 qubits — become possible?
- When could a Quantum Volume of 2⁴⁰ be expected?
- When will IBM develop a replacement for the Quantum Volume metric?
- When will IBM need a replacement for the Quantum Volume metric?
- How large could algorithms be on a 1,121-qubit Condor?
- When might The ENIAC Moment be achieved?
- When might The FORTRAN Moment be achieved?
- When might quantum advantage be achieved?
- Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?
- How many bits can Shor’s algorithm handle at each stage of the roadmap?
- What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?
- Not clear whether or when quantum networking will be supported.
- Quantum is still a research program at IBM — and much more research is required.
- Quantum computers are still a laboratory curiosity, not a commercial product.
- When will IBM offer production-scale quantum computing as a commercial product (or service)?
- Quantum Ready? For Who? For What?
- Quantum Hardware Ready is needed.
- Need for higher-quality (and higher-capacity) simulators.
- Need for debugging capabilities.
- Need for testing capabilities.
- Need for dramatic improvements in documentation and technical specifications at each milestone.
- Brief comments on IBM’s roadmap for building an open quantum software ecosystem.
- Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap.
- Heads up for other quantum computing vendors — all of these comments apply to you as well!
- Summary and conclusions.

# Positive highlights

I appreciate:

- IBM’s transparency on putting out such a roadmap.
- The view into the future beyond the next year or two — including a path to 1,000 qubits, a million qubits, and beyond.
- The mention of error correction and logical qubits.
- The mention of linking quantum computers to create a *massively parallel quantum computer*.
- The prospect of achieving 100 qubits sometime this year.

# Negative highlights

Unfortunately:

- Disappointing that it took so long to put the roadmap out. I first heard mention that they had a roadmap back in 2018.
- Raises more questions than it answers.
- Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
- Too brief — need more detail on each milestone.
- Needs more milestones. Intermediate stages and further stages. I certainly hope that they are working on more machines than listed over the next three to five years.
- Other than the raw number of qubits, roughly what can algorithm designers and application developers expect to see in the next two machines: the 127-qubit Eagle, coming in the next six months (by the end of 2021), and the 433-qubit Osprey in 2022? Obviously a lot can change over the next six to eighteen months, but some sort of expectations need to be set.
- Not clear when the quantum processing unit will become modular. When will there be support for more qubits than will fit on a single chip?
- Not clear when or whether multiple quantum computers can be directly connected at the quantum level. Comparable to a classical multiprocessor, either tightly-coupled or loosely-coupled.
- Not clear whether or when quantum networking will be supported.
- Silent as to when error correction and logical qubits will become available.
- No milestones given for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?
- Silent as to when qubit counts will begin to refer to logical qubits. I’m presuming that all qubit counts on the current roadmap are for physical qubits.
- Silent as to milestones for capacities of logical qubits, especially for reaching support for practical, production-scale applications.
- Silent as to any improvements in connectivity between qubits. Each milestone should indicate the degree of connectivity. Will SWAP networks still be required? Will full any-to-any connectivity be achieved by some milestone?
- Silent as to milestones for improvements to qubit and gate fidelity. No hints for nines of qubit fidelity at each milestone.
- Silent as to milestones for improvements to qubit measurement fidelity.
- Silent as to when near-perfect qubits might be achieved. High enough fidelity that many algorithms won’t need full quantum error correction.
- Silent as to milestones for granularity of phase and probability amplitude.
- Silent as to when quantum chemists (among others) will be able to rely on quantum phase estimation and quantum Fourier transform of various sizes. When will quantum phase estimation become practical?
- Silent as to the metric to replace quantum volume, which doesn’t work for more than about 50 qubits, since a quantum circuit using more than about 50 qubits can’t practically be simulated classically.
- Silent as to the stage at which quantum volume exceeds the number of qubits which can be practically simulated on a classical computer.
- Silent as to when larger algorithms — like using 40 qubits — will become possible. When could a Quantum Volume of 2⁴⁰ be expected?
- Silent as to how large algorithms could be on a 1,121-qubit Condor. What equivalent of Quantum Volume — number of qubits and depth of circuit — could be expected?
- Silent as to when quantum advantage might be expected to be achieved — for any real, production-scale, practical application. Should we presume that means that IBM doesn’t expect quantum advantage until some time after the end of the roadmap?
- Silent as to what applications or types of applications might be enabled in terms of support for production-scale data at each milestone.
- Silent on the roadmap for machine simulators, including maximum qubit count which can be simulated at each milestone. Silent as to where they think the ultimate wall is for the maximum number of qubits which can be simulated.
- Silent as to improvements in qubit coherence and circuit depth at each stage.
- Silent as to maximum circuit size and maximum circuit depth which can be supported at each stage.
- Silent as to how far they can go with NISQ and which machines might be post-NISQ.
- Silent as to when fault-tolerant machines will become available.
- Silent as to milestones for various intra-circuit hybrid quantum/classical programming capabilities.
- Open question: Will there be any residual error for logical qubits or will they be as perfect as classical bits?
- Open question: At some stage, will future machines support only logical qubits or will physical qubit circuits still be supported?
- Open question: What will be the smallest machine supporting logical qubit circuits?
- Silent as to debugging capabilities.
- Silent as to testing capabilities.
- It is quite clear that quantum computing is still a research program at IBM, not a commercial product suitable for production use.
- Silent as to when quantum computing might transition from mere laboratory curiosity to front-line commercial product suitable for production-scale use cases.
- Silent as to how much additional research, beyond the end of the current roadmap, may be necessary before a transition to a commercial product.
- Silent as to improvements in documentation and technical specifications at each milestone.

# My own interests

I wouldn’t necessarily expect IBM to put these milestones in its own roadmap, but they interest me nonetheless:

- When might quantum advantage be achieved — for any real, production-scale, practical application? For minimal quantum advantage (e.g., 2X, 10X, 100X), significant quantum advantage (e.g., 1,000X to 1,000,000X), and dramatic quantum advantage (one quadrillion X)?
- How close will each stage come to full quantum advantage? What fractional quantum advantage is achieved at each stage?
- What applications might achieve quantum advantage at each stage?
- What applications will be supported at each stage which weren’t feasible at earlier stages?
- Each successive stage should have some emblematic algorithm which utilizes the new capabilities of that stage, such as more qubits, deeper circuit depth, not just running the same old algorithms with the same number of qubits and circuit depth as for earlier, smaller machines.
- What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?
- Is there any reason to believe that there might be a better qubit technology (alternative to superconducting transmon qubits) down the road, or any reason to believe that no better qubit technology is needed? Does IBM anticipate that there might be a dramatic technology transition at some stage, maybe five, ten, or more years down the road?
- Does IBM anticipate that they might actually support more than one qubit technology at some stage? Like, trapped ion?
- When or will IBM support a higher-level programming model with higher-level algorithmic building blocks which makes it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states.
- When might The ENIAC Moment be achieved? First production-scale application.
- When might The FORTRAN Moment be achieved? Higher-level programming model which makes it easy for most organizations to develop quantum applications — without elite teams.
- How many bits can Shor’s algorithm handle at each stage of the roadmap?
- Need for a broad set of benchmark tests to evaluate performance, capacity, and precision of various algorithmic building blocks, such as phase estimation, along with target benchmark results for each hardware milestone.
- Milestones for optimizing various algorithmic building blocks, such as phase estimation, based on hardware improvements at each stage.
- The maximum size of algorithms which can correctly run on the physical hardware at each milestone but can no longer be classically simulated. Number of qubits and circuit depth. Maybe several thresholds for the fraction of correct executions. For now, this could parallel projections of log2(Quantum Volume) and estimate when log2(QV) exceeds the maximum classical quantum simulator capacity.
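As one concrete way to frame the Shor’s-algorithm question in the list above: under the commonly cited estimate of roughly 2n + 3 logical qubits to factor an n-bit number (one standard construction; actual counts vary considerably by implementation), mapping logical-qubit milestones to key sizes is simple arithmetic. A minimal sketch, with that estimate as an assumption:

```python
# Rough logical-qubit requirement for factoring an n-bit integer with
# Shor's algorithm, using the ~2n + 3 estimate from one standard
# construction (real implementations vary widely).
def logical_qubits_for_shor(bits: int) -> int:
    return 2 * bits + 3

for bits in (16, 256, 1024, 2048):
    print(f"{bits}-bit number: ~{logical_qubits_for_shor(bits)} logical qubits")
```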

# The IBM roadmap itself

The IBM quantum *hardware roadmap* can be found here:

*IBM’s Roadmap For Scaling Quantum Technology*
- September 15, 2020
- Jay Gambetta
- https://www.ibm.com/blogs/research/2020/09/ibm-quantum-roadmap/

The IBM quantum *software development and ecosystem roadmap* can be found here:

*IBM’s roadmap for building an open quantum software ecosystem*
- February 4, 2021
- Karl Wehden, Ismael Faro, and Jay Gambetta
- https://www.ibm.com/blogs/research/2021/02/quantum-development-roadmap/

# Graphic for the IBM quantum hardware roadmap

I’m a text-only guy, so I won’t reproduce the graphic for the roadmap, but you can find it here — look for the blue diamonds:

*IBM’s Roadmap For Scaling Quantum Technology*
- September 15, 2020
- Jay Gambetta
- https://www.ibm.com/blogs/research/2020/09/ibm-quantum-roadmap/

# Earlier hint of a roadmap

I haven’t been able to track down the original citation, but I believe it was sometime in 2018 that IBM publicly stated that quantum error correction was on their roadmap. So, that was a vague reference to a purported roadmap, but no actual roadmap was available to the public, until 2020.

# I’m not so interested in support software and tools

Support software and tools are obviously important, but I’m less concerned about them in this paper, which is more focused on hardware and the programming model for algorithms and applications.

# Too short — need more detail for longer-term aims, beyond 2023, just two years from now

In my opinion, the roadmap needs milestones for:

- 3 years.
- 5 years.
- 7 years.
- 10 years.
- 12 years.
- 15 years.
- 20 years.
- 25 years. Where is the technology really headed?

# Too brief — need more detail on each milestone

More than just a too-terse phrase for each milestone’s key advancement, qubit count, and code name is needed. I’m not looking for precise detail, especially years out, but at least rough targets for the following, even if nothing more than rough percentage improvements expected at each stage. Graphs with trend lines would be appreciated.

- Qubit fidelity.
- Qubit lattice layout.
- Qubit connectivity.
- Gate cycle time.
- Qubit coherence.
- Maximum circuit depth.
- Maximum circuit size.
- Maximum circuit executions per second.

# Limited transparency — I’m sure IBM has the desired detail in their internal plans

It’s a little baffling that the IBM hardware roadmap has so little technical detail. I’m sure that their own internal plans and roadmaps have much of the level of detail that I suggest in this paper. Why exactly they refrain from disclosing that level of detail is unclear.

Actually, part of the motive is very clear, from past history and experience with IBM — this is just the way IBM works, always has, and probably always will. As IBM explicitly says in their software roadmap, “*As scientists, it’s not an easy decision to go public with such a transparent roadmap; we prefer to talk about our achievements, not our plans.*”

I would give IBM a letter grade of C-minus on transparency — and that’s being generous. And maybe also only because I hope they will take some of my criticism to heart and dramatically increase their transparency on roadmap plans for hardware and evolution of the programming model.

# When will quantum error correction (QEC) be achieved?

The IBM roadmap graphic does have *quantum error correction* listed as a *key achievement* for the *and beyond* stage, sometime beyond the 1,121-qubit Condor processor planned for 2023. I would have hoped to see at least *some* progress sooner, and *some* highlighting of milestones on the path to full quantum error correction and full support for error-free logical qubits.

The text of the roadmap has vague statements such as:

*… as we scale up the number of physical qubits, we will also be able to explore how they’ll work together as error-corrected logical qubits — every processor we design has fault tolerance considerations taken into account. We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices …*

That latter statement sounds promising, but the graphic doesn’t list error correction until the vague, nebulous, and unspecified *and beyond* stage *after* Condor, not for Condor itself. Is the graphic wrong — was quantum error correction supposed to be listed under Condor’s key achievements? Unknown, but interesting speculation.

# Need roadmap milestones for nines of qubit fidelity

There is no mention in the IBM roadmap of how many *nines* of qubit fidelity will be achieved and when and in what milestones.

Qubit fidelity includes:

- Coherence time.
- Gate errors. Both single-qubit and two-qubit.
- Measurement errors.

All three can and should be detailed separately, but an overall metric for qubit fidelity is needed as well.

Minimal milestones in nines of overall qubit fidelity:

- Two nines — 99%.
- Three nines — 99.9%.
- Four nines — 99.99%.
- Five nines — 99.999%.
- Six nines — 99.9999%.
- Whether IBM has intentions or plans for more than six nines of qubit fidelity should be specified. Seven, eight, nine, and higher nines of qubit fidelity would be great, but will likely be out of reach in the next two to four years.
- What maximum qubit fidelity, short of quantum error correction, could be achieved in the longer run, beyond the published roadmap, should also be specified.
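As a rough illustration of why each additional nine matters, under a simple independent-error model the achievable circuit depth before success probability collapses scales with per-gate fidelity. A minimal back-of-the-envelope sketch (my own model, not an IBM figure):

```python
import math

# Back-of-the-envelope: with per-gate fidelity f and independent errors,
# a circuit of depth d succeeds with probability roughly f**d. Solve for
# the depth at which success drops to a given threshold (50% here).
def max_depth(fidelity: float, success: float = 0.5) -> int:
    return int(math.log(success) / math.log(fidelity))

for nines, f in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines ({f}): ~{max_depth(f)} gates before 50% success")
```

Each added nine buys roughly a tenfold increase in usable circuit depth under this crude model.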

For more on nines of qubit fidelity, see my own informal paper on the topic.

# Need roadmap milestones for qubit measurement fidelity

I didn’t realize this until recently, but simple measurement of qubits to get the results of a quantum computation is a very error-prone process. So even if qubit coherence is increased and gate errors are reduced, there are still measurement errors to deal with.

In fact, measurement errors may be more difficult to cure. After all, measurement is the transition from the quantum world to the classical world.

At least on Google’s Sycamore quantum processor, the measurement error rate is significantly greater than the gate error rate; how IBM’s qubit measurement error rate compares to its coherence and gate error rates is not clear from public information.

I’m confident that IBM will be making improvements on the qubit measurement front, but we need to see qubit measurement fidelity (nines) shown on the roadmap for each hardware milestone.

Alternatively, if measurement fidelity is folded into an overall composite qubit fidelity metric, that may be sufficient, although having the various qubit fidelity metrics disaggregated would still be helpful and even better for some algorithms and applications.

# When might IBM get to near-perfect qubits?

Although quantum error correction is the long-term goal, a milestone along the way, and an independent goal in its own right, is *near-perfect qubits*: qubits with high enough fidelity that they both support the implementation of quantum error correction and enable elite, highly-motivated technical teams to implement practical applications even before quantum error correction is available with enough logical qubits for practical applications.

IBM has made no mention of near-perfect qubits, nor a roadmap towards them.

# What will the actual functional transition milestones be on the path to logical qubits?

IBM hasn’t given any specific milestones for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?

What exactly will algorithm designers actually be able to do at each functional milestone?

Or will it actually be an all-at-once transition from nothing to perfection?

At a minimum, we need to see milestones based on the number of logical qubits.

Will the number of physical qubits per logical qubit be stable across all of the milestones? Or, will the number of physical qubits per logical qubit decline over time as qubit fidelity improves? What might the curve of logical qubit capacity look like? Will it be linear, shallow, steep, an exponential curve, or what?

We should have target dates for 5, 8, 10, 12, 16, 20, 24, 28, 32, 40, 48, 56, 64, 72, 80, 96, 128, 256, 512, and 1024 logical qubits — as a starter. Obviously 2K, 4K, 8K, 16K, 32K logical qubits and beyond are needed as well but equally obviously will take significantly longer.

# Will there be any residual error for logical qubits or will they be as perfect as classical bits?

It sure would be nice if logical qubits are as (seemingly) perfect and error-free as classical bits are, but I suspect that there will be some tiny residual error. IBM needs to set expectations in their roadmap.

Some possibilities:

- Six nines — one error in a million operations.
- Nine nines — one error in a billion operations.
- Twelve nines — one error in a trillion operations.
- Fifteen nines — one error in a quadrillion operations.

Also, does IBM expect any residual error to evolve over time? Might there be some still-significant residual error in the early versions of logical qubits, but then some transition to error-free or very low error-rate in some subsequent stage?

Might there be some significant variance between machines, even those produced at the same stage? For example, might 2K and 8K processors produced at the same stage have significantly different residual errors? Might a machine with fewer logical qubits have a lower residual error?

Might differences in residual errors be due to variations in the number of physical qubits per logical qubit?

Will the user, the algorithm designer, or the application developer have any control over the residual error, such as configuring the number of physical qubits per logical qubit?

# Will future machines support only logical qubits or will physical qubit circuits still be supported?

IBM has not indicated whether, once logical qubits become available, future machines will support only logical qubits or will still support physical qubit circuits.

Also, can a single application or single quantum circuit support both logical qubits and physical qubits, or only one or the other?

Or can at least a mix of circuit types be used in a single application even if an individual circuit is all-logical or all-physical?

# What functional advantages might come from larger numbers of qubits?

What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?

What consequences, such as applications which are enabled, could come from 100 qubits? Or 256, or 512, or 1,000?

It would be nice for IBM to annotate the milestones in terms of types of applications which might be enabled — and able to achieve dramatic quantum advantage for that application category.

# Need milestones for granularity of phase and probability amplitude

Quantum computational chemistry is an oft-touted application for quantum computers. Variational methods are currently being used as a stopgap measure, but ultimately quantum phase estimation (QPE) and quantum Fourier transform (QFT) are needed to achieve both precision of results and dramatic quantum advantage for performance, and both are critically dependent on granularity of the phase portion of quantum state. Very fine granularity for phase is needed. So, the roadmap should detail milestones for improvement of granularity of phase.

And granularity of probability amplitudes is in the same boat. I presume that phase and probability amplitudes will have roughly comparable gradations — although vendors have been silent on this matter.

In any case, the roadmap should detail milestones for improvement of granularity of both phase and probability amplitude.
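To make the granularity requirement concrete, here is a small sketch of the phase resolution that an n-bit quantum phase estimation implicitly demands of the hardware (my own illustration of the arithmetic, not a vendor specification):

```python
import math

# An n-bit QPE distinguishes 2**n phase values in [0, 2*pi), so the
# hardware must effectively resolve phase steps of 2*pi / 2**n radians.
def phase_granularity(bits: int) -> float:
    return 2 * math.pi / 2 ** bits

for bits in (4, 8, 16, 24, 32):
    print(f"{bits}-bit QPE: phase step of ~{phase_granularity(bits):.3e} rad")
```

The required resolution halves with every added bit of precision, which is why milestones for phase granularity matter so much.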

For more detail on the issues related to granularity of phase and probability amplitudes, see my paper:

*Beware of Quantum Algorithms Dependent on Fine Granularity of Phase*
- https://jackkrupansky.medium.com/beware-of-quantum-algorithms-dependent-on-fine-granularity-of-phase-525bde2642d8

# Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform

The roadmap should indicate the timeframes in which both quantum phase estimation (QPE) and quantum Fourier transform (QFT) will become practical, indicating the size of QPE and QFT which will be supported in various timeframes and at various milestones.

In particular, I’m interested in these size milestones:

- 4-bit.
- 8-bit.
- 12-bit.
- 16-bit.
- 20-bit.
- 24-bit.
- 32-bit.
- 40-bit.
- 48-bit.
- 56-bit.
- 64-bit.
- 80-bit.
- 96-bit.
- 128-bit.
- 192-bit.
- 256-bit.

And some indications about expectations for 512, 1024, and 2048-bit and beyond.

# When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?

The roadmap should make it very clear when quantum chemists (among others) can begin relying on quantum phase estimation and quantum Fourier transform. Variational methods are a great stopgap measure, but quantum chemists (among others) need greater precision of results and the true, dramatic quantum advantage of quantum computing.

# When or will IBM support a higher-level programming model?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: when or will IBM support a higher-level programming model with higher-level algorithmic building blocks which makes it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states?

# When might The ENIAC Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage would an elite team be the first to develop and test (even if not deploy into production) the first production-scale practical application? What I call The ENIAC Moment.

And just to be clear, I’m referring to real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

This will be a very important milestone for all of quantum computing.

# When might The FORTRAN Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage will it finally be easy for most organizations to develop quantum applications — without elite teams? What I call The FORTRAN Moment.

This presumes a much higher-level programming model, higher-level algorithmic building blocks, and some sort of high-level quantum programming language, as well as full support for quantum error correction (QEC) and error-free logical qubits. Some of that is beyond the scope of a hardware roadmap per se, but the point is when the hardware will be capable enough to support all of that.

And just to be clear, I’m referring to real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

This will be a very important milestone for all of quantum computing.

# When will larger algorithms — like using 40 qubits — become possible?

Published quantum algorithms currently rarely utilize more than a mere 20 qubits. I’m anxious to see larger algorithms, particularly:

- 24 qubits.
- 28 qubits.
- 32 qubits.
- 36 qubits.
- 40 qubits.
- 44 qubits.
- 48 qubits.
- 50 qubits.
- 56 qubits.
- 60 qubits.
- 64 qubits.
- And more.

But the big milestone for me will be to see real hardware capable of supporting 40-qubit algorithms.

That will still be small enough to fully simulate on a classical quantum simulator. So we could see simulation even before the hardware is available, but get confirmation when the hardware does become available.

In any case, I’d like to see each hardware milestone tagged with the algorithm size that is expected to be supported. Both number of qubits and maximum circuit depth.

And just to be clear, I’m referring to algorithms for real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

For some examples of practical applications of quantum computing which are anticipated, see my paper:

*What Applications Are Suitable for a Quantum Computer?*
- https://jackkrupansky.medium.com/what-applications-are-suitable-for-a-quantum-computer-5584ef62c38a

# When could a Quantum Volume of 2⁴⁰ be expected?

A presumption for support of 40-qubit algorithms is that we need hardware with a Quantum Volume (QV) of at least 2⁴⁰ (about one trillion).

I’d like to see hardware milestones tagged with expected Quantum Volume.

Right now, we can’t even tell if the 1,121-qubit Condor will support QV of 2⁴⁰. Ditto for the 433-qubit Osprey. And even for the 127-qubit Eagle.
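For a sense of scale, here is a crude independent-error estimate of the per-gate fidelity needed to pass a QV 2⁴⁰ test, treating the model circuit as roughly 40 × 40 gates at about two-thirds success. This is my own rough model, not IBM’s actual Quantum Volume methodology:

```python
# Crude estimate: a square n x n Quantum Volume model circuit has on the
# order of n*n two-qubit gates; for overall success probability s under
# independent errors, per-gate fidelity must be about s**(1/(n*n)).
def fidelity_needed(n: int, success: float = 2 / 3) -> float:
    return success ** (1 / (n * n))

print(f"QV 2^40 (n=40): per-gate fidelity ~{fidelity_needed(40):.5f}")
```

Even this toy model suggests fidelity in the neighborhood of three and a half to four nines, well beyond today’s hardware.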

# When will IBM develop a replacement for the Quantum Volume metric?

IBM’s Quantum Volume capacity metric will only work up to about 50 qubits since the metric requires a full classical simulation of the circuit and 50 qubits is roughly the limit for classical simulation of quantum circuits. A Quantum Volume of 2⁵⁰ would represent a quantum circuit with a depth of 50 quantum logic gates operating on 50 qubits and achieving acceptable results, which would require simulation of roughly one quadrillion quantum states.
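The roughly 50-qubit wall is easy to see from the memory required just to hold a full statevector, assuming 16 bytes per complex amplitude (a standard complex128; the exact figure depends on the simulator):

```python
# Memory to store all 2**n complex amplitudes of an n-qubit statevector,
# at 16 bytes each (complex128).
def statevector_bytes(n_qubits: int) -> int:
    return 2 ** n_qubits * 16

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

At 50 qubits that’s 16 PiB of RAM, far beyond any practical classical machine, and every additional qubit doubles it.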

Neither the roadmap nor any other public comments by IBM have given any hint of what metric might be used to replace Quantum Volume once their quantum computers are able to execute quantum circuits on 50 qubits for 50 gates and get acceptable results.

For more on the nature of this 50-qubit limit, see my paper:

*Why Is IBM’s Notion of Quantum Volume Only Valid up to About 50 Qubits?*
- https://jackkrupansky.medium.com/why-is-ibms-notion-of-quantum-volume-only-valid-up-to-about-50-qubits-7a780453e32c

For IBM’s original paper introducing the Quantum Volume metric:

*Validating quantum computers using randomized model circuits*
- Andrew W. Cross, Lev S. Bishop, Sarah Sheldon, Paul D. Nation, Jay M. Gambetta
- October 11, 2018
- https://arxiv.org/abs/1811.12926

# When will IBM need a replacement for the Quantum Volume metric?

IBM has not indicated at what stage on the roadmap they expect to be able to execute quantum circuits with acceptable results which can no longer be classically simulated, which is a key requirement for deriving the Quantum Volume metric.

My suspicion is that even IBM doesn’t expect to get to the 50-qubit, 50-gate depth limit of the Quantum Volume metric by the end of their current roadmap. They’ll certainly have enough qubits (they already do today), but qubit fidelity and gate error rates will continue to preclude quantum circuits with a depth of 50 and acceptable results until some stage well after the end of their current roadmap.

Still, it sure would be nice if IBM could set expectations as to when this milestone might be achieved.

# How large could algorithms be on a 1,121-qubit Condor?

What equivalent of Quantum Volume (QV) — number of qubits and depth of circuit — could be expected for the 1,121-qubit Condor processor? I say equivalent of QV because technically actual QV requires a full classical circuit simulation, which will not be possible much beyond 50 qubits (and may not be practical much beyond 40 qubits, or even 36–38 qubits).

How large (both qubits and circuit depth) can we expect algorithms to be on Condor?

Could all 1,121 qubits be effectively used in a single algorithm? One would hope so, but I’d like to see an explicit statement.

After all, even the current 53-qubit and 65-qubit Hummingbird processors don’t have a Quantum Volume even close to approaching the use of all or even a simple majority of the available qubits.

# When might quantum advantage be achieved?

IBM’s roadmap simply doesn’t clue us in at all as to when they expect that *quantum advantage* might be achieved. Are we to conclude that they don’t expect it to be achieved until some time after the end of the roadmap?

For more on my own thoughts on quantum advantage, read my paper:

*What Is Quantum Advantage and What Is Quantum Supremacy?*
- https://jackkrupansky.medium.com/what-is-quantum-advantage-and-what-is-quantum-supremacy-3e63d7c18f5b

As well as my more recent paper on *dramatic quantum advantage*:

*What Is Dramatic Quantum Advantage?*
- https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c

In that latter paper I suggest three levels of quantum advantage — and some reasonable stepping stones along the way. I’d like to know what expectations IBM might set for achieving each of those levels relative to their hardware milestones:

- **Minimal quantum advantage.** A **1,000X** performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
- **Substantial or significant quantum advantage.** A **1,000,000X** performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
- **Dramatic quantum advantage.** A **one quadrillion X** (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.

Granted, achieving such milestones will vary from application to application, but still, these are important milestones to track in terms of hardware performance.
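As a trivial illustration, the thresholds above can be captured in a few lines of Python. The level names and thresholds are from my papers, not from IBM:

```python
# Classify a measured quantum-vs-classical speedup factor into the three
# advantage levels proposed in my papers (thresholds are mine, not IBM's).

def advantage_level(speedup: float) -> str:
    if speedup >= 1e15:
        return "dramatic quantum advantage"
    if speedup >= 1e6:
        return "substantial or significant quantum advantage"
    if speedup >= 1e3:
        return "minimal quantum advantage"
    return "fractional quantum advantage (stepping stone)"

print(advantage_level(100))   # a 100X stepping stone, short of minimal
print(advantage_level(5e4))   # minimal achieved, short of substantial
```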

IBM’s software roadmap is silent on this matter as well.

# Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?

There is no hint in the roadmap as to whether IBM will even come close to achieving *minimal quantum advantage* (1,000X) over a comparable classical solution by the end of the roadmap — 1,121-qubit Condor in 2023, or whether even minimal quantum advantage is relegated to the “*and beyond*” stages after Condor.

IBM may — or may not — manage to achieve some degree of *fractional minimal quantum advantage* by the end of the roadmap — maybe in the range of 2X, 10X, or 100X advantage over a comparable classical solution. But even that is speculation on my part — IBM is silent on this matter in their quantum hardware roadmap.

# How many bits can Shor’s algorithm handle at each stage of the roadmap?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: How many bits can Shor’s algorithm handle at each stage of the roadmap?

Although ultimately people are intensely curious about cracking 2048-bit and 4096-bit public encryption keys, in the near term much smaller milestones are of interest:

- 5-bit.
- 6-bit.
- 7-bit.
- 8-bit.
- 10-bit.
- 12-bit.
- 16-bit.
- 20-bit.
- 24-bit.
- 32-bit.

And some indications about expectations for 64, 128, 256, 512, and 1024-bit and beyond.
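For a rough sense of scale, here is a hedged Python sketch of the logical qubit counts implied by two published circuit constructions for Shor’s algorithm: Beauregard’s 2n+3-qubit construction and a roughly 4n+2 textbook-style construction. These are logical qubits only; quantum error correction overhead would multiply these numbers enormously.

```python
# Approximate logical qubits needed to factor an n-bit number with Shor's
# algorithm. These counts ignore error-correction overhead entirely.

def shor_qubits_beauregard(n_bits: int) -> int:
    """Beauregard (2003): 2n + 3 qubits, at the cost of a much deeper circuit."""
    return 2 * n_bits + 3

def shor_qubits_textbook(n_bits: int) -> int:
    """Textbook-style construction: roughly 4n + 2 qubits."""
    return 4 * n_bits + 2

for bits in (8, 32, 2048):
    print(f"{bits}-bit key: {shor_qubits_beauregard(bits)} to "
          f"{shor_qubits_textbook(bits)} logical qubits")
```

Even the modest 8-bit milestone implies a few dozen logical qubits, consistent with my point below that full Shor on even small inputs remains beyond current hardware.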

As far as I can tell, no vendor is providing this information. That may be primarily because Shor’s algorithm relies on quantum Fourier transform and quantum phase estimation, which are impractical today and for the indefinite future. Still, it would be nice to know when it will become practical, so we can get a handle on how capable machines will be at each milestone.

And all of this raises the question of when a pure, clean, complete implementation of Shor’s algorithm will be available *at all* on any machine. Most so-called implementations utilize various tricks and shortcuts to approximate Shor’s algorithm, rather than the full algorithm in all of its glory.

So, I’d like to see an indication of when IBM’s future hardware is capable of full support for Shor’s algorithm at *any* input size, even four bits, but preferably 6–8 bits. That will be a *major milestone* since it will require 24 to 32 qubits and a fairly deep circuit, well beyond the capabilities of current hardware.

# What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?

Looking at the IBM hardware roadmap, one is left wondering what applications or types of applications might be enabled at each successive hardware milestone. IBM’s hardware roadmap is silent in this regard.

Support for applications depends critically on supporting enough data to enable production-scale applications. That’s step one for application support: are there enough qubits?

But qubits alone are not enough. Qubit fidelity is also critical. Connectivity is critical.

Some categories of applications will depend on support for quantum Fourier transform and quantum phase estimation, so fine granularity of phase and probability amplitude will be critical.

Since applications are software, you might expect this to be more relevant to IBM’s software ecosystem roadmap, but what we are really talking about here is what the hardware enables or limits, so this information should be in the hardware roadmap.

Unfortunately, maybe IBM doesn’t have access to this information — only algorithm designers and application developers have deep knowledge of their requirements for hardware. That’s fine, but it still leaves the burden on IBM to take the initiative to solicit and collect the relevant information from algorithm designers and application developers.

In truth, maybe we’re not even at the stage where either IBM or algorithm designers and application developers are ready to start thinking about their needs and requirements two to seven years from now. But personally, I think we are at the stage where IBM should be taking the lead and letting people know what information they, IBM, need to design and develop more-capable hardware.

After all, applications are the only reason for the hardware to exist at all. Hardware for the sake of hardware alone, with no mention of applications is… pointless.

# Not clear whether or when quantum networking will be supported

It’s not clear whether or when IBM will be supporting quantum networking — supporting quantum interactions between quantum computers which are in separate physical locations, separated by much more than a few feet.

No crisp, explicit roadmap milestones have been specified.

All IBM has provided is the vague statement: “*Ultimately, we envision a future where quantum interconnects link dilution refrigerators each holding a million qubits like the intranet links supercomputing processors, creating a massively parallel quantum computer capable of changing the world.*” It is not clear whether that refers to quantum connections over much more than a few feet, or simply to multiprocessing and local area networks within a data center or a single building, where the environment between the connected machines can be carefully controlled.

In any case, specific milestones detailing support for various functional capabilities are needed, including number of machines, physical distance, performance, capacity, and function at the level of quantum algorithms.

Not to mention an enhanced programming model for interacting quantum circuits.

# Quantum is still a research program at IBM — and much more research is required

I give IBM a lot of credit for the amazing amount of research that they have tackled and accomplished, but so much more research is still required. Much, much more.

And to be clear, IBM Quantum is a *research program*, not a commercial product business unit, so of course their main focus has been, is, and will continue to be… research.

I’d expect IBM to continue in this research mode for a minimum of ten years, and likely upwards of fifteen to twenty years. And even then, quantum will be an ongoing research area indefinitely.

# Quantum computers are still a laboratory curiosity, not a commercial product

As I mentioned earlier, IBM’s quantum efforts, as impressive as they are, are still focused on research. They do not yet have a *commercial product* and the roadmap is silent as to when their first commercial products — suitable for production deployment of production-scale quantum applications — will debut. As such, I classify their quantum computing efforts as still being a *laboratory curiosity*.

For more on my ruminations about quantum computing as a laboratory curiosity, read my paper:

*When Will Quantum Computing Advance Beyond Mere Laboratory Curiosity?*
- https://jackkrupansky.medium.com/when-will-quantum-computing-advance-beyond-mere-laboratory-curiosity-2e1b88329136

# When will IBM offer production-scale quantum computing as a commercial product (or service)?

Unknown.

That may be the biggest hole in their otherwise impressive roadmap.

Personally, I believe that is at least 5–7 years down the road.

Everything that IBM is currently offering in quantum computing is suitable only for evaluation and experimentation, but definitely not for production deployment of applications.

# Quantum Ready? For Whom? For What?

IBM (and every other quantum vendor) wants everybody to be *Quantum Ready*, sitting and waiting for the eventual and inevitable arrival of quantum computers capable of supporting production-scale practical quantum applications, but I personally feel that so much of this is very premature. Actually, *all* of it is premature. Research is fine, but expectations of imminent deployment of production applications are not fine at all.

Much basic research in quantum computing is still needed. Very much research. Maybe another 5–7 years, or even 10–15 years. And that’s just to get to the starting line.

Much research is needed for both hardware (qubits) and software (algorithms).

We desperately need a much higher-level programming model for quantum computing, as well as a much richer collection of algorithmic building blocks for designing algorithms and developing applications. That’s work to be done in academia and research labs, not commercial operations focused on products and production.

It’s pointless to have thousands, tens of thousands, or even millions of people be quantum ready when they won’t be able to do anything productive for at least another five to 15 years — and by then the technology will have evolved so dramatically that much of their knowledge will be obsolete anyway.

Some of these comments relate more to IBM’s software roadmap:

*IBM’s roadmap for building an open quantum software ecosystem*
- February 4, 2021
- Karl Wehden, Ismael Faro, and Jay Gambetta
- https://www.ibm.com/blogs/research/2021/02/quantum-development-roadmap/

IBM may indeed have preliminary results for their software ecosystem in two to five years, but those would be preliminary results, not seasoned results. Add another three to five years before the dust settles and both the hardware and software ecosystem, including algorithms, are themselves *Quantum Ready* for people to begin using them productively.

And even for those preliminary software results which might be available in two to five years, it’s not possible to be training people today to be Quantum Ready (from a software perspective) for software features which do not yet exist.

# Quantum Hardware Ready is needed

Back to the hardware roadmap, it should have more clear indications as to what the hardware itself is ready for in terms of what expectations algorithm designers and application developers should have — what is the hardware ready for in terms of what types of algorithms and applications can be supported at each stage and each milestone.

*Quantum Ready* users are not the problem or limiting factor at present. Rather, the lack of *Quantum Hardware Ready* is the critical limiting factor. IBM needs to focus more on getting the hardware ready, not blaming users for not being trained for hardware which doesn’t exist and can’t even be simulated.

# Need for higher-quality (and higher-capacity) simulators

Higher-quality (and higher-capacity) simulators which more closely match the expected hardware features of future milestones could help to fill the gap, helping users be Quantum Hardware Ready while waiting for the next few hardware development milestones. Granted, we can’t simulate more than about 50 qubits, but we can simulate 40-qubit algorithms which would, in theory, run on future hardware with higher-fidelity qubits, not simply more of them.

And maybe, with sufficient effort and resources, we actually can simulate more than 50 qubits. Maybe even 55 qubits. Or even 60. It would be expensive, but it would have real value, especially when comparable real hardware is not yet available.
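The expense is easy to quantify. A full state-vector simulation stores 2^n complex amplitudes, so memory doubles with every added qubit; this simple Python calculation shows why roughly 50 qubits is the practical ceiling:

```python
# Memory needed for a full state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (40, 50, 55, 60):
    print(f"{n} qubits: {statevector_bytes(n) / 2**40:,.0f} TiB")
# 40 qubits: 16 TiB -- large, but feasible on a big cluster.
# 50 qubits: 16 PiB; 55 qubits: 512 PiB; 60 qubits: 16 EiB -- the latter
# far beyond the aggregate memory of any machine today.
```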

Simulators also provide debugging capabilities that let algorithm designers and application developers fix bugs more easily than trial and error on real hardware.

A roadmap for classical quantum simulators, including debugging capabilities, is needed, both for increasing the number of qubits and for more closely matching the qubit fidelity of each machine in the roadmap.

# Need for debugging capabilities

Execution of a quantum circuit is completely opaque with respect to any intermediate results — the only values which can be observed are the final, measured results, at which point the rich quantum state of even the measured qubits has been collapsed to simple classical 0’s and 1’s. This would be downright unacceptable for developing classical software — rich debugging capabilities are needed.

Unfortunately, the opaqueness and unobservability of quantum state on a real quantum computer curtails any significant debugging capabilities.

That’s where classical quantum simulators can play a big role. They can easily allow all details of the rich quantum state to be observed, captured, and analyzed. Even the simple classical binary 0’s and 1’s of measured qubits could be compared and contrasted with the rich quantum state of the qubits before they are measured. Of course, this would require developing classical software to add such debugging capabilities to the raw classical quantum simulators, but that’s a mere matter of software development.
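As a minimal sketch of the idea, here is a plain NumPy simulation of a two-qubit Bell state. A simulator can expose the full amplitude vector before measurement, while hardware can only return sampled bit strings:

```python
import numpy as np

# Gate matrices for a two-qubit Bell-state circuit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = CNOT @ np.kron(H, np.eye(2)) @ state    # H on qubit 0, then CNOT

# A simulator can inspect the full quantum state directly:
print(state)                                    # amplitudes 1/sqrt(2), 0, 0, 1/sqrt(2)

# Hardware only yields sampled classical bit strings (here, simulated shots):
probs = np.abs(state) ** 2
print(np.random.default_rng(0).choice(4, p=probs, size=5))
```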

A full suite of sophisticated debugging capabilities is needed, much as it is for classical software.

It could be argued that simple quantum circuits don’t need such sophisticated debugging capabilities, but it’s clear that larger and more complex quantum circuits have a much more compelling need for sophisticated debugging capabilities, especially as less-elite technical staff enter the picture.

The point here is that rich debugging capabilities are needed, but that neither the IBM hardware roadmap nor their software roadmap even mention such capabilities let alone detail milestones for the development of such capabilities.

Debugging may superficially seem to be more of a software tool issue more relevant to the software roadmap, but I would argue that debugging is inherently much closer to the raw machine and directly relates to how the raw machine is used. In fact, absent decent debugging capabilities, it may not be possible for many people to effectively use the machine at all. And this need to be close to the hardware only gets more intense as the hardware evolves and gets more sophisticated and more difficult to use without advanced debugging capabilities.

# Need for testing capabilities

Testing of software is essential, but typically relegated to being a secondary consideration at best. Sophisticated testing capabilities are needed for quantum circuits.

There are many forms of testing, including but not limited to:

- Unit testing.
- Module testing.
- System testing.
- Performance testing.
- Logic analysis.
- Coverage analysis.
- Shot count and circuit repetitions — analyzing results for multiple executions of the same circuit.
- Calibration.
- Diagnostics.
- Hardware fault detection.

It could be argued that simple quantum circuits don’t need sophisticated testing capabilities, but it’s clear that larger and more complex quantum circuits have a much more compelling need for sophisticated testing capabilities, especially as less-elite technical staff enter the picture.

The point here is that rich testing capabilities are needed, but that neither the IBM hardware roadmap nor their software roadmap even mention such capabilities let alone detail milestones for the development of such capabilities.

Testing may superficially seem to be more of a software tool issue more relevant to the software roadmap, but I would argue that testing is inherently much closer to the raw machine and directly relates to how the raw machine is used. In fact, absent decent testing capabilities, it may not be possible for many people to effectively use the machine at all. And this need to be close to the hardware only gets more intense as the hardware evolves and gets more sophisticated and more difficult to use without advanced testing capabilities.

# Need for dramatic improvements in documentation and technical specifications at each milestone

Documentation is always a problematic issue for any technology. IBM does have a fair amount of documentation (the Qiskit textbook, blogs, and papers), but the quality, coverage, and coherence are spotty and inconsistent. Dramatic improvement is needed.

I wouldn’t expect a dramatic improvement instantly, overnight, but I would expect the roadmap to speak to at least incremental improvement for each milestone.

I’ve already written about some of the improvements that I would like to see in general:

*Framework for Principles of Operation for a Quantum Computer*
- https://jackkrupansky.medium.com/framework-for-principles-of-operation-for-a-quantum-computer-652ead10bc48

That’s more about the details that an algorithm designer needs to know in terms of the *programming model*.

There is a short section in there about *Implementation Specification*, covering gory details beneath even what an algorithm designer strictly needs, although many algorithms will likely rely on that level of detail, particularly performance characteristics such as qubit fidelity.

To be clear, that section is not referring to documentation for IBM’s proposed software ecosystem, but limited to the nuts and bolts of the programming model — how to use the hardware from an algorithm perspective.

# Brief comments on IBM’s roadmap for building an open quantum software ecosystem

Although the main focus of this informal paper is IBM’s quantum hardware roadmap, I do have to acknowledge that IBM has a separate quantum software roadmap:

*IBM’s roadmap for building an open quantum software ecosystem*
- February 4, 2021
- Karl Wehden, Ismael Faro, and Jay Gambetta
- https://www.ibm.com/blogs/research/2021/02/quantum-development-roadmap/

I may post more extensive comments in a separate informal paper, but at least a few comments are in order here, mostly from the context of how quantum algorithm designers and quantum application developers view and use the hardware, through the lens of a programming model, algorithmic building blocks, and programming languages. I’m not so concerned about support software and tools, but primarily how designers and developers think about the hardware itself.

So, here are a few relevant comments:

- IBM’s software roadmap is too brief, too terse, and too vague to make many definitive comments about it.
- It sort of hints at a higher-level programming model, but in a fragmentary manner, not fully integrated, and doesn’t even use the term *programming model* at all.
- It does indeed have some interesting fragmentary thoughts, but just too little in terms of a coherent overarching semantic model. Some pieces of the puzzle are there, but not the big picture that puts it all together.
- I heartily endorse open source software, but there is a wide range of variations on support for open source software. Will IBM cede 100% of control to outside actors or maintain 100% control but simply allow user submissions? Who ultimately has veto authority over the direction of the software — the community or IBM?
- I heartily endorse ecosystems as well, but that can be easier said than done.
- I nominally support their three levels (they call them segments) of kernel, algorithms, and models, but I would add two more levels: custom applications, and then packaged solutions (generalized applications). From the perspective of this (my) paper, I’m focused on the programming model(s) to be used by algorithm developers and application developers.
- I personally use the term *algorithmic building blocks*, which may or may not be compatible with IBM’s notion of *modules*. My algorithmic building blocks would apply primarily to algorithm designers, but also to application developers (custom and packaged) and application framework developers as well.
- IBM also refers to *application-specific modules for natural science, optimization, machine learning, and finance*, which I do endorse, but I also personally place attention on general-purpose algorithmic building blocks which can be used across application domains. Personally, I would substitute *domain-specific* for *application-specific*.
- I personally use the term *application framework*, which may be a match for IBM’s concept of a model.
- In their visual diagram, IBM refers to *Enterprise Clients*, but that seems to refer to *enterprise developers*.
- I appreciate IBM’s commitment to a *frictionless development framework*, but it’s all a bit too vague for me to be very confident about what it will actually do in terms of specific semantics for algorithms and applications. Again, I’m not so interested in support services and tools as I am in the actual semantics of the programming model.
- IBM says “*where the hardware is no longer a concern to users or developers*”, but that’s a bit too vague. Does it mean they aren’t writing code at all? Or does it simply mean a machine-independent programming model? Or does it mean a higher-level programming model, such as what I have been proposing? Who knows! IBM needs to supply more detail.
- I’m all in favor of domain-specific *pre-built runtimes* — if I understand IBM’s vague description, they seem consistent with my own thoughts about *packaged solutions*, which allow the user to focus on preparing input data and parameters, and then processing output data, without even touching or viewing the actual quantum algorithms or application source code. That said, I worry a little that their use of *runtime* may imply significant application logic that invokes the runtime, rather than focusing the user on data and configuration parameters. I do see that the vast majority of users of quantum applications won’t even be writing *any code*, but how we get there is an open question. In any case, this paper of mine is focused on quantum algorithm designers and quantum application developers and how they see and use the hardware.
- Kernel-level code is interesting, but not so much to me. Maybe various algorithmic building blocks, such as quantum Fourier transform or SWAP networks, could be implemented at the kernel level, but ultimately all I really care about is the high-level interface that would be available to algorithm designers and application developers — the programming model, their view of the hardware. The last thing I want to see is algorithm designers and application developers working way down at the machine-specific kernel level.
- I heartily endorse *application-specific modules for natural science, optimization, machine learning, and finance* — at least at a conceptual level. Anything that enables users or application developers to perform computations at the application level without being burdened by details of either the hardware or quantum mechanics. All of that said, I can’t speak to whether I would approve of how IBM is approaching the design of these modules. Also, I am skeptical as to when the hardware will be sufficiently mature to support such modules at production scale.
- I nominally endorse *quantum model services for natural science, optimization, machine learning, and finance* — at least at a conceptual level. If I read the IBM graphic properly, such *model services* won’t be available until 2023 at the earliest, and possibly not until 2026. Even there, it’s not clear whether it’s simply that all of the lower-level capabilities will be in place to enable *model developers* to develop such application-specific models, or whether such models will then be ready for use by application developers.
- No mention of any dependencies on hardware advances, such as quantum error correction and logical qubits, improvements in qubit fidelity, or improvements in qubit connectivity.
- No mention of Quantum Volume or matching the size of algorithms to the required hardware.
- No sense of synchronizing the hardware roadmap and the software roadmap.
- No mention of networked applications or quantum networking.
- No mention of evolution towards vendor-neutral technical standards. The focus is clearly on IBM setting the standards for others to follow. That may not be so much a negative as simply a statement of how young and immature the sector remains.

Those are just a few of my thoughts. I may expand on this list in a separate informal paper focused on the software roadmap.

# Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap

It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “*and beyond*” stage beyond the 1,121-qubit Condor, the current end of the roadmap. That would account for them not being mentioned on their current roadmap. It’s possible. Is it likely? I simply couldn’t say. Only IBM knows with any certainty.

# Heads up for other quantum computing vendors — all of these comments apply to you as well!

My comments here are specifically directed at IBM and their quantum hardware roadmap (and their software roadmap to a limited degree), but many, most, if not virtually all of them apply to *any* vendor in quantum computing. Qubits are qubits. Qubit fidelity is qubit fidelity. Errors are errors. Error correction is error correction. Algorithms are algorithms. Applications are applications. Regardless of the vendor. So it behooves IBM’s competitors, and everyone’s partners, suppliers, and customers, to pay attention to my comments as well. And researchers in academia as well. Show your roadmaps, your milestones, and the details I have noted.

# Summary and conclusions

- Great that IBM has shared what they have for a roadmap.
- Disappointing that it took so long to get it out.
- More questions than answers.
- Much greater detail is needed.
- Full error correction is still far over the horizon.
- Evolution of qubit fidelity between milestones is unclear.
- Not very clear what developers will really have to work with at each milestone, especially in terms of coherence time, qubit fidelity, gate error rate, measurement error rate, and connectivity.
- Waiting to hear what will succeed Quantum Volume once more than 50 qubits can be used reliably in a deep algorithm.
- This is all still just a research program, a laboratory curiosity, not a commercial product (or service) suitable for production use for production-scale practical applications.
- Unclear how much more research will be required after the end of the current IBM hardware roadmap before quantum computing can transition to a commercial product suitable for production-scale practical quantum applications.
- Unclear what the timeframe will be for transition to a commercial product (or service.)
- No sense of when they might achieve The ENIAC Moment — first production-scale application.
- No sense of when they might achieve The FORTRAN Moment — easy for most organizations to develop quantum applications — without elite teams.
- Unclear whether IBM will achieve even minimal quantum advantage (1,000X classical solutions) by the end of their hardware roadmap (2023 with 1,121-qubit Condor) or whether we’ll have to await the “and beyond” stages after the end of the roadmap.
- It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “*and beyond*” stage beyond the 1,121-qubit Condor, the current end of the roadmap.
- Many, most, if not virtually all of my comments here apply to *any* vendor in quantum computing, including IBM’s competitors and everyone’s partners, suppliers, and customers as well. And researchers in academia as well. Show your roadmaps, your milestones, and the details I have noted.
- For now, we remain waiting for the next machine on the roadmap — the 127-qubit Eagle — in the coming six months, by the end of 2021, and the 433-qubit Osprey in 2022.
- Overall, we’re still in the early innings for quantum computing — and not close to being ready for prime time with production-scale practical applications — or even close to achieving any significant degree of quantum advantage, the purpose for even considering quantum computing.

For more of my writing: **List of My Papers on Quantum Computing**.