Thoughts on the IBM Quantum Hardware Roadmap

Jack Krupansky
37 min read · Jul 20, 2021

This informal paper documents my thoughts in recent months on IBM’s quantum hardware roadmap, which they published on September 15, 2020, as well as their quantum software development and ecosystem roadmap, published on February 4, 2021. This paper focuses primarily on the former — the hardware — although a future paper may delve into the software roadmap, and there is a brief section on it at the end of this paper.

Overall, we can all be grateful that IBM is giving us a view into their future, but unfortunately it raises more questions than it answers. The roadmap makes clear that we’re still in the early innings for quantum computing — not close to being ready for prime time with production-scale practical applications, or even close to achieving any significant degree of quantum advantage, the very purpose for even considering quantum computing.

Most significantly, it fails to provide any insight into the two most crucial questions about quantum computing:

  1. When will quantum computers support production-scale applications?
  2. When will quantum computers achieve quantum advantage (or quantum supremacy) for production-scale applications?

Topics covered in this paper:

  1. Positive highlights.
  2. Negative highlights.
  3. My own interests.
  4. The IBM roadmap itself.
  5. Graphic for the IBM quantum hardware roadmap.
  6. Earlier hint of a roadmap.
  7. I’m not so interested in support software and tools.
  8. Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
  9. Too brief — need more detail on each milestone.
  10. Limited transparency — I’m sure IBM has the desired detail in their internal plans.
  11. When will quantum error correction (QEC) be achieved?
  12. Need roadmap milestones for nines of qubit fidelity.
  13. Need roadmap milestones for qubit measurement fidelity.
  14. When might IBM get to near-perfect qubits?
  15. What will the actual functional transition milestones be on the path to logical qubits?
  16. Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  17. Will future machines support only logical qubits or will physical qubit circuits still be supported?
  18. What functional advantages might come from larger numbers of qubits?
  19. Need milestones for granularity of phase and probability amplitude.
  20. Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform.
  21. When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?
  22. When (or whether) will IBM support a higher-level programming model?
  23. When will larger algorithms — like using 40 qubits — become possible?
  24. When could a Quantum Volume of 2⁴⁰ be expected?
  25. When will IBM develop a replacement for the Quantum Volume metric?
  26. When will IBM need a replacement for the Quantum Volume metric?
  27. How large could algorithms be on a 1,121-qubit Condor?
  28. When might The ENIAC Moment be achieved?
  29. When might The FORTRAN Moment be achieved?
  30. When might quantum advantage be achieved?
  31. Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?
  32. How many bits can Shor’s algorithm handle at each stage of the roadmap?
  33. What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?
  34. Not clear whether or when quantum networking will be supported.
  35. Quantum is still a research program at IBM — and much more research is required.
  36. Quantum computers are still a laboratory curiosity, not a commercial product.
  37. When will IBM offer production-scale quantum computing as a commercial product (or service)?
  38. Quantum Ready? For Whom? For What?
  39. Quantum Hardware Ready is needed.
  40. Need for higher-quality (and higher-capacity) simulators.
  41. Need for debugging capabilities.
  42. Need for testing capabilities.
  43. Need for dramatic improvements in documentation and technical specifications at each milestone.
  44. Brief comments on IBM’s roadmap for building an open quantum software ecosystem.
  45. Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap.
  46. Heads up for other quantum computing vendors — all of these comments apply to you as well!
  47. Summary and conclusions.

Positive highlights

I appreciate:

  1. IBM’s transparency on putting out such a roadmap.
  2. The view into the future beyond the next year or two — including a path to 1,000 qubits, a million qubits, and beyond.
  3. The mention of error correction and logical qubits.
  4. The mention of linking quantum computers to create a massively parallel quantum computer.
  5. The prospect of achieving 100 qubits sometime this year.

Negative highlights

Unfortunately:

  1. Disappointing that it took so long to put the roadmap out. I first heard mention that they had a roadmap back in 2018.
  2. Raises more questions than it answers.
  3. Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
  4. Too brief — need more detail on each milestone.
  5. Needs more milestones. Intermediate stages and further stages. I certainly hope that they are working on more machines than listed over the next three to five years.
  6. Other than the raw number of qubits, roughly what can algorithm designers and application developers expect to see in the next two machines, the 127-qubit Eagle due by the end of 2021 and the 433-qubit Osprey in 2022? Obviously a lot can change over the next six to eighteen months, but some sort of expectations need to be set.
  7. Not clear when the quantum processing unit will become modular. When will there be support for more qubits than will fit on a single chip?
  8. Not clear when or whether multiple quantum computers can be directly connected at the quantum level. Comparable to a classical multiprocessor, either tightly-coupled or loosely-coupled.
  9. Not clear whether or when quantum networking will be supported.
  10. Silent as to when error correction and logical qubits will become available.
  11. No milestones given for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?
  12. Silent as to when qubit counts will begin to refer to logical qubits. I’m presuming that all qubit counts on the current roadmap are for physical qubits.
  13. Silent as to milestones for capacities of logical qubits, especially for reaching support for practical, production-scale applications.
  14. Silent as to any improvements in connectivity between qubits. Each milestone should indicate the degree of connectivity. Will SWAP networks still be required? Will full any-to-any connectivity be achieved by some milestone?
  15. Silent as to milestones for improvements to qubit and gate fidelity. No hints for nines of qubit fidelity at each milestone.
  16. Silent as to milestones for improvements to qubit measurement fidelity.
  17. Silent as to when near-perfect qubits might be achieved. High enough fidelity that many algorithms won’t need full quantum error correction.
  18. Silent as to milestones for granularity of phase and probability amplitude.
  19. Silent as to when quantum chemists (among others) will be able to rely on quantum phase estimation and quantum Fourier transform of various sizes. When will quantum phase estimation become practical?
  20. Silent as to the metric to replace quantum volume, which doesn’t work for more than about 50 qubits, since a quantum circuit using more than about 50 qubits can’t practically be simulated classically.
  21. Silent as to the stage at which quantum volume exceeds the number of qubits which can be practically simulated on a classical computer.
  22. Silent as to when larger algorithms — like using 40 qubits — will become possible. When could a Quantum Volume of 2⁴⁰ be expected?
  23. Silent as to how large algorithms could be on a 1,121-qubit Condor. What equivalent of Quantum Volume — number of qubits and depth of circuit — could be expected?
  24. Silent as to when quantum advantage might be expected to be achieved — for any real, production-scale, practical application. Should we presume that means that IBM doesn’t expect quantum advantage until some time after the end of the roadmap?
  25. Silent as to what applications or types of applications might be enabled in terms of support for production-scale data at each milestone.
  26. Silent on the roadmap for machine simulators, including maximum qubit count which can be simulated at each milestone. Silent as to where they think the ultimate wall is for the maximum number of qubits which can be simulated.
  27. Silent as to improvements in qubit coherence and circuit depth at each stage.
  28. Silent as to maximum circuit size and maximum circuit depth which can be supported at each stage.
  29. Silent as to how far they can go with NISQ and which machines might be post-NISQ.
  30. Silent as to when fault-tolerant machines will become available.
  31. Silent as to milestones for various intra-circuit hybrid quantum/classical programming capabilities.
  32. Open question: Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  33. Open question: At some stage, will future machines support only logical qubits or will physical qubit circuits still be supported?
  34. Open question: What will be the smallest machine supporting logical qubit circuits?
  35. Silent as to debugging capabilities.
  36. Silent as to testing capabilities.
  37. It is quite clear that quantum computing is still a research program at IBM, not a commercial product suitable for production use.
  38. Silent as to when quantum computing might transition from mere laboratory curiosity to front-line commercial product suitable for production-scale use cases.
  39. Silent as to how much additional research, beyond the end of the current roadmap, may be necessary before a transition to a commercial product.
  40. Silent as to improvements in documentation and technical specifications at each milestone.

My own interests

I wouldn’t necessarily expect IBM to put these milestones in its own roadmap, but they interest me nonetheless:

  1. When might quantum advantage be achieved — for any real, production-scale, practical application? For minimal quantum advantage (e.g., 2X, 10X, 100X), significant quantum advantage (e.g., 1,000X to 1,000,000X), and dramatic quantum advantage (one quadrillion X)?
  2. How close will each stage come to full quantum advantage? What fractional quantum advantage is achieved at each stage?
  3. What applications might achieve quantum advantage at each stage?
  4. What applications will be supported at each stage which weren’t feasible at earlier stages?
  5. Each successive stage should have some emblematic algorithm which utilizes the new capabilities of that stage, such as more qubits, deeper circuit depth, not just running the same old algorithms with the same number of qubits and circuit depth as for earlier, smaller machines.
  6. What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?
  7. Is there any reason to believe that there might be a better qubit technology (alternative to superconducting transmon qubits) down the road, or any reason to believe that no better qubit technology is needed? Does IBM anticipate that there might be a dramatic technology transition at some stage, maybe five, ten, or more years down the road?
  8. Does IBM anticipate that they might actually support more than one qubit technology at some stage? Like, trapped ion?
  9. When (or whether) IBM will support a higher-level programming model with higher-level algorithmic building blocks that make it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states.
  10. When might The ENIAC Moment be achieved? First production-scale application.
  11. When might The FORTRAN Moment be achieved? Higher-level programming model which makes it easy for most organizations to develop quantum applications — without elite teams.
  12. How many bits can Shor’s algorithm handle at each stage of the roadmap?
  13. Need for a broad set of benchmark tests to evaluate performance, capacity, and precision of various algorithmic building blocks, such as phase estimation, along with target benchmark results for each hardware milestone.
  14. Milestones for optimizing various algorithmic building blocks, such as phase estimation, based on hardware improvements at each stage.
  15. The maximum size of algorithms which can correctly run on the physical hardware at each milestone but can no longer be classically simulated. Number of qubits and circuit depth. Maybe several thresholds for the fraction of correct executions. For now, this could parallel projections of log2(Quantum Volume) and estimate when log2(QV) exceeds the maximum classical quantum simulator capacity.

The IBM roadmap itself

The IBM quantum hardware roadmap can be found here:

The IBM quantum software development and ecosystem roadmap can be found here:

Graphic for the IBM quantum hardware roadmap

I’m a text-only guy, so I won’t reproduce the graphic for the roadmap, but you can find it here — look for the blue diamonds:

Earlier hint of a roadmap

I haven’t been able to track down the original citation, but I believe it was sometime in 2018 that IBM publicly stated that quantum error correction was on their roadmap. So, that was a vague reference to a purported roadmap, but no actual roadmap was available to the public, until 2020.

I’m not so interested in support software and tools

Support software and tools are obviously important, but I’m less concerned about them in this paper, which is more focused on hardware and the programming model for algorithms and applications.

Too short — need more detail for longer-term aims, beyond 2023, just two years from now

In my opinion, the roadmap needs milestones for:

  1. 3 years.
  2. 5 years.
  3. 7 years.
  4. 10 years.
  5. 12 years.
  6. 15 years.
  7. 20 years.
  8. 25 years. Where is the technology really headed?

Too brief — need more detail on each milestone

Each milestone needs more than just a terse phrase for the key advancement, the qubit count, and the code name. I’m not looking for precise detail, especially years out, but at least rough targets, even if nothing more than rough percentage improvements expected at each stage. Graphs with trend lines would be appreciated. At a minimum, each milestone should give rough expectations for:

  1. Qubit fidelity.
  2. Qubit lattice layout.
  3. Qubit connectivity.
  4. Gate cycle time.
  5. Qubit coherence.
  6. Maximum circuit depth.
  7. Maximum circuit size.
  8. Maximum circuit executions per second.

Limited transparency — I’m sure IBM has the desired detail in their internal plans

It’s a little baffling that the IBM hardware roadmap has so little technical detail. I’m sure that their own internal plans and roadmaps have much of the level of detail that I suggest in this paper. Why exactly they refrain from disclosing that level of detail is unclear.

Actually, part of the motivation is very clear, from past history and experience with IBM — this is just the way IBM works, always has, and probably always will. As IBM explicitly says in their software roadmap, “As scientists, it’s not an easy decision to go public with such a transparent roadmap; we prefer to talk about our achievements, not our plans.”

I would give IBM a letter grade of C-minus on transparency — and that’s being generous. And maybe also only because I hope they will take some of my criticism to heart and dramatically increase their transparency on roadmap plans for hardware and evolution of the programming model.

When will quantum error correction (QEC) be achieved?

The IBM roadmap graphic does have quantum error correction listed as a key achievement for the “and beyond” stage, sometime beyond the 1,121-qubit Condor processor planned for 2023. I would have hoped to see at least some progress sooner, and some highlighting of milestones on the path to full quantum error correction and full support for error-free logical qubits.

The text of the roadmap has vague statements such as:

  1. … as we scale up the number of physical qubits, we will also be able to explore how they’ll work together as error-corrected logical qubits — every processor we design has fault tolerance considerations taken into account.
  2. We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices…

That latter statement sounds promising, but the graphic doesn’t list error correction until the vague, nebulous, and unspecified “and beyond” stage after Condor, not for Condor itself. Is the graphic wrong — was quantum error correction supposed to be listed under Condor’s key achievements? Unknown, but interesting speculation.

Need roadmap milestones for nines of qubit fidelity

There is no mention in the IBM roadmap of how many nines of qubit fidelity will be achieved and when and in what milestones.

Qubit fidelity includes:

  1. Coherence time.
  2. Gate errors. Both single-qubit and two-qubit.
  3. Measurement errors.

All three can and should be detailed separately, but an overall metric for qubit fidelity is needed as well.

Minimal milestones in nines of overall qubit fidelity:

  1. Two nines — 99%.
  2. Three nines — 99.9%.
  3. Four nines — 99.99%.
  4. Five nines — 99.999%.
  5. Six nines — 99.9999%.
  6. Whether IBM has intentions or plans for more than six nines of qubit fidelity should be specified. Seven, eight, nine, and higher nines of qubit fidelity would be great, but will likely be out of reach in the next two to four years.
  7. What maximum qubit fidelity, short of quantum error correction, could be achieved in the longer run, beyond the published roadmap, should also be specified.
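
Just to illustrate why each additional nine matters so much, here is a rough back-of-the-envelope sketch (my own, not anything from IBM’s roadmap) relating nines of overall per-operation fidelity to the approximate number of gate operations a circuit can contain before the odds of a correct result drop below even:

    import math

    def max_operations(nines, target_success=0.5):
        # Largest operation count N such that fidelity**N stays at or above target_success
        fidelity = 1.0 - 10.0 ** (-nines)   # e.g., 3 nines -> 0.999
        return int(math.log(target_success) / math.log(fidelity))

    for nines in (2, 3, 4, 5, 6):
        print(f"{nines} nines -> roughly {max_operations(nines):,} operations")

Roughly speaking, each additional nine buys about a factor of ten in usable circuit size, which is exactly why I want to see nines of fidelity called out at each milestone.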

For more on nines of qubit fidelity, see my own informal paper:

Need roadmap milestones for qubit measurement fidelity

I didn’t realize this until recently, but simple measurement of qubits to get the results of a quantum computation is a very error-prone process. So even if qubit coherence is increased and gate errors are reduced, there are still measurement errors to deal with.

In fact, measurement errors may be more difficult to cure. After all, measurement is the transition from the quantum world to the classical world.

At least on Google’s Sycamore quantum processor, the measurement error rate is significantly greater than the gate error rate, although IBM’s qubit measurement error rate, and how it compares to coherence and gate error rates, is not clear from public information.

I’m confident that IBM will be making improvements on the qubit measurement front, but we need to see qubit measurement fidelity (nines) shown on the roadmap for each hardware milestone.

Alternatively, if measurement fidelity is folded into an overall composite qubit fidelity metric, that may be sufficient, although having the various qubit fidelity metrics disaggregated would still be helpful and even better for some algorithms and applications.
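
As a simple illustration of why this matters (my own arithmetic, assuming independent per-qubit measurement errors), even a modest per-qubit measurement error compounds quickly across a result register:

    # Probability that all n qubits of a result register read out correctly,
    # given per-qubit measurement fidelity f (assuming independent errors).
    for f in (0.97, 0.99, 0.999):
        for n in (5, 20, 40):
            print(f"measurement fidelity {f}, {n} qubits -> P(all correct) = {f ** n:.3f}")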

When might IBM get to near-perfect qubits?

Although quantum error correction is the long-term goal, near-perfect qubits are both a milestone along the way and an independent goal in their own right: qubits with high enough fidelity that they support implementation of quantum error correction, and that they enable elite, highly-motivated technical teams to implement practical applications even before quantum error correction is available and supports enough logical qubits to enable practical applications.

IBM has made no mention of near-perfect qubits, nor a roadmap towards them.

What will the actual functional transition milestones be on the path to logical qubits?

IBM hasn’t given any specific milestones for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?

What exactly will algorithm designers actually be able to do at each functional milestone?

Or will it actually be an all-at-once transition from nothing to perfection?

At a minimum, we need to see milestones based on the number of logical qubits.

Will the number of physical qubits per logical qubit be stable across all of the milestones? Or, will the number of physical qubits per logical qubit decline over time as qubit fidelity improves? What might the curve of logical qubit capacity look like? Will it be linear, shallow, steep, an exponential curve, or what?

We should have target dates for 5, 8, 10, 12, 16, 20, 24, 28, 32, 40, 48, 56, 64, 72, 80, 96, 128, 256, 512, and 1024 logical qubits — as a starter. Obviously 2K, 4K, 8K, 16K, 32K logical qubits and beyond are needed as well but equally obviously will take significantly longer.
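
To make the physical-to-logical question concrete, here is a rough rule-of-thumb sketch (my own, using commonly cited surface-code approximations, so treat the numbers as purely illustrative): a distance-d surface code uses on the order of 2d² physical qubits per logical qubit, and better physical qubits allow a smaller distance, and thus fewer physical qubits, for the same logical error target.

    def distance_needed(p, target=1e-9, p_threshold=0.01):
        # Commonly cited approximation: logical error rate per round is roughly
        # 0.1 * (p / p_threshold) ** ((d + 1) / 2) for a distance-d surface code.
        d = 3
        while 0.1 * (p / p_threshold) ** ((d + 1) / 2) > target:
            d += 2   # surface-code distances are odd
        return d

    for p in (1e-3, 1e-4, 1e-5):
        d = distance_needed(p)
        print(f"physical error rate {p}: distance {d}, "
              f"~{2 * d * d} physical qubits per logical qubit")

If numbers anything like these hold up, improving physical qubit fidelity by a factor of ten could shrink the physical-to-logical overhead severalfold, which is why the roadmap should show both curves together.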

Will there be any residual error for logical qubits or will they be as perfect as classical bits?

It sure would be nice if logical qubits are as (seemingly) perfect and error-free as classical bits are, but I suspect that there will be some tiny residual error. IBM needs to set expectations in their roadmap.

Some possibilities:

  1. Six nines — one error in a million operations.
  2. Nine nines — one error in a billion operations.
  3. Twelve nines — one error in a trillion operations.
  4. Fifteen nines — one error in a quadrillion operations.

Also, does IBM expect any residual error to evolve over time? Might there be some still-significant residual error in the early versions of logical qubits, but then some transition to error-free or very low error-rate in some subsequent stage?

Might there be some significant variance between machines, even those produced at the same stage? For example, might 2K and 8K processors produced at the same stage have significantly different residual errors? Might a machine with fewer logical qubits have lower residual errors?

Might differences in residual errors be due to variations in the number of physical qubits per logical qubit?

Will the user, the algorithm designer, or the application developer have any control over the residual error, such as configuring the number of physical qubits per logical qubit?

Will future machines support only logical qubits or will physical qubit circuits still be supported?

IBM has not indicated whether, once logical qubits become available, future machines will support only logical qubits or will still support physical qubit circuits.

Also, can a single application or single quantum circuit support both logical qubits and physical qubits, or only one or the other?

Or can at least a mix of circuit types be used in a single application even if an individual circuit is all-logical or all-physical?

What functional advantages might come from larger numbers of qubits?

What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?

What consequences, such as applications which are enabled, could come from 100 qubits? Or 256, or 512, or 1,000?

It would be nice for IBM to annotate the milestones in terms of types of applications which might be enabled — and able to achieve dramatic quantum advantage for that application category.

Need milestones for granularity of phase and probability amplitude

Quantum computational chemistry is an oft-touted application for quantum computers. Variational methods are currently being used as a stopgap measure, but ultimately quantum phase estimation (QPE) and quantum Fourier transform (QFT) are needed to achieve both precision of results and dramatic quantum advantage for performance, and both are critically dependent on granularity of the phase portion of quantum state. Very fine granularity for phase is needed. So, the roadmap should detail milestones for improvement of granularity of phase.

And granularity of probability amplitudes is in the same boat. I presume that phase and probability amplitudes will have roughly comparable gradations — although vendors have been silent on this matter.

In any case, the roadmap should detail milestones for improvement of granularity of both phase and probability amplitude.
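
To show why granularity matters (a rough sketch of my own, based on the standard textbook QFT construction): an n-bit quantum Fourier transform, and hence n-bit quantum phase estimation, requires controlled phase rotations as fine as 2π/2ⁿ, so the hardware’s phase granularity directly caps the usable precision.

    import math

    # Finest controlled phase rotation needed by an n-bit QFT / QPE circuit.
    for n in (8, 16, 24, 32, 40):
        finest = 2 * math.pi / 2 ** n
        print(f"{n}-bit QFT/QPE: finest rotation = {finest:.3e} radians "
              f"({2 ** n:,} distinct phase gradations)")

In practice the smallest rotations may be approximated or simply dropped, but that trades away exactly the precision that quantum chemists need.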

For more detail on the issues related to granularity of phase and probability amplitudes, see my paper:

Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform

The roadmap should indicate the timeframes in which both quantum phase estimation (QPE) and quantum Fourier transform (QFT) will become practical, indicating the size of QPE and QFT which will be supported in various timeframes and at various milestones.

In particular, I’m interested in these size milestones:

  1. 4-bit.
  2. 8-bit.
  3. 12-bit.
  4. 16-bit.
  5. 20-bit.
  6. 24-bit.
  7. 32-bit.
  8. 40-bit.
  9. 48-bit.
  10. 56-bit.
  11. 64-bit.
  12. 80-bit.
  13. 96-bit.
  14. 128-bit.
  15. 192-bit.
  16. 256-bit.

And some indications about expectations for 512, 1024, and 2048-bit and beyond.

When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?

The roadmap should make it very clear when quantum chemists (among others) can begin relying on quantum phase estimation and quantum Fourier transform. Variational methods are a great stopgap measure, but quantum chemists (among others) need greater precision of results and the true, dramatic quantum advantage of quantum computing.

When (or whether) will IBM support a higher-level programming model?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: when (or whether) will IBM support a higher-level programming model with higher-level algorithmic building blocks that make it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states?

When might The ENIAC Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage would an elite team be the first to develop and test (even if not deploy into production) the first production-scale practical application? What I call The ENIAC Moment.

And just to be clear, I’m referring to real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

This will be a very important milestone for all of quantum computing.

When might The FORTRAN Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage will it finally be easy for most organizations to develop quantum applications — without elite teams? What I call The FORTRAN Moment.

This presumes a much higher-level programming model, higher-level algorithmic building blocks, and some sort of high-level quantum programming language, as well as full support for quantum error correction (QEC) and error-free logical qubits. Some of that is beyond the scope of a hardware roadmap per se, but the point is when the hardware will be capable enough to support all of that.

And just to be clear, I’m referring to real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

This will be a very important milestone for all of quantum computing.

When will larger algorithms — like using 40 qubits — become possible?

Published quantum algorithms currently rarely utilize more than a mere 20 qubits. I’m eager to see larger algorithms, particularly:

  1. 24 qubits.
  2. 28 qubits.
  3. 32 qubits.
  4. 36 qubits.
  5. 40 qubits.
  6. 44 qubits.
  7. 48 qubits.
  8. 50 qubits.
  9. 56 qubits.
  10. 60 qubits.
  11. 64 qubits.
  12. And more.

But the big milestone for me will be to see real hardware capable of supporting 40-qubit algorithms.

That will still be small enough to fully simulate on a classical quantum simulator. So we could see simulation even before the hardware is available, but get confirmation when the hardware does become available.

In any case, I’d like to see each hardware milestone tagged with the algorithm size that is expected to be supported. Both number of qubits and maximum circuit depth.

And just to be clear, I’m referring to algorithms for real, practical applications, not contrived computer science laboratory experiments such as cross-entropy benchmarking.

For some examples of practical applications of quantum computing which are anticipated, see my paper:

When could a Quantum Volume of 2⁴⁰ be expected?

A presumption behind support for 40-qubit algorithms is that we need hardware with a Quantum Volume (QV) of at least 2⁴⁰ (one trillion).

I’d like to see hardware milestones tagged with expected Quantum Volume.

Right now, we can’t even tell if the 1,121-qubit Condor will support QV of 2⁴⁰. Ditto for the 433-qubit Osprey. And even for the 127-qubit Eagle.
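
Here is a very rough back-of-the-envelope heuristic (my own, and definitely not the actual Quantum Volume protocol, which requires running and classically verifying randomized circuits): a QV test circuit on m qubits has m layers of roughly m/2 two-qubit gates, so for m = 40 the whole circuit has to survive roughly 800 two-qubit gates and still clear the roughly 2/3 heavy-output threshold.

    m = 40
    gates = m * (m // 2)   # ~800 two-qubit gates in a 40-wide, 40-deep QV circuit
    for gate_fidelity in (0.999, 0.9995, 0.9999):
        circuit_fidelity = gate_fidelity ** gates
        verdict = "plausibly passes" if circuit_fidelity > 2 / 3 else "likely fails"
        print(f"two-qubit gate fidelity {gate_fidelity}: "
              f"circuit fidelity ~{circuit_fidelity:.2f} -> {verdict} QV 2^40")

By this crude model it takes somewhere between three and four nines of two-qubit gate fidelity, plus decent connectivity, before a QV of 2⁴⁰ is even plausible, which is precisely the kind of expectation I’d like IBM to set for Eagle, Osprey, and Condor.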

When will IBM develop a replacement for the Quantum Volume metric?

IBM’s Quantum Volume capacity metric will only work up to about 50 qubits since the metric requires a full classical simulation of the circuit and 50 qubits is roughly the limit for classical simulation of quantum circuits. A Quantum Volume of 2⁵⁰ would represent a quantum circuit with a depth of 50 quantum logic gates operating on 50 qubits and achieving acceptable results, which would require simulation of roughly one quadrillion quantum states.

Neither the roadmap nor any other public comments by IBM have given any hint of what metric might be used to replace Quantum Volume once their quantum computers are able to execute quantum circuits on 50 qubits for 50 gates and get acceptable results.
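
For a sense of where that limit comes from (simple arithmetic, nothing IBM-specific): a brute-force statevector simulation of n qubits must hold 2ⁿ complex amplitudes, at roughly 16 bytes each.

    for n in (30, 40, 50, 55):
        gib = (2 ** n * 16) / 2 ** 30   # complex128 amplitudes, in GiB
        print(f"{n} qubits -> about {gib:,.0f} GiB of memory for the statevector")

Somewhere around 45 to 50 qubits the memory requirement passes from large-supercomputer territory into outright impracticality, which is why Quantum Volume, as defined, runs out of road at roughly 50 qubits.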

For more on the nature of this 50-qubit limit, see my paper:

For IBM’s original paper introducing the Quantum Volume metric:

When will IBM need a replacement for the Quantum Volume metric?

IBM has not indicated at what stage on the roadmap they expect to be able to execute quantum circuits with acceptable results which can no longer be classically simulated, which is a key requirement for deriving the Quantum Volume metric.

My suspicion is that even IBM doesn’t expect to get to the 50-qubit and 50-gate depth limit of the Quantum Volume metric by the end of their current roadmap. They’ll certainly have enough qubits (and already do today), but qubit fidelity and gate error rates will continue to preclude quantum circuits with a depth of 50 and with acceptable results until some stage well after the end of their current roadmap.

Still, it sure would be nice if IBM could set expectations as to when this milestone might be achieved.

How large could algorithms be on a 1,121-qubit Condor?

What equivalent of Quantum Volume (QV) — number of qubits and depth of circuit — could be expected for the 1,121-qubit Condor processor? I say equivalent of QV because technically actual QV requires a full classical circuit simulation, which will not be possible much beyond 50 qubits (and may not be practical much beyond 40 qubits, or even 36–38 qubits).

How large (both qubits and circuit depth) can we expect algorithms to be on Condor?

Could a full 1,121 qubits be effectively used in a single algorithm? One would hope so, but I’d like to see an explicit statement.

After all, even the current 53-qubit and 65-qubit Hummingbird processors don’t have a Quantum Volume that comes close to using all, or even a simple majority, of the available qubits.

When might quantum advantage be achieved?

IBM’s roadmap simply doesn’t clue us in at all as to when they expect that quantum advantage might be achieved. Are we to conclude that they don’t expect it to be achieved until some time after the end of the roadmap?

For more on my own thoughts on quantum advantage, read my paper:

As well as my more recent paper on dramatic quantum advantage:

In that latter paper I suggest three levels of quantum advantage — and some reasonable stepping stones along the way. I’d like to know what expectations IBM might set for achieving each of those levels relative to their hardware milestones:

  1. Minimal quantum advantage. A 1,000X performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
  2. Substantial or significant quantum advantage. A 1,000,000X performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
  3. Dramatic quantum advantage. A one quadrillion X (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.

Granted, achieving such milestones will vary from application to application, but still, these are important milestones to track in terms of hardware performance.

IBM’s software roadmap is silent on this matter as well.

Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?

There is no hint in the roadmap as to whether IBM will even come close to achieving minimal quantum advantage (1,000X) over a comparable classical solution by the end of the roadmap (the 1,121-qubit Condor in 2023), or whether even minimal quantum advantage is relegated to the “and beyond” stages after Condor.

IBM may — or may not — manage to achieve some degree of fractional minimal quantum advantage by the end of the roadmap — maybe in the range of 2X, 10X, or 100X advantage over a comparable classical solution. But even that is speculation on my part — IBM is silent on this matter in their quantum hardware roadmap.

How many bits can Shor’s algorithm handle at each stage of the roadmap?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: How many bits can Shor’s algorithm handle at each stage of the roadmap?

Although ultimately people are intensely curious about cracking 2048 and 4096-bit public encryption keys, in the near term, much smaller milestones are of interest:

  1. 5-bit.
  2. 6-bit.
  3. 7-bit.
  4. 8-bit.
  5. 10-bit.
  6. 12-bit.
  7. 16-bit.
  8. 20-bit.
  9. 24-bit.
  10. 32-bit.

And some indications about expectations for 64, 128, 256, 512, and 1024-bit and beyond.

As far as I can tell, no vendor is providing this information. That may be primarily because Shor’s algorithm uses quantum Fourier transform and quantum phase estimation, which are impractical today and for the indefinite future. Still, it would be nice to know when it will be practical, to get a handle on how capable machines are at each milestone.

And all of this raises the question of when a pure, clean, complete implementation of Shor’s algorithm will be available at all on any machine. Most so-called implementations utilize various tricks and shortcuts to approximate Shor’s algorithm, rather than the full algorithm in all of its glory.

So, I’d like to see an indication of when IBM’s future hardware is capable of full support for Shor’s algorithm at any input size, even four bits, but preferably 6–8 bits. That will be a major milestone since it will require 24 to 32 qubits and a fairly deep circuit, well beyond the capabilities of current hardware.
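
For a rough sense of scale (my own hedged estimates, not vendor numbers): a commonly cited space-optimized circuit for Shor’s algorithm on an n-bit modulus uses roughly 2n + 3 qubits, while simpler textbook-style constructions use several times that, closer to the 4n figure behind my 24-to-32-qubit estimate above, and all of them need deep circuits.

    # Approximate logical qubit counts for Shor's algorithm on an n-bit modulus.
    # 2n + 3 is a commonly cited space-optimized figure; ~4n is a rough
    # textbook-style figure.  Circuit depth grows much faster than qubit count.
    for n in (5, 8, 16, 32, 2048):
        print(f"{n}-bit modulus: ~{2 * n + 3} qubits (space-optimized), "
              f"~{4 * n} qubits (textbook-style)")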

What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?

Looking at the IBM hardware roadmap, one is left wondering what applications or types of applications might be enabled at each successive hardware milestone. IBM’s hardware roadmap is silent in this regard.

Support for applications depends critically on supporting enough data to enable production-scale applications. That’s step one for application support: are there enough qubits?

But qubits alone are not enough. Qubit fidelity is also critical. Connectivity is critical.

Some categories of applications will depend on support for quantum Fourier transform and quantum phase estimation, so fine granularity of phase and probability amplitude will be critical.

Since applications are software, you might expect this to be more relevant to IBM’s software ecosystem roadmap, but what we are really talking about here is what the hardware enables or limits, so this information should be in the hardware roadmap.

Unfortunately, maybe IBM doesn’t have access to this information — only algorithm designers and application developers have the deep knowledge of what requirements they have for hardware. That’s fine, but that still leaves the burden on IBM to take the initiative to solicit and collect the relevant information from algorithm designers and application developers.

In truth, maybe we’re not even at the stage where either IBM or algorithm designers and application developers are ready to start thinking about their needs and requirements two to seven years from now, but personally I think we are at the stage where IBM should be taking the lead and letting people know what information they, IBM, need to design and develop more-capable hardware.

After all, applications are the only reason for the hardware to exist at all. Hardware for the sake of hardware alone, with no mention of applications, is… pointless.

Not clear whether or when quantum networking will be supported

It’s not clear whether or when IBM will be supporting quantum networking — supporting quantum interactions between quantum computers which are in separate physical locations, separated by much more than a few feet.

No crisp, explicit roadmap milestones have been specified.

All IBM has provided is the vague statement: “Ultimately, we envision a future where quantum interconnects link dilution refrigerators each holding a million qubits like the intranet links supercomputing processors, creating a massively parallel quantum computer capable of changing the world.” It is not clear whether that is meant to refer to quantum connections over much more than a few feet, or simply multiprocessing and local area networks within a data center or single building, where the environment between the connected machines can be carefully controlled.

In any case, specific milestones detailing support for various functional capabilities are needed, including the number of machines, physical distance, performance, capacity, and function at the level of quantum algorithms.

Not to mention an enhanced programming model for interacting quantum circuits.

Quantum is still a research program at IBM — and much more research is required

I give IBM a lot of credit for the amazing amount of research that they have tackled and accomplished, but so much more research is still required. Much, much more.

And to be clear, IBM Quantum is a research program, not a commercial product business unit, so of course their main focus has been, is, and will continue to be… research.

I’d expect IBM to continue in this research mode for a minimum of ten years, and likely upwards of fifteen to twenty years. And even then, quantum will be an ongoing research area indefinitely.

Quantum computers are still a laboratory curiosity, not a commercial product

As I mentioned earlier, IBM’s quantum efforts, as impressive as they are, are still focused on research. They do not yet have a commercial product and the roadmap is silent as to when their first commercial products — suitable for production deployment of production-scale quantum applications — will debut. As such, I classify their quantum computing efforts as still being a laboratory curiosity.

For more on my ruminations about quantum computing as a laboratory curiosity, read my paper:

When will IBM offer production-scale quantum computing as a commercial product (or service)?

Unknown.

That may be the biggest hole in their otherwise impressive roadmap.

Personally, I believe that is at least 5–7 years down the road.

Everything that IBM is currently offering in quantum computing is suitable only for evaluation and experimentation, but definitely not for production deployment of applications.

Quantum Ready? For Whom? For What?

IBM (and every other quantum vendor) wants everybody to be Quantum Ready, sitting and waiting for the eventual and inevitable arrival of quantum computers capable of supporting production-scale practical quantum applications, but I personally feel that so much of this is very premature. Actually, all of it is premature. Research is fine, but expectations of imminent deployment for production applications is not fine at all.

Much basic research in quantum computing is still needed. Very much research. Maybe another 5–7 years, or even 10–15 years. And that’s just to get to the starting line.

Much research is needed for both hardware (qubits) and software (algorithms).

We desperately need a much higher-level programming model for quantum computing, as well as a much richer collection of algorithmic building blocks for designing algorithms and developing applications. That’s work to be done in academia and research labs, not commercial operations focused on products and production.

It’s pointless to have thousands, tens of thousands, or even millions of people be quantum ready when they won’t be able to do anything productive for at least another five to 15 years — and by then the technology will have evolved so dramatically that much of their knowledge will be obsolete anyway.

Some of these comments relate more to IBM’s software roadmap:

IBM may indeed have preliminary results for their software ecosystem in two to five years, but those would be preliminary results, not seasoned results. Add another three to five years before the dust settles and both the hardware and software ecosystem, including algorithms, are themselves Quantum Ready for people to begin using them productively.

And even for those preliminary software results which might be available in two to five years, it’s not possible to be training people today to be Quantum Ready (from a software perspective) for software features which do not yet exist.

Quantum Hardware Ready is needed

Back to the hardware roadmap: it should give clearer indications of what the hardware itself is ready for, so that algorithm designers and application developers know what expectations to have — what types of algorithms and applications can be supported at each stage and each milestone.

Quantum Ready users are not the problem or limiting factor at present. Rather, the lack of Quantum Hardware Ready is the critical limiting factor. IBM needs to focus more on getting the hardware ready, not blaming users for not being trained for hardware which doesn’t exist and can’t even be simulated.

Need for higher-quality (and higher-capacity) simulators

Higher-quality (and higher-capacity) simulators which more closely match expected hardware features of future milestones could help to fill the gap, helping users be Quantum Hardware Ready while waiting for the next few hardware development milestones. Granted, we can’t simulate more than about 50 qubits, but we can simulate 40-qubit algorithms that would in theory run on future hardware with higher-fidelity qubits, not simply more of them.

And maybe, with sufficient effort and resources, we actually can simulate more than 50 qubits. Maybe even 55 qubits. Or even 60. It would be expensive, but it would have real value, especially when comparable real hardware is not yet available.

Simulators also provide debugging capabilities that allow algorithm designers and application developers to find and fix bugs more easily than trial and error with real hardware.

A roadmap for classical quantum simulators, including debugging capabilities, is needed, both for increasing the number of qubits and for more closely matching the qubit fidelity of each machine in the roadmap.

Need for debugging capabilities

Execution of a quantum circuit is completely opaque with respect to any intermediate results — the only values which can be observed are the final, measured results, at which point the rich quantum state of even the measured qubits has been collapsed to simple classical 0’s and 1’s. This would be downright unacceptable for developing classical software — rich debugging capabilities are needed.

Unfortunately, the opaqueness and unobservability of quantum state on a real quantum computer curtails any significant debugging capabilities.

That’s where classical quantum simulators can play a big role. They can easily allow all details of the rich quantum state to be observed, captured, and analyzed. Even the simple classical binary 0’s and 1’s of measured qubits could be compared and contrasted with the rich quantum state of the qubits before they are measured. Of course, this would require the development of classical software to add such debugging capabilities to raw classical quantum simulators, but that’s a mere matter of software development.
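
A minimal sketch of what I mean, using Qiskit’s quantum_info module as I understand it (treat the exact calls as illustrative rather than authoritative): on a simulator you can inspect the full pre-measurement quantum state, something real hardware can never reveal.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)        # put qubit 0 into superposition
    qc.cx(0, 1)    # entangle the two qubits into a Bell state

    state = Statevector.from_instruction(qc)   # full amplitudes, pre-measurement
    print(state)                               # inspect amplitudes and phases
    print(state.probabilities_dict())          # e.g. {'00': 0.5, '11': 0.5}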

A full suite of sophisticated debugging capabilities is needed, much as it is for classical software.

It could be argued that simple quantum circuits don’t need such sophisticated debugging capabilities, but it’s clear that larger and more complex quantum circuits have a much more compelling need for sophisticated debugging capabilities, especially as less-elite technical staff enter the picture.

The point here is that rich debugging capabilities are needed, but neither the IBM hardware roadmap nor their software roadmap even mentions such capabilities, let alone details milestones for the development of such capabilities.

Debugging may superficially seem to be a software tool issue, more relevant to the software roadmap, but I would argue that debugging is inherently much closer to the raw machine and directly relates to how the raw machine is used. In fact, absent decent debugging capabilities, it may not be possible for many people to effectively use the machine at all. And this need to be close to the hardware only gets more intense as the hardware evolves and becomes more sophisticated and more difficult to use without advanced debugging capabilities.

Need for testing capabilities

Testing of software is essential, but typically relegated to being a secondary consideration at best. Sophisticated testing capabilities are needed for quantum circuits.

There are many forms of testing, including but not limited to:

  1. Unit testing.
  2. Module testing.
  3. System testing.
  4. Performance testing.
  5. Logic analysis.
  6. Coverage analysis.
  7. Shot count and circuit repetitions — analyzing results for multiple executions of the same circuit.
  8. Calibration.
  9. Diagnostics.
  10. Hardware fault detection.

It could be argued that simple quantum circuits don’t need sophisticated testing capabilities, but it’s clear that larger and more complex quantum circuits have a much more compelling need for sophisticated testing capabilities, especially as less-elite technical staff enter the picture.

The point here is that rich testing capabilities are needed, but neither the IBM hardware roadmap nor their software roadmap even mentions such capabilities, let alone details milestones for the development of such capabilities.

Testing may superficially seem to be a software tool issue, more relevant to the software roadmap, but I would argue that testing is inherently much closer to the raw machine and directly relates to how the raw machine is used. In fact, absent decent testing capabilities, it may not be possible for many people to effectively use the machine at all. And this need to be close to the hardware only gets more intense as the hardware evolves and becomes more sophisticated and more difficult to use without advanced testing capabilities.

Need for dramatic improvements in documentation and technical specifications at each milestone

Documentation is always a problematic issue for any technology. IBM does have a fair amount of documentation (the Qiskit textbook, blogs, and papers), but the quality, coverage, and coherence are spotty and inconsistent. Dramatic improvement is needed.

I wouldn’t expect a dramatic improvement instantly, overnight, but I would expect the roadmap to speak to at least incremental improvement for each milestone.

I’ve already written about some of the improvements that I would like to see in general:

That’s more about the details that an algorithm designer needs to know in terms of the programming model.

There is a short section in there about an Implementation Specification, covering more of the gory details, below even what an algorithm designer necessarily needs, but many algorithms will likely rely on that level of detail, particularly for performance factors such as qubit fidelity.

To be clear, that section is not referring to documentation for IBM’s proposed software ecosystem, but limited to the nuts and bolts of the programming model — how to use the hardware from an algorithm perspective.

Brief comments on IBM’s roadmap for building an open quantum software ecosystem

Although the main focus of this informal paper is IBM’s quantum hardware roadmap, I do have to acknowledge that IBM has a separate quantum software roadmap:

I may post more extensive comments in a separate informal paper, but at least a few comments are in order here, mostly from the context of how quantum algorithm designers and quantum application developers view and use the hardware, through the lens of a programming model, algorithmic building blocks, and programming languages. I’m not so concerned about support software and tools, but primarily how designers and developers think about the hardware itself.

So, here are a few relevant comments:

  1. IBM’s software roadmap is too brief, too terse, and too vague to make many definitive comments about it.
  2. It sort of hints at a higher-level programming model, but in a fragmentary manner, not fully integrated, and doesn’t even use the term programming model at all.
  3. It does indeed have some interesting fragmentary thoughts, but just too little in terms of a coherent overarching semantic model. Some pieces of the puzzle are there, but not the big picture that puts it all together.
  4. I heartily endorse open source software, but there is a wide range of variations on support for open source software. Will IBM cede 100% of control to outside actors or maintain 100% control but simply allow user submissions? Who ultimately has veto authority about the direction of the software — the community or IBM?
  5. I heartily endorse ecosystems as well, but that can be easier said than done.
  6. I nominally support their three levels (they call them segments) of kernel, algorithms, and models, but I would add two levels: custom applications, and then packaged solutions (generalized applications). From the perspective of this (my) paper, I’m focused on the programming model(s) to be used by algorithm developers and application developers.
  7. I personally use the term algorithmic building blocks, which may or may not be compatible with IBM’s notion of modules. My algorithmic building blocks would apply primarily to algorithm designers, but also to application developers (custom and packaged) and application framework developers as well.
  8. IBM also refers to application-specific modules for natural science, optimization, machine learning, and finance, which I do endorse, but I also personally place attention on general-purpose algorithmic building blocks which can be used across application domains. Personally, I would substitute domain-specific for application-specific.
  9. I personally use the term application framework, which may be a match to IBM’s concept of a model.
  10. In their visual diagram, IBM refers to Enterprise Clients, but that seems to refer to enterprise developers.
  11. I appreciate IBM’s commitment to a frictionless development framework, but it’s all a bit too vague for me to be very confident about what it will actually do in terms of specific semantics for algorithms and applications. Again, I’m not so interested in support services and tools as I am in the actual semantics of the programming model.
  12. IBM says “where the hardware is no longer a concern to users or developers”, but that’s a bit too vague. Does it mean they aren’t writing code at all? Or does it simply mean a machine-independent programming model? Or does it mean a higher-level programming model, such as what I have been proposing? Who knows! IBM needs to supply more detail.
  13. I’m all in favor of domain-specific pre-built runtimes — if I understand IBM’s vague description, which seems consistent with my own thoughts about packaged solutions which allow the user to focus on preparing input data and parameters, and then processing output data, without even touching or viewing the actual quantum algorithms or application source code. That said, I worry a little that their use of runtime may imply significant application logic that invokes the runtime rather than focusing the user on data and configuration parameters. I do see that the vast majority of users of quantum applications won’t even be writing any code, but how we get there is an open question. In any case, this paper of mine is focused on quantum algorithm designers and quantum application developers and how they see and use the hardware.
  14. Kernel-level code is interesting, but not so much to me. Maybe various algorithmic building blocks, such as quantum Fourier transform or SWAP networks could be implemented at kernel level, but ultimately, all I really care about is the high-level interface that would be available to algorithm designers and application developers — the programming model, their view of the hardware. The last thing I want to see is algorithm designers and application developers working way down at machine-specific kernel level.
  15. I heartily endorse application-specific modules for natural science, optimization, machine learning, and finance — at least at a conceptual level. Anything that enables users or application developers to perform computations at the application level without being burdened by details about either the hardware or quantum mechanics. All of that said, I can’t speak to whether I would approve of how IBM is approaching the design of these modules. Also, I am skeptical as to when the hardware will be sufficiently mature to support such modules at production-scale.
  16. I nominally endorse quantum model services for natural science, optimization, machine learning, and finance — at least at a conceptual level. If I read the IBM graphic properly, such model services won’t be available until 2023 at the earliest and possibly not until 2026. Even there, it’s not clear if it’s simply that all of the lower-level capabilities are in place to enable model developers to develop such application-specific models, or whether such models will then be ready for use by application developers.
  17. No mention of any dependencies on hardware advances, such as quantum error correction and logical qubits, improvements in qubit fidelity, improvements in qubit connectivity.
  18. No mention of Quantum Volume or matching size of algorithms and required hardware.
  19. No sense of synchronizing the hardware roadmap and the software roadmap.
  20. No mention of networked applications or quantum networking.
  21. No mention of evolution towards vendor-neutral technical standards. The focus is clearly on IBM setting the standards for others to follow. That may not be so much a negative as simply a statement of how young and immature the sector remains.

Those are just a few of my thoughts. I may expand on this list in a separate informal paper focused on the software roadmap.

Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap

It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “and beyond” stage beyond the 1,121-qubit Condor, the current end of the roadmap. That would account for them not being mentioned on their current roadmap. It’s possible. Is it likely? I simply couldn’t say. Only IBM knows with any certainty.

Heads up for other quantum computing vendors — all of these comments apply to you as well!

My comments here are specifically directed at IBM and their quantum hardware roadmap (and software roadmap to a limited degree), but many, most, if not virtually all of them apply to any vendor in quantum computing. Qubits are qubits. Qubit fidelity is qubit fidelity. Errors are errors. Error correction is error correction. Algorithms are algorithms. Applications are applications. Regardless of the vendor. So it behooves IBM’s competitors, and everyone’s partners, suppliers, and customers, to pay attention to my comments as well. And researchers in academia as well. Show your roadmaps, your milestones, and the details I have noted.

Summary and conclusions

  1. Great that IBM has shared what they have for a roadmap.
  2. Disappointing that it took so long to get it out.
  3. More questions than answers.
  4. Much greater detail is needed.
  5. Full error correction is still far over the horizon.
  6. Evolution of qubit fidelity between milestones is unclear.
  7. Not very clear what developers will really have to work with at each milestone, especially in terms of coherence time, qubit fidelity, gate error rate, measurement error rate, and connectivity.
  8. Waiting to hear what will succeed Quantum Volume once more than 50 qubits can be used reliably in a deep algorithm.
  9. This is all still just a research program, a laboratory curiosity, not a commercial product (or service) suitable for production use for production-scale practical applications.
  10. Unclear how much more research will be required after the end of the current IBM hardware roadmap before quantum computing can transition to a commercial product suitable for production-scale practical quantum applications.
  11. Unclear what the timeframe will be for transition to a commercial product (or service).
  12. No sense of when they might achieve The ENIAC Moment — first production-scale application.
  13. No sense of when they might achieve The FORTRAN Moment — easy for most organizations to develop quantum applications — without elite teams.
  14. Unclear whether IBM will achieve even minimal quantum advantage (1,000X classical solutions) by the end of their hardware roadmap (2023 with 1,121-qubit Condor) or whether we’ll have to await the “and beyond” stages after the end of the roadmap.
  15. It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “and beyond” stage beyond the 1,121-qubit Condor, the current end of the roadmap.
  16. Many, most, if not virtually all of my comments here apply to any vendor in quantum computing, including IBM’s competitors and everyone’s partners, suppliers, and customers as well. And researchers in academia as well. Show your roadmaps, your milestones, and the details I have noted.
  17. For now, we remain waiting for the next machine on the roadmap, the 127-qubit Eagle, due by the end of 2021, and the 433-qubit Osprey in 2022.
  18. Overall, we’re still in the early innings for quantum computing — and not close to being ready for prime time with production-scale practical applications — or even close to achieving any significant degree of quantum advantage, the purpose for even considering quantum computing.
