Thoughts on the IBM Quantum Hardware Roadmap

  1. When will quantum computers support production-scale applications?
  2. When will quantum computers achieve quantum advantage (or quantum supremacy) for production-scale applications?
  1. Positive highlights.
  2. Negative highlights.
  3. My own interests.
  4. The IBM roadmap itself.
  5. Graphic for the IBM quantum hardware roadmap.
  6. Earlier hint of a roadmap.
  7. I’m not so interested in support software and tools.
  8. Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
  9. Too brief — need more detail on each milestone.
  10. Limited transparency — I’m sure IBM has the desired detail in their internal plans.
  11. When will quantum error correction (QEC) be achieved?
  12. Need roadmap milestones for nines of qubit fidelity.
  13. Need roadmap milestones for qubit measurement fidelity.
  14. When might IBM get to near-perfect qubits?
  15. What will the actual functional transition milestones be on the path to logical qubits?
  16. Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  17. Will future machines support only logical qubits or will physical qubit circuits still be supported?
  18. What functional advantages might come from larger numbers of qubits?
  19. Need milestones for granularity of phase and probability amplitude.
  20. Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform.
  21. When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?
  22. When or will IBM support a higher-level programming model?
  23. When will larger algorithms — like using 40 qubits — become possible?
  24. When could a Quantum Volume of 2⁴⁰ be expected?
  25. When will IBM develop a replacement for the Quantum Volume metric?
  26. When will IBM need a replacement for the Quantum Volume metric?
  27. How large could algorithms be on a 1,121-qubit Condor?
  28. When might The ENIAC Moment be achieved?
  29. When might The FORTRAN Moment be achieved?
  30. When might quantum advantage be achieved?
  31. Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?
  32. How many bits can Shor’s algorithm handle at each stage of the roadmap?
  33. What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?
  34. Not clear whether or when quantum networking will be supported.
  35. Quantum is still a research program at IBM — and much more research is required.
  36. Quantum computers are still a laboratory curiosity, not a commercial product.
  37. When will IBM offer production-scale quantum computing as a commercial product (or service)?
  38. Quantum Ready? For Whom? For What?
  39. Quantum Hardware Ready is needed.
  40. Need for higher-quality (and higher-capacity) simulators.
  41. Need for debugging capabilities.
  42. Need for testing capabilities.
  43. Need for dramatic improvements in documentation and technical specifications at each milestone.
  44. Brief comments on IBM’s roadmap for building an open quantum software ecosystem.
  45. Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap.
  46. Heads up for other quantum computing vendors — all of these comments apply to you as well!
  47. Summary and conclusions.

Positive highlights

I appreciate:

  1. IBM’s transparency on putting out such a roadmap.
  2. The view into the future beyond the next year or two — including a path to 1,000 qubits, a million qubits, and beyond.
  3. The mention of error correction and logical qubits.
  4. The mention of linking quantum computers to create a massively parallel quantum computer.
  5. The prospect of achieving 100 qubits sometime this year.

Negative highlights

Unfortunately:

  1. Disappointing that it took so long to put the roadmap out. I first heard mention that they had a roadmap back in 2018.
  2. Raises more questions than it answers.
  3. Too short — need more detail for longer-term aims, beyond 2023, just two years from now.
  4. Too brief — need more detail on each milestone.
  5. Needs more milestones. Intermediate stages and further stages. I certainly hope that they are working on more machines than listed over the next three to five years.
  6. Other than the raw number of qubits, roughly what can algorithm designers and application developers expect to see in the next two machines — the 127-qubit Eagle, due in the coming six months, by the end of 2021, and the 433-qubit Osprey in 2022? Obviously a lot can change over the next six to eighteen months, but some sort of expectations need to be set.
  7. Not clear when the quantum processing unit will become modular. When will there be support for more qubits than will fit on a single chip?
  8. Not clear when or whether multiple quantum computers can be directly connected at the quantum level. Comparable to a classical multiprocessor, either tightly-coupled or loosely-coupled.
  9. Not clear whether or when quantum networking will be supported.
  10. Silent as to when error correction and logical qubits will become available.
  11. No milestones given for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?
  12. Silent as to when qubit counts will begin to refer to logical qubits. I’m presuming that all qubit counts on the current roadmap are for physical qubits.
  13. Silent as to milestones for capacities of logical qubits, especially for reaching support for practical, production-scale applications.
  14. Silent as to any improvements in connectivity between qubits. Each milestone should indicate degree of connectivity. Will SWAP networks still be required? Will full any-to-any connectivity be achieved by some milestone?
  15. Silent as to milestones for improvements to qubit and gate fidelity. No hints for nines of qubit fidelity at each milestone.
  16. Silent as to milestones for improvements to qubit measurement fidelity.
  17. Silent as to when near-perfect qubits might be achieved. High enough fidelity that many algorithms won’t need full quantum error correction.
  18. Silent as to milestones for granularity of phase and probability amplitude.
  19. Silent as to when quantum chemists (among others) will be able to rely on quantum phase estimation and quantum Fourier transform of various sizes. When will quantum phase estimation become practical?
  20. Silent as to the metric to replace quantum volume, which doesn’t work for more than about 50 qubits since a quantum circuit using more than about 50 qubits can’t practically be simulated classically.
  21. Silent as to the stage at which quantum volume exceeds the number of qubits which can be practically simulated on a classical computer.
  22. Silent as to when larger algorithms — like using 40 qubits — will become possible. When could a Quantum Volume of 2⁴⁰ be expected?
  23. Silent as to how large algorithms could be on a 1,121-qubit Condor. What equivalent of Quantum Volume — number of qubits and depth of circuit — could be expected?
  24. Silent as to when quantum advantage might be expected to be achieved — for any real, production-scale, practical application. Should we presume that means that IBM doesn’t expect quantum advantage until some time after the end of the roadmap?
  25. Silent as to what applications or types of applications might be enabled in terms of support for production-scale data at each milestone.
  26. Silent on the roadmap for machine simulators, including maximum qubit count which can be simulated at each milestone. Silent as to where they think the ultimate wall is for the maximum number of qubits which can be simulated.
  27. Silent as to improvements in qubit coherence and circuit depth at each stage.
  28. Silent as to maximum circuit size and maximum circuit depth which can be supported at each stage.
  29. Silent as to how far they can go with NISQ and which machines might be post-NISQ.
  30. Silent as to when fault-tolerant machines will become available.
  31. Silent as to milestones for various intra-circuit hybrid quantum/classical programming capabilities.
  32. Open question: Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  33. Open question: At some stage, will future machines support only logical qubits or will physical qubit circuits still be supported?
  34. Open question: What will be the smallest machine supporting logical qubit circuits?
  35. Silent as to debugging capabilities.
  36. Silent as to testing capabilities.
  37. It is quite clear that quantum computing is still a research program at IBM, not a commercial product suitable for production use.
  38. Silent as to when quantum computing might transition from mere laboratory curiosity to front-line commercial product suitable for production-scale use cases.
  39. Silent as to how much additional research, beyond the end of the current roadmap, may be necessary before a transition to a commercial product.
  40. Silent as to improvements in documentation and technical specifications at each milestone.

My own interests

I wouldn’t necessarily expect IBM to put these milestones in its own roadmap, but they interest me nonetheless:

  1. When might quantum advantage be achieved — for any real, production-scale, practical application? For minimal quantum advantage (e.g., 2X, 10X, 100X), significant quantum advantage (e.g., 1,000X to 1,000,000X), and dramatic quantum advantage (one quadrillion X)?
  2. How close will each stage come to full quantum advantage? What fractional quantum advantage is achieved at each stage?
  3. What applications might achieve quantum advantage at each stage?
  4. What applications will be supported at each stage which weren’t feasible at earlier stages?
  5. Each successive stage should have some emblematic algorithm which utilizes the new capabilities of that stage, such as more qubits and greater circuit depth, not just running the same old algorithms with the same number of qubits and circuit depth as on earlier, smaller machines.
  6. What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?
  7. Is there any reason to believe that there might be a better qubit technology (alternative to superconducting transmon qubits) down the road, or any reason to believe that no better qubit technology is needed? Does IBM anticipate that there might be a dramatic technology transition at some stage, maybe five, ten, or more years down the road?
  8. Does IBM anticipate that they might actually support more than one qubit technology at some stage? Like, trapped ion?
  9. When or will IBM support a higher-level programming model with higher-level algorithmic building blocks which makes it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states?
  10. When might The ENIAC Moment be achieved? First production-scale application.
  11. When might The FORTRAN Moment be achieved? Higher-level programming model which makes it easy for most organizations to develop quantum applications — without elite teams.
  12. How many bits can Shor’s algorithm handle at each stage of the roadmap?
  13. Need for a broad set of benchmark tests to evaluate performance, capacity, and precision of various algorithmic building blocks, such as phase estimation, along with target benchmark results for each hardware milestone.
  14. Milestones for optimizing various algorithmic building blocks, such as phase estimation, based on hardware improvements at each stage.
  15. The maximum size of algorithms which can correctly run on the physical hardware at each milestone but can no longer be classically simulated. Number of qubits and circuit depth. Maybe several thresholds for the fraction of correct executions. For now, this could parallel projections of log2(Quantum Volume) and estimate when log2(QV) exceeds the maximum classical quantum simulator capacity.

The IBM roadmap itself

The IBM quantum hardware roadmap can be found here:

Graphic for the IBM quantum hardware roadmap

I’m a text-only guy, so I won’t reproduce the graphic for the roadmap, but you can find it here — look for the blue diamonds:

Earlier hint of a roadmap

I haven’t been able to track down the original citation, but I believe it was sometime in 2018 that IBM publicly stated that quantum error correction was on their roadmap. So, that was a vague reference to a purported roadmap, but no actual roadmap was available to the public, until 2020.

I’m not so interested in support software and tools

Support software and tools are obviously important, but I’m less concerned about them in this paper, which is more focused on hardware and the programming model for algorithms and applications.

Too short — need more detail for longer-term aims, beyond 2023, just two years from now

In my opinion, the roadmap needs milestones for:

  1. 3 years.
  2. 5 years.
  3. 7 years.
  4. 10 years.
  5. 12 years.
  6. 15 years.
  7. 20 years.
  8. 25 years. Where is the technology really headed?

Too brief — need more detail on each milestone

More is needed than just a too-terse short phrase for the key advancement, the qubit count, and a code name. I’m not looking for precise detail, especially years out, but at least rough targets, even if nothing more than rough percentage improvements expected at each stage. Graphs with trend lines would be appreciated.

  1. Qubit fidelity.
  2. Qubit lattice layout.
  3. Qubit connectivity.
  4. Gate cycle time.
  5. Qubit coherence.
  6. Maximum circuit depth.
  7. Maximum circuit size.
  8. Maximum circuit executions per second.

Limited transparency — I’m sure IBM has the desired detail in their internal plans

It’s a little baffling that the IBM hardware roadmap has so little technical detail. I’m sure that their own internal plans and roadmaps have much of the level of detail that I suggest in this paper. Why exactly they refrain from disclosing that level of detail is unclear.

When will quantum error correction (QEC) be achieved?

The IBM roadmap graphic does have quantum error correction listed as a key achievement for the “and beyond” stage, sometime beyond the 1,121-qubit Condor processor planned for 2023. I would have hoped to see at least some progress sooner, and some highlighting of milestones on the path to full quantum error correction and full support for error-free logical qubits. The roadmap does offer two relevant statements:

  1. “… as we scale up the number of physical qubits, we will also be able to explore how they’ll work together as error-corrected logical qubits — every processor we design has fault tolerance considerations taken into account.”
  2. “We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices…”

Need roadmap milestones for nines of qubit fidelity

There is no mention in the IBM roadmap of how many nines of qubit fidelity will be achieved, when, or in which milestones. Qubit fidelity here covers:

  1. Coherence time.
  2. Gate errors. Both single-qubit and two-qubit.
  3. Measurement errors.

Milestones should be given for successive nines of overall qubit fidelity (a rough sketch of how nines translate into usable circuit depth follows this list):

  1. Two nines — 99%.
  2. Three nines — 99.9%.
  3. Four nines — 99.99%.
  4. Five nines — 99.999%.
  5. Six nines — 99.9999%.
  6. Whether IBM has intentions or plans for more than six nines of qubit fidelity should be specified. Seven, eight, nine, and higher nines of qubit fidelity would be great, but will likely be out of reach in the next two to four years.
  7. What maximum qubit fidelity, short of quantum error correction, could be achieved in the longer run, beyond the published roadmap, should also be specified.
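
As a rough illustration (my own back-of-the-envelope arithmetic, not anything from IBM’s roadmap), each additional nine of gate fidelity buys roughly a factor of ten more gates that a circuit can execute before errors dominate:

```python
# Rough, back-of-the-envelope sketch (my own assumption, not an IBM figure):
# if each gate has error rate e, a circuit with g gates succeeds with probability
# roughly (1 - e)**g, so the "usable" gate budget scales like 1/e.
for nines in range(2, 7):                    # two nines (99%) through six nines (99.9999%)
    fidelity = 1.0 - 10.0 ** (-nines)        # e.g., 3 nines -> 0.999
    error = 1.0 - fidelity
    usable_gates = int(1.0 / error)          # order-of-magnitude gate budget
    print(f"{nines} nines: fidelity {fidelity:.6f}, roughly {usable_gates:,} gates before errors dominate")
```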

Need roadmap milestones for qubit measurement fidelity

I didn’t realize this until recently, but simple measurement of qubits to get the results of a quantum computation is a very error-prone process. So even if qubit coherence is increased and gate errors are reduced, there are still measurement errors to deal with.
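
To make this concrete, here is a minimal sketch (with a made-up, ballpark per-qubit readout error rate, not an IBM specification) of how per-qubit measurement error compounds when reading out a full register:

```python
# Illustrative only: assume an independent readout error of 2% per qubit
# (a ballpark, made-up figure, not an IBM specification).
readout_error = 0.02

for n_qubits in (5, 10, 20, 40):
    p_all_correct = (1.0 - readout_error) ** n_qubits
    print(f"{n_qubits:>2} qubits: probability all bits read out correctly = {p_all_correct:.1%}")
```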

When might IBM get to near-perfect qubits?

Although quantum error correction is the long-term goal, a milestone along the way (and an independent goal in its own right) is near-perfect qubits: qubits with high enough fidelity that they both support the implementation of quantum error correction and enable elite, highly motivated technical teams to implement practical applications even before quantum error correction is available and supports enough logical qubits for practical applications.

What will the actual functional transition milestones be on the path to logical qubits?

IBM hasn’t given any specific milestones for the path to error correction and logical qubits. What will the actual milestones, the functional transitions really be?

Will there be any residual error for logical qubits or will they be as perfect as classical bits?

It sure would be nice if logical qubits were as (seemingly) perfect and error-free as classical bits are, but I suspect that there will be some tiny residual error. IBM needs to set expectations in their roadmap, for example in terms of nines of reliability:

  1. Six nines — one error in a million operations.
  2. Nine nines — one error in a billion operations.
  3. Twelve nines — one error in a trillion operations.
  4. Fifteen nines — one error in a quadrillion operations.

Will future machines support only logical qubits or will physical qubit circuits still be supported?

IBM has not indicated whether, once logical qubits become available, future machines will support only logical qubits or will still support physical qubit circuits.

What functional advantages might come from larger numbers of qubits?

What functional advantages might come from larger numbers of qubits, beyond simply that algorithms can handle more data?

Need milestones for granularity of phase and probability amplitude

Quantum computational chemistry is an oft-touted application for quantum computers. Variational methods are currently being used as a stopgap measure, but ultimately quantum phase estimation (QPE) and quantum Fourier transform (QFT) are needed to achieve both precision of results and dramatic quantum advantage for performance, and both are critically dependent on granularity of the phase portion of quantum state. Very fine granularity for phase is needed. So, the roadmap should detail milestones for improvement of granularity of phase.
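
For reference (standard textbook behavior of quantum phase estimation, not anything specific to IBM’s hardware), each additional bit of precision halves the smallest resolvable phase step, which is why fine phase granularity is so demanding:

```python
import math

# Standard quantum phase estimation (QPE) behavior: with m precision (ancilla)
# qubits, the estimated phase is resolved to roughly 1 part in 2**m of a full turn.
for m in (4, 8, 16, 24, 32):
    resolution = 2.0 * math.pi / 2 ** m      # smallest resolvable phase step, in radians
    print(f"{m:>2}-bit QPE: phase resolution ~ {resolution:.3e} radians")
```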

Need timeframes and milestones for size supported for both quantum phase estimation and quantum Fourier transform

The roadmap should indicate the timeframes in which both quantum phase estimation (QPE) and quantum Fourier transform (QFT) will become practical, and the size of QPE and QFT which will be supported at each milestone (a rough gate-count sketch follows the list below):

  1. 4-bit.
  2. 8-bit.
  3. 12-bit.
  4. 16-bit.
  5. 20-bit.
  6. 24-bit.
  7. 32-bit.
  8. 40-bit.
  9. 48-bit.
  10. 56-bit.
  11. 64-bit.
  12. 80-bit.
  13. 96-bit.
  14. 128-bit.
  15. 192-bit.
  16. 256-bit.
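
To give a rough sense of scale (textbook gate counts only, ignoring hardware connectivity, basis-gate decomposition, and transpilation overhead, and not an IBM specification), the size of a QFT circuit grows roughly quadratically with the number of qubits:

```python
# Rough, textbook gate counts for an n-qubit quantum Fourier transform:
# n Hadamards, n(n-1)/2 controlled-phase rotations, and n//2 qubit-reversal swaps.
# Real transpiled counts on hardware will be noticeably higher.
def qft_gate_counts(n: int) -> dict:
    return {"h": n, "controlled_phase": n * (n - 1) // 2, "swap": n // 2}

for n in (4, 8, 16, 32, 64):
    counts = qft_gate_counts(n)
    print(f"{n:>2}-qubit QFT: {sum(counts.values()):>5} gates  {counts}")
```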

When will quantum chemists (among others) be able to rely on quantum phase estimation and quantum Fourier transform?

The roadmap should make it very clear when quantum chemists (among others) can begin relying on quantum phase estimation and quantum Fourier transform. Variational methods are a great stopgap measure, but quantum chemists (among others) need greater precision of results and the true, dramatic quantum advantage of quantum computing.

When or will IBM support a higher-level programming model?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: when or will IBM support a higher-level programming model with higher-level algorithmic building blocks which makes it feasible for non-quantum experts to translate application problems into quantum solutions without knowledge of quantum logic gates and quantum states?

When might The ENIAC Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage would an elite team be the first to develop and test (even if not deploy into production) the first production-scale practical application? What I call The ENIAC Moment.

When might The FORTRAN Moment be achieved?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: at what stage will it finally be easy for most organizations to develop quantum applications — without elite teams? What I call The FORTRAN Moment.

When will larger algorithms — like using 40 qubits — become possible?

Published quantum algorithms currently rarely utilize more than a mere 20 qubits. I’m anxious to see larger algorithms, particularly:

  1. 24 qubits.
  2. 28 qubits.
  3. 32 qubits.
  4. 36 qubits.
  5. 40 qubits.
  6. 44 qubits.
  7. 48 qubits.
  8. 50 qubits.
  9. 56 qubits.
  10. 60 qubits.
  11. 64 qubits.
  12. And more.

When could a Quantum Volume of 2⁴⁰ be expected?

A presumption for supporting 40-qubit algorithms is that we need hardware with a Quantum Volume (QV) of at least 2⁴⁰ (roughly one trillion).
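
For reference, the Quantum Volume convention is QV = 2^n, where n is the size of the largest “square” circuit (n qubits by depth n) that the machine can execute with acceptable fidelity, so QV = 2⁴⁰ corresponds to square circuits 40 qubits wide and 40 gates deep; a quick sanity check of the arithmetic:

```python
import math

# Quantum Volume (QV) convention: QV = 2**n for the largest "square" circuit
# (n qubits, depth n) that runs with acceptable fidelity.
qv = 2 ** 40
print(f"QV = 2^40 = {qv:,}")                                  # about 1.1 trillion
print(f"log2(QV) = {math.log2(qv):.0f} -> square circuits of width 40 and depth 40")
```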

When will IBM develop a replacement for the Quantum Volume metric?

IBM’s Quantum Volume capacity metric will only work up to about 50 qubits since the metric requires a full classical simulation of the circuit and 50 qubits is roughly the limit for classical simulation of quantum circuits. A Quantum Volume of 2⁵⁰ would represent a quantum circuit with a depth of 50 quantum logic gates operating on 50 qubits and achieving acceptable results, which would require simulation of roughly one quadrillion quantum states.
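
As a back-of-the-envelope sketch of why roughly 50 qubits is the wall for full classical simulation (my own arithmetic, assuming the usual 16 bytes per complex amplitude for a full statevector):

```python
# Memory needed for a full statevector simulation: 2**n complex amplitudes,
# at 16 bytes each (double-precision real and imaginary parts).
BYTES_PER_AMPLITUDE = 16

for n in (30, 40, 45, 50):
    gib = (2 ** n) * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB of statevector memory")
```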

When will IBM need a replacement for the Quantum Volume metric?

IBM has not indicated at what stage on the roadmap they expect to be able to execute quantum circuits with acceptable results which can no longer be classically simulated, which is a key requirement for deriving the Quantum Volume metric.

How large could algorithms be on a 1,121-qubit Condor?

What equivalent of Quantum Volume (QV) — number of qubits and depth of circuit — could be expected for the 1,121-qubit Condor processor? I say equivalent of QV because technically actual QV requires a full classical circuit simulation, which will not be possible much beyond 50 qubits (and may not be practical much beyond 40 qubits or even 36–38 qubits).

When might quantum advantage be achieved?

IBM’s roadmap simply doesn’t clue us in at all as to when they expect that quantum advantage might be achieved. Are we to conclude that they don’t expect it to be achieved until some time after the end of the roadmap?

  1. Minimal quantum advantage. A 1,000X performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
  2. Substantial or significant quantum advantage. A 1,000,000X performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
  3. Dramatic quantum advantage. A one quadrillion X (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.

Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap?

There is no hint in the roadmap as to whether IBM will even come close to achieving minimal quantum advantage (1,000X) over a comparable classical solution by the end of the roadmap (the 1,121-qubit Condor in 2023), or whether even minimal quantum advantage is relegated to the “and beyond” stages after Condor.

How many bits can Shor’s algorithm handle at each stage of the roadmap?

I wouldn’t expect IBM to put this in the roadmap per se, but it’s a very interesting question: how many bits can Shor’s algorithm handle at each stage of the roadmap? (A rough logical-qubit estimate follows the list below.)

  1. 5-bit.
  2. 6-bit.
  3. 7-bit.
  4. 8-bit.
  5. 10-bit.
  6. 12-bit.
  7. 16-bit.
  8. 20-bit.
  9. 24-bit.
  10. 32-bit.
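
For a rough sense of scale, one commonly cited construction in the literature (Beauregard’s circuit, which factors an n-bit number using 2n + 3 qubits) gives a lower bound on the logical qubits needed; a hedged sketch of that arithmetic, not an IBM estimate:

```python
# Rough logical-qubit requirement for factoring an n-bit number with Shor's
# algorithm, using the often-cited 2n + 3 qubit construction (Beauregard).
# Ideal logical qubits only; the physical-qubit overhead for error correction
# would be far larger.
def shor_logical_qubits(n_bits: int) -> int:
    return 2 * n_bits + 3

for n_bits in (8, 16, 32, 1024, 2048):
    print(f"{n_bits:>4}-bit number: ~{shor_logical_qubits(n_bits):,} logical qubits")
```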

What applications or types of applications might be enabled in terms of support for production-scale data at each milestone?

Looking at the IBM hardware roadmap, one is left wondering what applications or types of applications might be enabled at each successive hardware milestone. IBM’s hardware roadmap is silent in this regard.

Not clear whether or when quantum networking will be supported

It’s not clear whether or when IBM will be supporting quantum networking — supporting quantum interactions between quantum computers which are in separate physical locations, separated by much more than a few feet.

Quantum is still a research program at IBM — and much more research is required

I give IBM a lot of credit for the amazing amount of research that they have tackled and accomplished, but so much more research is still required. Much, much more.

Quantum computers are still a laboratory curiosity, not a commercial product

As I mentioned earlier, IBM’s quantum efforts, as impressive as they are, are still focused on research. They do not yet have a commercial product and the roadmap is silent as to when their first commercial products — suitable for production deployment of production-scale quantum applications — will debut. As such, I classify their quantum computing efforts as still being a laboratory curiosity.

When will IBM offer production-scale quantum computing as a commercial product (or service)?

Unknown.

Quantum Ready? For Whom? For What?

IBM (and every other quantum vendor) wants everybody to be Quantum Ready, sitting and waiting for the eventual and inevitable arrival of quantum computers capable of supporting production-scale practical quantum applications, but I personally feel that so much of this is very premature. Actually, all of it is premature. Research is fine, but expectations of imminent deployment for production applications are not fine at all.

Quantum Hardware Ready is needed

Back to the hardware roadmap: it should give clearer indications of what the hardware itself is ready for, setting expectations for algorithm designers and application developers as to what types of algorithms and applications can be supported at each stage and each milestone.

Need for higher-quality (and higher-capacity) simulators

Higher-quality (and higher-capacity) simulators which more closely match expected hardware features of future milestones could help to fill the gap, helping users be Quantum Hardware Ready while waiting for the next few hardware development milestones. Granted, we can’t simulate more than about 50 qubits, but we can simulate 40-qubit algorithms which would, in theory, run on future hardware offering higher-fidelity qubits, not simply more of them.

Need for debugging capabilities

Execution of a quantum circuit is completely opaque with respect to any intermediate results — the only values which can be observed are the final, measured results, at which point the rich quantum state of even the measured qubits has been collapsed to simple classical 0’s and 1’s. This would be downright unacceptable for developing classical software — rich debugging capabilities are needed.
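
Today the usual stopgap is to debug on a simulator, where intermediate quantum state can be inspected directly; here is a minimal sketch using Qiskit’s statevector utilities (assuming Qiskit is installed; this only works in simulation, which is precisely the limitation on real hardware):

```python
# A common stopgap for "debugging" quantum circuits: inspect the intermediate
# statevector on a simulator (impossible on real hardware, where only the final
# measured bits are observable). Assumes Qiskit is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)                                       # put qubit 0 into superposition
sv_mid = Statevector.from_instruction(qc)     # inspect the state mid-circuit
qc.cx(0, 1)                                   # entangle qubits 0 and 1 (Bell state)
sv_end = Statevector.from_instruction(qc)

print("after H: ", sv_mid)
print("after CX:", sv_end)
```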

Need for testing capabilities

Testing of software is essential, but typically relegated to being a secondary consideration at best. Sophisticated testing capabilities are needed for quantum circuits (a minimal unit-test sketch follows this list):

  1. Unit testing.
  2. Module testing.
  3. System testing.
  4. Performance testing.
  5. Logic analysis.
  6. Coverage analysis.
  7. Shot count and circuit repetitions — analyzing results for multiple executions of the same circuit.
  8. Calibration.
  9. Diagnostics.
  10. Hardware fault detection.
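
As one concrete example of what unit testing of a quantum circuit can look like today, here is a minimal sketch that checks a small circuit against its expected ideal statevector (assuming Qiskit is installed; the circuit and test are purely illustrative):

```python
# Minimal illustration of unit testing a quantum circuit: compare the circuit's
# ideal statevector against the expected state. A real test suite would also run
# noisy simulations and check shot statistics. Assumes Qiskit is installed.
import math
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def bell_circuit() -> QuantumCircuit:
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc

def test_bell_state():
    expected = Statevector([1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)])
    actual = Statevector.from_instruction(bell_circuit())
    assert actual.equiv(expected)             # equal up to global phase

test_bell_state()
print("Bell-state unit test passed")
```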

Need for dramatic improvements in documentation and technical specifications at each milestone

Documentation is always a problematic issue for any technology. IBM does have a fair amount of documentation, the Qiskit textbook, blogs, and papers, but the quality, coverage, and coherence are spotty and inconsistent. Dramatic improvement is needed.

Brief comments on IBM’s roadmap for building an open quantum software ecosystem

Although the main focus of this informal paper is IBM’s quantum hardware roadmap, I do have to acknowledge that IBM has a separate quantum software roadmap:

  1. IBM’s software roadmap is too brief, too terse, and too vague to make many definitive comments about it.
  2. It sort of hints at a higher-level programming model, but in a fragmentary manner, not fully integrated, and doesn’t even use the term programming model at all.
  3. It does indeed have some interesting fragmentary thoughts, but just too little in terms of a coherent overarching semantic model. Some pieces of the puzzle are there, but not the big picture that puts it all together.
  4. I heartily endorse open source software, but there is a wide range of variations on support for open source software. Will IBM cede 100% of control to outside actors or maintain 100% control but simply allow user submissions? Who ultimately has veto authority about the direction of the software — the community or IBM?
  5. I heartily endorse ecosystems as well, but that can be easier said than done.
  6. I nominally support their three levels (they call them segments) of kernel, algorithms, and models, but I would add two levels: custom applications, and then packaged solutions (generalized applications). From the perspective of this (my) paper, I’m focused on the programming model(s) to be used by algorithm developers and application developers.
  7. I personally use the term algorithmic building blocks, which may or may not be compatible with IBM’s notion of modules. My algorithmic building blocks would apply primarily to algorithm designers, but also to application developers (custom and packaged) and application framework developers as well.
  8. IBM also refers to application-specific modules for natural science, optimization, machine learning, and finance, which I do endorse, but I also personally place attention on general-purpose algorithmic building blocks which can be used across application domains. Personally, I would substitute domain-specific for application-specific.
  9. I personally use the term application framework, which may be a match to IBM’s concept of a model.
  10. In their visual diagram, IBM refers to Enterprise Clients, but that seems to refer to enterprise developers.
  11. I appreciate IBM’s commitment to a frictionless development framework, but it’s all a bit too vague for me to be very confident about what it will actually do in terms of specific semantics for algorithms and applications. Again, I’m not so interested in support services and tools as I am in the actual semantics of the programming model.
  12. IBM says “where the hardware is no longer a concern to users or developers”, but that’s a bit too vague. Does it mean they aren’t writing code at all? Or does it simply mean a machine-independent programming model? Or does it mean a higher-level programming model, such as what I have been proposing? Who knows! IBM needs to supply more detail.
  13. I’m all in favor of domain-specific pre-built runtimes — if I understand IBM’s vague description, they seem consistent with my own thoughts about packaged solutions which allow the user to focus on preparing input data and parameters, and then processing output data, without even touching or viewing the actual quantum algorithms or application source code. That said, I worry a little that their use of runtime may imply significant application logic that invokes the runtime rather than focusing the user on data and configuration parameters. I do see that the vast majority of users of quantum applications won’t even be writing any code, but how we get there is an open question. In any case, this paper of mine is focused on quantum algorithm designers and quantum application developers and how they see and use the hardware.
  14. Kernel-level code is interesting, but not so much to me. Maybe various algorithmic building blocks, such as quantum Fourier transform or SWAP networks could be implemented at kernel level, but ultimately, all I really care about is the high-level interface that would be available to algorithm designers and application developers — the programming model, their view of the hardware. The last thing I want to see is algorithm designers and application developers working way down at machine-specific kernel level.
  15. I heartily endorse application-specific modules for natural science, optimization, machine learning, and finance — at least at a conceptual level. Anything that enables users or application developers to perform computations at the application level without being burdened by details about either the hardware or quantum mechanics. All of that said, I can’t speak to whether I would approve of how IBM is approaching the design of these modules. Also, I am skeptical as to when the hardware will be sufficiently mature to support such modules at production-scale.
  16. I nominally endorse quantum model services for natural science, optimization, machine learning, and finance — at least at a conceptual level. If I read the IBM graphic properly, such model services won’t be available until 2023 at the earliest and possibly not until 2026. Even there, it’s not clear if it’s simply that all of the lower-level capabilities are in place to enable model developers to develop such application-specific models, or whether such models will then be ready for use by application developers.
  17. No mention of any dependencies on hardware advances, such as quantum error correction and logical qubits, improvements in qubit fidelity, and improvements in qubit connectivity.
  18. No mention of Quantum Volume or matching size of algorithms and required hardware.
  19. No sense of synchronizing the hardware roadmap and the software roadmap.
  20. No mention of networked applications or quantum networking.
  21. No mention of evolution towards vendor-neutral technical standards. The focus is clearly on IBM setting the standards for others to follow. That may not be so much a negative as simply a statement of how young and immature the sector remains.

Maybe many of the milestones and details which interest me occur beyond the end of the current roadmap

It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “and beyond” stage beyond the 1,121-qubit Condor, the current end of the roadmap. That would account for them not being mentioned on their current roadmap. It’s possible. Is it likely? I simply couldn’t say. Only IBM knows with any certainty.

Heads up for other quantum computing vendors — all of these comments apply to you as well!

My comments here are specifically directed at IBM and their quantum hardware roadmap (and software roadmap to a limited degree), but many, most, if not virtually all of them apply to any vendor in quantum computing. Qubits are qubits. Qubit fidelity is qubit fidelity. Errors are errors. Error correction is error correction. Algorithms are algorithms. Applications are applications. Regardless of the vendor. So it behooves IBM’s competitors and everyone’s partners, suppliers, and customers to pay attention to my comments as well, and researchers in academia too. Show your roadmaps, your milestones, and the details I have noted.

Summary and conclusions

  1. Great that IBM has shared what they have for a roadmap.
  2. Disappointing that it took so long to get it out.
  3. More questions than answers.
  4. Much greater detail is needed.
  5. Full error correction is still far over the horizon.
  6. Evolution of qubit fidelity between milestones is unclear.
  7. Not very clear what developers will really have to work with at each milestone, especially in terms of coherence time, qubit fidelity, gate error rate, measurement error rate, and connectivity.
  8. Waiting to hear what will succeed Quantum Volume once more than 50 qubits can be used reliably in a deep algorithm.
  9. This is all still just a research program, a laboratory curiosity, not a commercial product (or service) suitable for production use for production-scale practical applications.
  10. Unclear how much more research will be required after the end of the current IBM hardware roadmap before quantum computing can transition to a commercial product suitable for production-scale practical quantum applications.
  11. Unclear what the timeframe will be for transition to a commercial product (or service).
  12. No sense of when they might achieve The ENIAC Moment — first production-scale application.
  13. No sense of when they might achieve The FORTRAN Moment — easy for most organizations to develop quantum applications — without elite teams.
  14. Unclear whether IBM will achieve even minimal quantum advantage (1,000X classical solutions) by the end of their hardware roadmap (2023 with 1,121-qubit Condor) or whether we’ll have to await the “and beyond” stages after the end of the roadmap.
  15. It’s very possible that many of the milestones or details which interest me might occur well beyond the end of IBM’s current quantum hardware roadmap — in the “and beyond” stage beyond the 1,121-qubit Condor, the current end of the roadmap.
  16. Many, most, if not virtually all of my comments here apply to any vendor in quantum computing, including IBM’s competitors and everyone’s partners, suppliers, and customers as well. And researchers in academia as well. Show your roadmaps, your milestones, and the details I have noted.
  17. For now, we remain waiting for the next machines on the roadmap — the 127-qubit Eagle, due in the coming six months, by the end of 2021, and the 433-qubit Osprey in 2022.
  18. Overall, we’re still in the early innings for quantum computing — and not close to being ready for prime time with production-scale practical applications — or even close to achieving any significant degree of quantum advantage, the purpose for even considering quantum computing.
