Thoughts on the 2022 IBM Quantum Roadmap Update

  1. Major focus on modularity and scaling of hardware architecture, software and tools for applications, and partners and building an ecosystem.
  2. The hardware architectural advances are technically impressive.
  3. Too much focus on higher qubit count, with no clear purpose.
  4. No real focus on higher qubit fidelity. No specific milestones listed. It just comes across as an afterthought rather than a primary focus. And right now quality (qubit fidelity) is seriously lagging behind scaling (qubit count).
  5. No attention given to qubit connectivity. No recognition of the problem or path to addressing it.
  6. A lot of extra complexity, with little benefit to developers.
  7. No real focus on a simpler developer experience. No serious attempt to minimize or reduce developer complexity. So-called Frictionless development is still very high friction.
  8. Too vague on milestones for full quantum error correction.
  9. No milestones or metrics for the path to quantum advantage. How will we know when we’ve reached quantum advantage, and what can we say about it?
  10. No true sense of exactly when we would finally arrive at practical quantum computing. Again, what are the specific metrics?
  11. No sense of when IBM would offer a commercial product or service. Still focused on research, prototyping, and experimentation — pre-commercialization.
  12. No hint of quality or connectivity updates for Falcon, Hummingbird, or Eagle.
  13. Good to see such transparency.
  14. But significantly more transparency and detail is needed.
  15. Unclear if sufficient to avert a Quantum Winter in two to three years.
  1. My thoughts on IBM’s previous roadmap
  2. Major positive highlights
  3. Major negative highlights
  4. Three pillars to usher in an era of practical quantum computing
  5. Summary of new hardware
  6. Summary of new software capabilities
  7. Summary of roadmap milestones by year
  8. Roadmap documents and video
  9. Previous IBM Quantum hardware roadmap
  10. Scale, quality, and speed as the three essential dimensions of quantum computing performance
  11. Performance = Scale + Quality + Speed
  12. Four major objectives for 2022
  13. Achieve Quantum Volume of 1024 this year
  14. My thoughts on Eagle and Osprey
  15. Osprey is not committed for more than Quantum Volume of 1024
  16. Less than four months until Osprey is formally introduced
  17. 133-qubit Heron is a classical multi-core quantum processor
  18. Will the 133-qubit Heron processor offer much over the 127-qubit Eagle processor?
  19. Crossbill will be IBM’s first multi-chip quantum processor
  20. Crossbill may be more of an internal engineering milestone rather than offering any features to developers
  21. Will the 408-qubit Crossbill offer any advantage over the 433-qubit Osprey?
  22. Flamingo is a modular quantum processor
  23. Will the 1,386-qubit Flamingo offer much advantage over the 1,121-qubit Condor?
  24. How many chips are in a Kookaburra processor?
  25. How many Kookaburra processors can be connected in a single system?
  26. Beyond 2026… or is it 2026 and Beyond?
  27. Hardware for scaling to 10K-100K qubits
  28. At what stage will multiple Quantum System Two systems be linked?
  29. Every processor should have qubit fidelity and Quantum Volume targets in addition to its qubit count
  30. Supply capabilities label for every processor in the roadmap
  31. Unclear if every new processor in a given year will meet the Quantum Volume target of doubling every year
  32. Will Osprey, Heron, and Condor necessarily exceed the qubit fidelity and Quantum Volume of the best Falcon from this year?
  33. When can Falcon and Hummingbird be retired?
  34. Does Hummingbird have any value now that Eagle is available?
  35. When will IBM have a processor with better qubit quality than Falcon?
  36. Are all of the new processors NISQ devices?
  37. Intelligent software orchestration layer
  38. Serverless programming model to allow quantum and classical processors to work together frictionlessly
  39. Capabilities and metrics that are not mentioned in the IBM roadmap
  40. Additional needs not covered by the IBM roadmap
  41. Critical needs for quantum computing
  42. Three distinct developer personas: kernel developers, algorithm developers, and model developers
  43. Model is an ambiguous term — generic design vs. high-level application
  44. Model developers — developing high-level applications
  45. Models seem roughly comparable to my configurable packaged quantum solutions
  46. Tens of thousands of qubits
  47. Hundreds of thousands of qubits
  48. Misguided to focus so heavily on more qubits since people have been unable to use even 53, 65, or 127 qubits effectively so far
  49. IBM has not provided a justification for the excessive focus on qubit count over qubit fidelity and qubit connectivity (scale over quality)
  50. What do we need all of these qubits for?
  51. We need lots of processors, not lots of qubits
  52. We need lots of processors for circuit repetitions for large shot counts
  53. Modular processors needed for quantum knitting of larger quantum circuits
  54. Two approaches to circuit knitting
  55. Using classical communication for circuit knitting with multiple, parallel quantum processors
  56. Paper on simulating larger quantum circuits on smaller quantum computers
  57. What exactly is classical communication between quantum processors?
  58. Not even a mention of improving connectivity between qubits within a chip or within a quantum processor
  59. Is this a Tower of Babel, too complex and with too many moving parts?
  60. Rising complexity — need simplicity, eventually
  61. What is Qiskit Runtime?
  62. What are Qiskit Runtime Primitives all about?
  63. What is Quantum Serverless?
  64. A little confusion between Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives
  65. A little confusion between Frictionless Development and Quantum Serverless
  66. IBM’s commitment to double Quantum Volume (QV) each year
  67. Will Quantum Volume double every year?
  68. Will anything on the roadmap make a significant difference to the average quantum algorithm designer or quantum application developer in the near term? Not really
  69. It’s not on the roadmap, but we really need a processor with 48 fully-connected near-perfect qubits
  70. No significant detail on logical qubits and quantum error correction
  71. No explanation for error suppression
  72. Physical qubit fidelity is a necessary base even for full quantum error correction, as well as error suppression and mitigation
  73. Error suppression, error mitigation, and even full error correction are not valid substitutes for higher raw physical qubit fidelity
  74. Net qubit fidelity: raw physical qubit fidelity, error suppression, mitigation, correction, and statistical aggregation to determine expectation value
  75. Emphasis on variational methods won’t lead to any dramatic quantum advantage
  76. Premature integration with classical processing
  77. Some day modular design and higher qubit counts will actually matter, but not now and not soon
  78. Nuances of the various approaches to interconnections leads to more complex tooling and burden on developers
  79. Designers of quantum algorithms and developers of quantum applications need frictionless design and development, not more friction
  80. This is still IBM research, not a commercial product engineering team
  81. Risk of premature commercialization
  82. Will the IBM roadmap be enough to avoid a Quantum Winter? Unclear
  83. Need to double down on research — and prototyping and experimentation
  84. Need for development of industry standards
  85. My raw notes from reviewing IBM’s announcement
  86. My original proposal for this topic
  87. Summary and conclusions

My thoughts on IBM’s previous roadmap

For a baseline, you can review my thoughts on IBM’s previous roadmap, written in 2021 on the roadmap from 2020, here:

Major positive highlights

  1. Thank IBM for being this transparent and for such a long time horizon.
  2. Plenty of interesting engineering advances.
  3. Focus on modular quantum systems.
  4. Many more qubits.
  5. Long range coupler to connect chips through a cryogenic cable of around a meter long.
  6. Plenty of interesting software and tool advances.
  7. The new IBM Quantum System Two, with interconnections between systems.

Major negative highlights

  1. Many interesting technical capabilities or metrics which don’t show up on the roadmap. See separate section — Capabilities and metrics that are not mentioned in the IBM roadmap.
  2. Little improvement in qubit fidelity.
  3. No milestones for qubit fidelity or Quantum Volume (QV).
  4. No improvement in qubit connectivity. Within a processor or within a chip.
  5. Too brief — need more detail on each milestone.
  6. Limited transparency — I’m sure IBM has the desired detail in their internal plans.
  7. No indication of when practical quantum computing will be achieved.
  8. No milestones or metrics for degrees of quantum advantage.
  9. No indication of when a commercial product offering will be achieved.

Three pillars to usher in an era of practical quantum computing

IBM characterizes their approach to quantum computing as resting on three pillars:

  1. Robust and scalable quantum hardware.
  2. Cutting-edge quantum software to orchestrate and enable accessible and powerful quantum programs.
  3. A broad global ecosystem of quantum-ready organizations and communities.

Summary of new hardware

  1. 433-qubit Osprey processor. Previously announced. Coming in just a few months, in 2022.
  2. IBM Quantum System Two overall quantum computer system packaging. Previously announced.
  3. 1,121-qubit Condor processor. Previously announced.
  4. 133-qubit Heron processor. Modular processor. New announcement.
  5. Classical communication link between quantum processors. New announcement.
  6. Quantum communication between modular chips. For modular processors. New announcement.
  7. 408-qubit Crossbill processor. Modular processor. IBM’s first multi-chip processor. New announcement.
  8. 1,386-qubit Flamingo processor. Modular processor. New announcement.
  9. 4,158-qubit Kookaburra processor. New announcement.
  10. One-meter quantum cryogenic communication link between quantum computer systems. New announcement.
  11. Potential for scaling to 10K to 100K qubits using modular processors with classical and quantum communication. New announcement.

Summary of new software capabilities

  1. Preparing for serverless quantum computation.
  2. Quantum Serverless. As IBM puts it: “users can take advantage of quantum resources at scale without having to worry about the intricacies of the hardware — we call this frictionless development — which we hope to achieve with a serverless execution model.”
  3. Intelligent orchestration.
  4. Dynamic circuits.
  5. Circuit knitting.
  6. Threaded primitives.
  7. Error mitigation and suppression techniques.
  8. Qiskit Runtime Primitives. Sampling. Estimation.
  9. Application services.
  10. Prototype software applications.
  11. Circuit libraries.
  12. Preparation for full error correction.

Summary of roadmap milestones by year

These milestones are based on the graphic roadmap supplied by IBM plus milestones mentioned in the video or textual documents of the roadmap.

2022

  1. 433-qubit Osprey processor by end of the year.
  2. Demonstrate a quantum volume of 1024.
  3. Increase speed from 1.4K CLOPS to 10K CLOPS.
  4. Bring dynamic circuits to the stack. For increased circuit variety and algorithmic complexity.

2023

  1. 1,121-qubit Condor processor.
  2. 133-qubit Heron processor. Support multiple processors — 133 x p, connected with a classical communication link. Classical parallelized quantum computing with multiple Heron processors connected by a single control system.
  3. Quantum volume is expected to at least double to 2048 (11 qubits).
  4. Frictionless development with quantum workflows built in the cloud.
  5. Prototype software applications.
  6. Quantum Serverless.
  7. Threaded primitives.

2024

  1. 408-qubit Crossbill processor. IBM’s first multi-chip quantum processor.
  2. 462-qubit Flamingo processor.
  3. 1,386-qubit Flamingo multi-chip processor. Three 462-qubit Flamingo processor chips with quantum communication between them.
  4. Quantum volume is expected to at least double to 4096 (12 qubits).
  5. Call 1K+ qubit services from Cloud API.
  6. Investigate error correction.
  7. Error suppression and mitigation.
  8. Intelligent orchestration.

2025

  1. 4,158-qubit Kookaburra processor. And more qubits.
  2. Quantum volume is expected to at least double to 8192 (13 qubits).
  3. Quantum software applications. Machine learning, Natural science, Optimization.
  4. Circuit knitting toolbox.

2026 and beyond

  1. Scaling to tens of thousands (10K-100K) of qubits. With classical and quantum communication.
  2. Quantum volume is expected to at least double each year to 16K (14 qubits).
  3. Circuit libraries.
  4. Error correction.

Roadmap documents and video

IBM posted their updated quantum development roadmap on May 10, 2022 as three documents and a video:

  1. Press release
  2. Web page
  3. Tweet from IBM Research
  4. Tweet from Jay Gambetta
  5. Blog post
  6. Video
  7. HPC tech media coverage
  • IBM Unveils New Roadmap to Practical Quantum Computing Era; Plans to Deliver 4,000+ Qubit System
  • Orchestrated by intelligent software, new modular and networked processors to tap strengths of quantum and classical to reach near-term Quantum Advantage
  • Qiskit Runtime to broadly increase accessibility, simplicity, and power of quantum computing for developers
  • Ability to scale, without compromising speed and quality, will lay groundwork for quantum-centric supercomputers
  • Leading Quantum-Safe capabilities to protect today’s enterprise data from ‘harvest now, decrypt later’ attacks
  • May 10, 2022
  • Armonk, N.Y., May 10, 2022 — IBM (NYSE: IBM) today announced the expansion of its roadmap for achieving large-scale, practical quantum computing. This roadmap details plans for new modular architectures and networking that will allow IBM quantum systems to have larger qubit-counts — up to hundreds of thousands of qubits. To enable them with the speed and quality necessary for practical quantum computing, IBM plans to continue building an increasingly intelligent software orchestration layer to efficiently distribute workloads and abstract away infrastructure challenges.
  • https://newsroom.ibm.com/2022-05-10-IBM-Unveils-New-Roadmap-to-Practical-Quantum-Computing-Era-Plans-to-Deliver-4,000-Qubit-System
  • Our new 2022 Development Roadmap
  • These are our commitments to advance quantum technology between now and 2026.
  • The road to advantage
  • When we previewed the first development roadmap in 2020 we laid out an ambitious timeline for progressing quantum computing over the proceeding years.
  • To date, we have met all of these commitments and it is our belief we will continue to do so. Now our new 2022 development roadmap extends our new vision to 2025. We are excited to share our new breakthroughs with you.
  • https://www.ibm.com/quantum/roadmap
  • Expanding the IBM Quantum roadmap to anticipate the future of quantum-centric supercomputing
  • We are explorers. We’re working to explore the limits of computing, chart the course of a technology that has never been realized, and map how we think these technologies will benefit our clients and solve the world’s biggest challenges. But we can’t simply set out into the unknown. A good explorer needs a map.
  • Two years ago, we issued our first draft of that map to take our first steps: our ambitious three-year plan to develop quantum computing technology, called our development roadmap. Since then, our exploration has revealed new discoveries, gaining us insights that have allowed us to refine that map and travel even further than we’d planned. Today, we’re excited to present to you an update to that map: our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone.
  • Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute. In order to do so, we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.
  • https://research.ibm.com/blog/ibm-quantum-roadmap-2025
  • IBM Quantum 2022 Updated Development Roadmap
  • Jay Gambetta, IBM Fellow and VP of Quantum Computing, unveils the updated IBM Quantum development roadmap through to 2025.
  • We now believe we have what it takes to scale quantum computers into what we’re calling quantum-centric supercomputers, making it easier than ever for our clients to incorporate quantum capabilities into their respective domains, and access resources with a serverless programming model thanks to Qiskit runtime. In this video, the IBM Quantum team presents 3 new processors demonstrating breakthroughs in scaling by introducing modularity, allowing multi-chip processors, classical parallelization, and quantum parallelization to build larger, more capable systems.
  • https://www.youtube.com/watch?v=0ka20qanWzI

Previous IBM Quantum hardware roadmap

IBM published their quantum hardware roadmap on September 15, 2020, followed by their quantum software development and ecosystem roadmap on February 4, 2021.

Scale, quality, and speed as the three essential dimensions of quantum computing performance

IBM measures itself and the performance of its quantum computing systems by three key metrics or dimensions:

  1. Scale. Qubit count. Size.
  2. Quality. Qubit fidelity. Reliable execution of quantum algorithms. Quantum Volume (QV).
  3. Speed. How fast circuits can be executed. CLOPS (Circuit Layer Operations Per Second). How many circuit executions an application can expect each second. Execute more circuit repetitions (shots) per second. Execute a job in less time. Execute more jobs in a given amount of time. System throughput.
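
As a rough illustration of what the speed metric means for an application, here is a small Python sketch converting CLOPS to whole-circuit executions per second; the 100-layer circuit depth is my own illustrative assumption, not an IBM figure.

```python
# Rough illustration of what a CLOPS improvement means in practice.
# CLOPS counts circuit *layers* executed per second, so whole-circuit
# throughput depends on circuit depth.

def circuits_per_second(clops: float, layers_per_circuit: int) -> float:
    """Estimate how many full circuits execute per second."""
    return clops / layers_per_circuit

LAYERS = 100  # assumed representative circuit depth (illustrative only)

print(circuits_per_second(1_400, LAYERS))   # current 1.4K CLOPS -> 14.0
print(circuits_per_second(10_000, LAYERS))  # 2022 target 10K CLOPS -> 100.0
```

At a typical large shot count, that factor of seven in throughput is the difference a user would actually feel.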

Performance = Scale + Quality + Speed

Restating the previous section more simply:

  • Performance = Scale + Quality + Speed

Four major objectives for 2022

As per Jay Gambetta in the roadmap update video, IBM has four major objectives for 2022 for quantum:

  1. Bring dynamic circuits to the stack.
  2. 433-qubit Osprey processor by end of the year.
  3. Demonstrate a quantum volume of 1024.
  4. Increase speed from 1.4K CLOPS to 10K CLOPS.

Achieve Quantum Volume of 1024 this year

As per Jay Gambetta in the roadmap update video, IBM has committed to achieving a Quantum Volume (QV) of 1024 this year.

My thoughts on Eagle and Osprey

The 127-qubit Eagle quantum processor was announced in the previous roadmap and introduced last November, 2021. I’ve posted my thoughts on it:

Osprey is not committed for more than Quantum Volume of 1024

IBM hasn’t explicitly committed to what Quantum Volume (QV) we can expect for Osprey later this year, although we can infer that it won’t be more than 1024 since that is the highest Quantum Volume that IBM has committed to for this year. And we have no commitment that Osprey itself will have a Quantum Volume of 1024, just that some IBM quantum processor will; that processor might well be Falcon, which could realistically achieve a Quantum Volume of 1024 since it has already achieved 512.

Less than four months until Osprey is formally introduced

At the time I am writing this, it is less than four months until IBM formally introduces the 433-qubit Osprey processor, presumably at their annual Quantum Summit event in late November. At this point, the processor should be nearing completion, or at least its design should be virtually cast in concrete. I would imagine that IBM would want to be running tests and resolving last minute glitches and issues for the final two months, starting, say, in the middle of September.

133-qubit Heron is a classical multi-core quantum processor

Some number of 133-qubit Heron quantum processors can be classically interconnected. This is somewhat comparable to a classical processor with multiple cores, each capable of running a complete program, all in parallel. Or as IBM puts it, “classical parallelized quantum computing with multiple Heron processors connected by a single control system.”

  1. How exactly does the classical communication between quantum processors really work? See a separate section, What exactly is classical communication between quantum processors?
  2. How many Heron quantum processors can be combined in a single quantum computer system? Presumably this will be limited by the capacity of the new IBM Quantum System Two. But what might the limit be? Could it be one or two? Might it always be three? Four? Five or six? Eight? Ten to twelve? Sixteen? Twenty? 32? More?
  • 133 qubits x p
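
The basic payoff of classical parallelization is easy to sketch: a job’s shot budget gets split across p processors and run concurrently, cutting wall-clock time roughly by a factor of p. A minimal illustration (my own sketch, not IBM’s actual scheme):

```python
# Sketch of classically parallelized shot execution across p Heron-style
# processors (IBM's "133 x p"). The split is illustrative, not IBM's design.

def split_shots(total_shots: int, p: int) -> list[int]:
    """Divide a shot budget as evenly as possible across p processors."""
    base, extra = divmod(total_shots, p)
    return [base + (1 if i < extra else 0) for i in range(p)]

# e.g. 10,000 shots across three processors running in parallel
shots = split_shots(10_000, 3)
print(shots)  # -> [3334, 3333, 3333]; wall-clock time drops roughly 3x
```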

Will the 133-qubit Heron processor offer much over the 127-qubit Eagle processor?

The 133-qubit Heron processor and the 127-qubit Eagle processor will offer a comparable number of qubits, so there’s not much advantage on that score.

Crossbill will be IBM’s first multi-chip quantum processor

IBM will introduce the 408-qubit Crossbill quantum processor in 2024. It will be composed of three 136-qubit chips with quantum interconnection between the chips — cross-chip couplers.

Crossbill may be more of an internal engineering milestone rather than offering any features to developers

Although the multi-chip Crossbill quantum processor will be an amazing engineering achievement, there won’t actually be any new features that developers can take advantage of.

Will the 408-qubit Crossbill offer any advantage over the 433-qubit Osprey?

The 433-qubit Osprey will already offer a comparable number of qubits to the 408-qubit Crossbill, and be available in 2022.

Flamingo is a modular quantum processor

While a quantum computer system based on the 133-qubit Heron consists of multiple quantum processors, multiple 462-qubit Flamingo quantum processor chips can be connected using quantum communication to act as a single quantum processor.

Will the 1,386-qubit Flamingo offer much advantage over the 1,121-qubit Condor?

Granted, the 1,386-qubit Flamingo processor will offer 24% more qubits than the 1,121-qubit Condor processor, but it doesn’t seem likely that quantum applications or quantum algorithms will be able to take advantage of many if any of such a large number of qubits anyway, so that’s a dubious advantage at best. I’d personally say that Flamingo and Condor are roughly comparable in terms of qubit count.

How many chips are in a Kookaburra processor?

The roadmap is a bit unclear whether the 1,386-qubit Kookaburra processor is itself a multi-chip processor. At one point the blog says 1,386 qubits as a multi-chip processor (a la Flamingo), but then it says three Kookaburra chips can be connected into a 4,158-qubit system, implying that 1,386 qubits is a single chip. So which is it?! Maybe they just meant that the 1,386-qubit Kookaburra can be used to compose a multi-chip processor when they said “Kookaburra will be a 1,386 qubit multi-chip processor with a quantum communication link.” Hard to say.

How many Kookaburra processors can be connected in a single system?

It’s also unclear how many 4,158-qubit Kookaburra processors can be connected into an even larger system.

Beyond 2026… or is it 2026 and Beyond?

The graphic for the roadmap has a final column headed Beyond 2026, but I suspect that is a typo and should be 2026 and Beyond or Beyond 2025.

Hardware for scaling to 10K-100K qubits

The roadmap does speak of Scaling to 10K-100K qubits with classical and quantum communication for Beyond 2026, but it’s unclear if that’s scaling with some number of 4,158-qubit Kookaburra processors or some other future processor.

At what stage will multiple Quantum System Two systems be linked?

The roadmap video does call for a “Long range coupler to connect chips through a cryogenic cable of around a meter long”, but it’s not clear at what stage this will occur. My notes from the roadmap video suggest that this will be done using Flamingo chips, but don’t indicate when that might happen.

Every processor should have qubit fidelity and Quantum Volume targets in addition to its qubit count

Raw qubit count alone is not a particularly useful metric for judging quantum hardware advances. Qubit fidelity is a very valuable metric, as is Quantum Volume (QV), which gives you an estimate of how many qubits can be used in a quantum algorithm.

  1. Qubit fidelity. Nines of qubit reliability.
  2. Quantum Volume (QV). log2(QV) is the largest number of qubits which can be reliably used in a quantum circuit.
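
As a quick worked check on that relationship, here is a minimal Python sketch (my own illustration, not IBM code):

```python
import math

# log2(QV) gives roughly the largest n for which an n-qubit, n-layer
# "square" circuit runs reliably.

def usable_qubits(qv: int) -> int:
    return int(math.log2(qv))

print(usable_qubits(512))    # Falcon's current QV 512 -> 9 qubits
print(usable_qubits(1024))   # the committed 2022 target QV 1024 -> 10 qubits
```

So a headline QV of 1024 translates into only about ten reliably usable qubits, which is why raw qubit count alone says so little.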

Supply capabilities label for every processor in the roadmap

I have proposed a capabilities label for quantum computers. There are a variety of metrics in addition to qubit count, qubit fidelity, and Quantum Volume (QV).

Unclear if every new processor in a given year will meet the Quantum Volume target of doubling every year

Although IBM has made clear their intention to double Quantum Volume (QV) each year, it’s not at all clear if every quantum processor introduced in a given year will meet that target.

  1. 133-qubit Heron vs. 127-qubit Eagle.
  2. 1,386-qubit Flamingo vs. 1,121-qubit Condor.
  3. 408-qubit Crossbill vs. 433-qubit Osprey.
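
To make the doubling commitment concrete, here is a small sketch projecting the committed QV 1024 for 2022 forward at one doubling per year; the projection itself is my own illustration, though it matches the 2048/4096/8192/16K figures IBM gives:

```python
import math

# Project IBM's commitment to double Quantum Volume (QV) each year,
# starting from the QV 1024 committed for 2022. log2(QV) shows that each
# doubling buys only about one more qubit of reliably usable circuit width.

projection = {}
qv = 1024
for year in range(2022, 2027):
    projection[year] = (qv, int(math.log2(qv)))
    qv *= 2

for year, (v, width) in projection.items():
    print(f"{year}: QV {v} (~{width} usable qubits)")
```

Five years of doubling adds only four qubits of usable algorithm width, from 10 to 14.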

Will Osprey, Heron, and Condor necessarily exceed the qubit fidelity and Quantum Volume of the best Falcon from this year?

IBM has been making steady improvements in the qubit fidelity and Quantum Volume (QV) of the 27-qubit Falcon processor. It’s well ahead of even the 65-qubit Hummingbird and the 127-qubit Eagle processors.

When can Falcon and Hummingbird be retired?

Given the availability of Eagle and the upcoming availability of Osprey, and the interest in driving towards practical quantum computing, it’s curious that the 27-qubit Falcon and 65-qubit Hummingbird are still around. IBM has given no indication when they might be retired.

Does Hummingbird have any value now that Eagle is available?

This is an interesting question — what are the relative merits of the 65-qubit Hummingbird processor compared to the 127-qubit Eagle processor? The Quantum Volume (QV) of both processors is roughly comparable and no better than for the 27-qubit Falcon processor, so Hummingbird would seem to be obsolete and no longer filling any significant need.

When will IBM have a processor with better qubit quality than Falcon?

Continuing on the theme from the preceding section, qubit fidelity hasn’t been a priority for newer processors since Falcon. Even the supposedly game-changing Eagle is unable to match the qubit quality of Falcon. So, the question is when IBM will introduce a new quantum processor which actually has better qubit quality than Falcon.

Are all of the new processors NISQ devices?

A quantum computer (processor) is a NISQ device if it meets two criteria:

  1. Noisy qubits. Errors are fairly frequent.
  2. Intermediate scale. 50 to hundreds of qubits.
  • Quantum Computing in the NISQ era and beyond
  • For this talk, I needed a name to describe this impending new era, so I made up a word: NISQ. This stands for Noisy Intermediate-Scale Quantum. Here “intermediate scale” refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. 50 qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers. “Noisy” emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term.
  • https://arxiv.org/abs/1801.00862
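
Preskill’s two criteria can be expressed as a toy check. The 50-to-a-few-hundred qubit range comes from the quote above; the error-rate cutoff for “noisy” is purely my own illustrative assumption:

```python
# Toy check against Preskill's two NISQ criteria. The qubit range is from
# his paper; the 1e-4 two-qubit error cutoff is my own illustrative choice.

def is_nisq(qubits: int, two_qubit_error: float) -> bool:
    intermediate_scale = 50 <= qubits <= 500  # "50 to a few hundred"
    noisy = two_qubit_error > 1e-4            # assumed threshold for "noisy"
    return intermediate_scale and noisy

print(is_nisq(127, 1e-2))  # Eagle today: True
print(is_nisq(27, 1e-2))   # Falcon: below the 50-qubit threshold -> False
```

By this reading, everything on the roadmap through Condor and beyond remains squarely a NISQ device: more qubits, still noisy.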

Intelligent software orchestration layer

My apologies if this is a little vague, but this is as much as I could glean from the IBM announcement documents and video about the intelligent software orchestration layer.

  1. Efficiently distribute workloads.
  2. Abstract away infrastructure challenges.
  3. Be able to deploy workflows seamlessly across both quantum and classical resources at scale.
  4. Powerful paradigm to enable flexible quantum/classical resource combinations without requiring developers to be infrastructure experts.
  5. Stitch quantum and classical data streams together into an overall workflow.

Serverless programming model to allow quantum and classical processors to work together frictionlessly

IBM will introduce a serverless programming model, Quantum Serverless, to allow quantum and classical processors to work together frictionlessly.

Capabilities and metrics that are not mentioned in the IBM roadmap

  1. No indication of what functional advantages might come from larger numbers of qubits.
  2. No mention of whether or when quantum networking will be supported. Other than one-meter cryogenic cable between adjacent cryostats — which isn’t listed on the roadmap graphic, but briefly mentioned in the video.
  3. No mention of raw qubit quality per se. The blog post says only that “we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.” Previously, IBM had committed to doubling Quantum Volume each year, effectively adding a single qubit to algorithm size. They do talk about error suppression, mitigation, and correction, but not about physical qubit fidelity. Although the new Heron processor has a new hardware design with “Completely redesigned gates, New tunable couplers that allow fast gates, While simultaneously limiting crosstalk”, which has some potential for improved qubit fidelity, IBM didn’t explicitly say that or commit to it.
  4. No roadmap for milestones for nines of qubit fidelity.
  5. No milestones for achievement of near-perfect qubits.
  6. No roadmap milestones for qubit measurement fidelity.
  7. No mention of improving connectivity between qubits within a chip or within a quantum processor. Focus on inter-chip and inter-processor connectivity for modularity.
  8. No recognition of the need to support large quantum Fourier transforms.
  9. No milestones for increase in coherence time.
  10. No milestones for decrease in gate execution time.
  11. No milestones for maximum circuit size. Or maximum size for each processor in the roadmap.
  12. No milestones for when larger algorithms — like using 40 qubits — will become possible.
  13. No definition or metrics or milestones for quantum advantage. When might truly significant or mind-boggling dramatic quantum advantage be achieved? Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap (2026)? Be clear about the metric to be measured and achieved.
  14. No clarity as to what exactly is meant by software milestones to improve error suppression and mitigation.
  15. No Falcon or Eagle enhancements are noted. Need for Super-Falcon, Super-Hummingbird, and Super-Eagle, or even a 48-qubit quantum processor with higher qubit fidelity and improved qubit connectivity.
  16. Osprey isn’t promising more than just more qubits, with no suggestion that they will be higher-quality qubits or with any better connectivity.
  17. No milestones for finer granularity of phase and probability amplitude. Needed for larger quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  18. No milestones for size supported for both quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  19. No milestones for when quantum chemists (among others) will be able to rely on quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  20. When might The ENIAC Moment be achieved? First production-scale practical real-world application.
  21. No milestones for what applications — or types or categories of applications — might be enabled, in terms of support for production-scale data, at each technical milestone. Starting with The ENIAC Moment.
  22. No milestones for configurable packaged quantum solutions.
  23. No milestones for Quantum Volume. IBM has previously stated that their intention is to double Quantum Volume every year. And in the roadmap video Jay Gambetta stated the intention to demonstrate Quantum Volume of 1024 by the end of this year, but with no hint of which processors would support the improved Quantum Volume — Osprey or Falcon?
  24. No milestone for replacement of the Quantum Volume metric. Since it only works up to roughly 2⁵⁰ — or maybe only 2⁴⁰ or 2³² — the limit of the largest classical simulation.
  25. No indication of focus on rich collection of algorithmic building blocks.
  26. No indication of focus on rich collection of design patterns.
  27. No milestones for supporting a higher-level programming model.
  28. No milestones for supporting a quantum-native programming language. For quantum algorithms.
  29. No milestone for when full quantum error correction (QEC) will be achieved.
  30. When might The FORTRAN Moment be achieved? Need higher-level programming model, quantum-native programming language, and full quantum error correction.
  31. No milestones for how many bits Shor’s algorithm can handle at each stage of the roadmap. When could they even factor six bits (factor 35 = 5 x 7, 39 = 3 x 13, 55 = 5 x 11, 57 = 3 x 19) or seven bits (factor 69 = 3 x 23, 77 = 7 x 11, 87 = 3 x 29, 91 = 7 x 13) or eight bits (133 = 7 x 19, 143 = 11 x 13, 187 = 11 x 17, 221 = 13 x 17, 247 = 13 x 19). Need quantum Fourier transform for 12 to 16 bits.
  32. No mention of simulator roadmap. Qubit capacity — push beyond 32, to 36, 40, 44, and even 48 qubits. Performance. Maximum circuit size. Maximum quantum states. Quantum Volume (QV) capacity. Or debugging. Or configuring connectivity, noise, and errors to match real hardware, current and projected.
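The simulator capacity question (item 32) can be grounded with quick arithmetic: an ideal state-vector simulator must store 2^n complex amplitudes, so memory doubles with every added qubit. A sketch, assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Memory required for an ideal state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (two 64-bit floats).

def statevector_bytes(n_qubits: int) -> int:
    """Bytes needed to hold the full 2**n state vector."""
    return (2 ** n_qubits) * 16

for n in (32, 36, 40, 44, 48):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

This is why pushing simulators from 32 to 48 qubits is a milestone in its own right: 32 qubits needs 64 GiB, while 48 qubits needs about 4 PiB.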

Additional needs not covered by the IBM roadmap

  1. Need for debugging capabilities.
  2. Need for testing capabilities.
  3. Need for dramatic improvements in documentation and technical specifications at each milestone.
  4. Need a full Principles of Operation manual for every quantum processor.
  5. When will IBM offer production-scale quantum computing as a commercial product or service? No longer a mere laboratory curiosity, suitable only for the most elite technical teams and the lunatic fringe.
  6. Need for configurable packaged quantum solutions. The next level up from quantum applications, where IBM’s roadmap ends.
  7. Need for development of industry standards. Although it may be a little too soon since there is so much innovation going on and no real stability that could be standardized.

Critical needs for quantum computing

I see that there are four essential, critical needs for quantum computing:

  1. Moderate number of qubits. Not a lot, just enough.
  2. High fidelity for qubits. Don’t need full quantum error correction, but a fairly high level of reliability of raw physical qubits. Near-perfect qubits.
  3. Reasonable connectivity for qubits. Essential for sophisticated techniques such as quantum Fourier transform (QFT). Really do need full any-to-any connectivity.
  4. Sufficiently fine granularity of phase and probability amplitude to support quantum Fourier transform for 20 bits. Ditto — essential for sophisticated techniques such as quantum Fourier transform (QFT).
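The granularity requirement in item 4 can be made concrete: a k-bit quantum Fourier transform uses controlled-phase rotations as fine as 2π/2^k, so a 20-bit QFT needs phase resolution on the order of millionths of a radian. A small sketch:

```python
import math

def smallest_qft_phase(k_bits: int) -> float:
    """Smallest controlled-phase rotation angle in a k-bit QFT: 2*pi / 2**k."""
    return 2 * math.pi / (2 ** k_bits)

for k in (10, 16, 20):
    print(f"{k}-bit QFT: smallest phase = {smallest_qft_phase(k):.3e} rad")
```

Whether current hardware can apply and maintain rotations that fine is exactly the kind of metric a roadmap milestone could pin down.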

Three distinct developer personas: kernel developers, algorithm developers, and model developers

In IBM’s approach, there are three distinct personas of developers, each with their own abilities, interests and needs, each requiring a distinct level of support:

  1. Kernel developers. Concerned with the details of constructing and executing quantum circuits, at the gate level. I would call these quantum algorithm designers.
  2. Algorithm developers. Concerned with how to build quantum applications using quantum algorithms. I would call these quantum application developers.
  3. Model developers. Concerned with how to apply quantum applications to solve high-level application problems. I would call these subject-matter experts or solution experts or solution specialists, and they should be working at the level I have proposed for configurable packaged quantum solutions.

Model is an ambiguous term — generic design vs. high-level application

The term model gets used ambiguously by IBM:

  1. Generic design or approach. A model for how to do things, or a design for a solution to a problem.
  2. A high-level application. Not just any piece of software, but software that is focused on a specific end-user problem or need. Used by a subject-matter expert.

Model developers — developing high-level applications

Just to highlight and emphasize the main focus of model developers.

Models seem roughly comparable to my configurable packaged quantum solutions

Separately I have written about my proposal for configurable packaged quantum solutions which would enable subject-matter experts to work in terms that make sense to them, not in the terms of quantum mechanics or either quantum or classical computing. This is not an exact match for IBM’s conception of model developers, but is at least in the right ballpark.

Tens of thousands of qubits

The roadmap mentions growing to support tens of thousands of qubits.

Hundreds of thousands of qubits

While elsewhere IBM indicates growing to support tens of thousands of qubits, in a couple of places they refer to hundreds of thousands of qubits.

Misguided to focus so heavily on more qubits since people have been unable to effectively use even 53, 65, or 127 qubits so far

It’s still rare to encounter quantum algorithms for practical real-world applications using more than a handful of qubits. Maybe occasionally 10–12. Rarely even 16. Only a rare few using more than 16 qubits. I’ve seen one algorithm using 21 qubits, and another using 23 qubits. And that’s about it.

  • Even IBM’s own near-term target underscores the gap: QV 1024 exercises only 10 qubits, so on a 433-qubit processor, log2(QV)/n = 10/433 = 2.31%.
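That arithmetic generalizes: since Quantum Volume QV = 2^n corresponds to an n-qubit square circuit, the fraction of a processor the benchmark actually exercises is log2(QV)/n_total. A sketch using IBM's announced qubit counts (127 for Eagle, 433 for Osprey, 1,121 for Condor):

```python
import math

def qv_usable_fraction(qv: int, total_qubits: int) -> float:
    """Fraction of a processor exercised by a Quantum Volume benchmark:
    QV = 2**n means an n-qubit square circuit, so fraction = n / total."""
    return math.log2(qv) / total_qubits

for name, n in (("Eagle", 127), ("Osprey", 433), ("Condor", 1121)):
    print(f"QV 1024 on {name} ({n} qubits): {qv_usable_fraction(1024, n):.2%}")
```

On Condor the fraction drops below one percent — a stark way to see quality lagging scale.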

IBM has not provided a justification for the excessive focus on qubit count over qubit fidelity and qubit connectivity (scale over quality)

What’s missing from the roadmap is any simple statement which offers a justification for why IBM is focusing so heavily on increasing qubit count with a priority over increasing qubit fidelity and qubit connectivity (within the chip and processor) — prioritizing scale over quality. Some possible explanations:

  1. Scaling is easier. Took less time.
  2. Quality is harder. Will take more time.
  3. Gave quality a higher priority, but research efforts didn’t pan out.
  4. Blindsided. Got the mistaken impression that boosting qubit quality was a piece of cake.
  5. Unspoken priority and intent to ramp up quantum error correction and logical qubits. Need even more physical qubits to get enough logical qubits for a practical quantum computer. Belief that quantum error correction is the best and fastest path to high qubit fidelity.
  6. Quantum error correction (QEC) is much harder than expected. They may have thought QEC would be done by now, or coming real soon — like within the next two years.
  7. Misguided faith in NISQ. Too many people and too much hype that amazing algorithms are possible even with noisy NISQ qubits. So where are all of the 40-qubit algorithms?
  8. Other. Plenty of reasons I haven’t thought of.

What do we need all of these qubits for?

IBM is intent on giving us all of these qubits, but to what end? This is a good question, an open question. We can speculate, but it would have been better if IBM had been upfront and clear as to their motivation.

  1. Circuit repetitions (or shots). The combination of the probabilistic nature of quantum computing and a bothersome error rate make it necessary to execute each circuit some number of times so that a statistical distribution can be constructed to endeavor to determine the expectation value for a quantum computation. The more parallel processors the better. Typical shot counts could be 100, 1,000, 10,000, 25,000 or more.
  2. Parallel execution of multiple quantum algorithms in the same quantum application. Different quantum algorithms, or maybe the same quantum algorithm with different input data or parameters. And each quantum algorithm likely requires many shots as well. So the more parallel processors the better.
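The role of shots in item 1 can be illustrated with a toy simulation: sample a noisy output distribution many times and take the most frequent bitstring as the answer. (The distribution here is invented purely for illustration.)

```python
import random
from collections import Counter

def run_shots(distribution: dict[str, float], shots: int) -> Counter:
    """Sample a circuit's (hypothetical) output distribution `shots` times."""
    outcomes = list(distribution)
    weights = list(distribution.values())
    return Counter(random.choices(outcomes, weights=weights, k=shots))

# Hypothetical noisy 2-qubit result: the "true" answer is '11'.
dist = {"11": 0.55, "10": 0.20, "01": 0.15, "00": 0.10}
random.seed(0)
counts = run_shots(dist, shots=1000)
best, _ = counts.most_common(1)[0]
print(best)  # with enough shots, the modal outcome recovers '11'
```

With only a handful of shots the modal outcome is unreliable; with 1,000 or 10,000 it stabilizes — which is exactly why shot counts, and hence parallel processors, matter.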

We need lots of processors, not lots of qubits

Just to highlight that last point from the previous section, having a single quantum processor with lots of qubits is not terribly useful at this stage. But shots (circuit repetitions) are a very pressing need even today and certainly in the coming years as we push towards production-scale practical quantum computing.

We need lots of processors for circuit repetitions for large shot counts

Just to reiterate and highlight a point from the preceding sections: having lots of quantum processors could allow the same circuit to be run many times in parallel to perform circuit repetitions when the shot count is non-trivial — thousands or tens of thousands of repetitions. Or even when it is trivial — tens or hundreds.

Modular processors needed for quantum knitting of larger quantum circuits

Limitations on qubit fidelity and qubit connectivity will plague quantum computing for quite a while over the next few years. Circuit knitting is one approach to deal with larger quantum circuits which cannot be accommodated on a single quantum processor due to qubit fidelity and qubit connectivity issues.

Two approaches to circuit knitting

The roadmap update discusses circuit knitting primarily in terms of running a larger quantum circuit on multiple smaller quantum processors, but from a bigger-picture perspective there are two distinct use cases for circuit knitting:

  1. Classical simulation of larger circuits. For a quantum circuit larger than the largest quantum circuit that can be simulated. Break up the quantum circuit into smaller quantum circuits which can be simulated separately, and then knit together the results from the separate simulations.
  2. Multi-processor quantum computer system with classical communication between the processors. Such as the multiple 133-qubit Heron quantum processors connected with classical communication. Similar break up of the larger quantum circuit into smaller quantum circuits which can each be run on a separate processor — in parallel, and then knit the results. In some cases the intermediate results can be directly communicated to the other processors, but in some cases the knitting must be performed using classical software after circuit execution has completed.

Using classical communication for circuit knitting with multiple, parallel quantum processors

Just to highlight and emphasize the point from the preceding section: classical communication between multiple, parallel quantum processors can facilitate execution of a circuit larger than any single processor can execute.

Paper on simulating larger quantum circuits on smaller quantum computers

IBM has a blog post and technical paper on simulating larger quantum circuits on smaller quantum computers.

  • At what cost can we simulate large quantum circuits on small quantum computers?
  • One major challenge of near-term quantum computation is the limited number of available qubits. Suppose we want to run a circuit consisting of 400 qubits, but we only have 100-qubit devices available. What do we do?
  • Over the course of the past year, the IBM Quantum team has begun researching a host of computational methods called circuit knitting. Circuit knitting techniques allow us to partition large quantum circuits into subcircuits that fit on smaller devices, incorporating classical simulation to “knit” together the results to achieve the target answer. The cost is a simulation overhead that scales exponentially in the number of knitted gates.
  • https://research.ibm.com/blog/circuit-knitting-with-classical-communication
  • Circuit knitting with classical communication
  • Christophe Piveteau, David Sutter
  • The scarcity of qubits is a major obstacle to the practical usage of quantum computers in the near future. To circumvent this problem, various circuit knitting techniques have been developed to partition large quantum circuits into subcircuits that fit on smaller devices, at the cost of a simulation overhead. In this work, we study a particular method of circuit knitting based on quasiprobability simulation of nonlocal gates with operations that act locally on the subcircuits. We investigate whether classical communication between these local quantum computers can help. We provide a positive answer by showing that for circuits containing n nonlocal CNOT gates connecting two circuit parts, the simulation overhead can be reduced from O(9n) to O(4n) if one allows for classical information exchange. Similar improvements can be obtained for general Clifford gates and, at least in a restricted form, for other gates such as controlled rotation gates.
  • https://arxiv.org/abs/2205.00016
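The paper's headline result is easy to quantify: cutting n nonlocal CNOT gates costs a sampling overhead of O(9^n) without classical communication, reduced to O(4^n) with it. A quick sketch of how fast that gap grows:

```python
def knitting_overhead(n_cnots: int, base: int) -> int:
    """Sampling overhead for cutting n nonlocal CNOT gates (base**n), per
    Piveteau & Sutter: base 9 without classical communication, 4 with it."""
    return base ** n_cnots

for n in (1, 2, 4, 8):
    no_cc = knitting_overhead(n, 9)
    with_cc = knitting_overhead(n, 4)
    print(f"{n} CNOTs cut: {no_cc:>10,} vs {with_cc:>8,} ({no_cc / with_cc:.0f}x saved)")
```

Either way the overhead is exponential in the number of cut gates, which is why knitting only pays off when a circuit needs just a few nonlocal gates across the cut.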

What exactly is classical communication between quantum processors?

Unfortunately, it’s not quite clear what classical communication between quantum processors really means. Presumably only strict classical binary 0 and 1 can be transferred between the quantum processors. This then raises some questions:

  1. Is the quantum state of the transferred qubits on the source quantum processor collapsed as in traditional qubit measurement?
  2. How are qubits selected to be transferred?
  3. How is the classical information transferred? Presumably some sort of bus, but what exactly is that?
  4. How exactly does the incoming classical information affect the state of any qubits on the destination processor? Is an actual quantum logic gate executed? If so, what gate? How does the classical bit participate in the gate, if any? Is a destination qubit initialized to the state of the incoming classical bit? Is a destination qubit flipped or not flipped based on the incoming classical bit? Or… what?

Not even a mention of improving connectivity between qubits within a chip or within a quantum processor

The roadmap offers no mention of improving connectivity between qubits within a chip or within a quantum processor. There is discussion on inter-chip and inter-processor connectivity, but no mention of improving connectivity within the chip or processor.

Is this a Tower of Babel, too complex and with too many moving parts?

I have a general concern that this is all getting far too complex, with too many moving parts. Developers need simplicity, with fewer moving parts, not a… Tower of Babel.

Rising complexity — need simplicity, eventually

It may be possible for initial applications of quantum computing to tolerate substantial complexity since the work requires elite technical teams and caters to the lunatic fringe. But that level of complexity will drastically limit expansion of the quantum computing sector.

What is Qiskit Runtime?

Qiskit Runtime allows the quantum application developer to package classical application code with quantum algorithms and send the combination to an IBM quantum computer system as a job to be executed together. The classical code runs on the classical computer embedded inside the IBM quantum computer system, with fast, direct access to the quantum processor, with no network latency between the classical code and the quantum circuit execution. It primarily benefits two kinds of workloads:

  1. Variational method algorithms. Will execute the same quantum algorithm a significant number of times, with classical optimization between the runs.
  2. Significant number of quantum algorithm invocations. The quantum application uses a lot of quantum algorithms, or needs to invoke some quantum algorithms a number of times.
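The first pattern — run the same parameterized circuit over and over with a classical optimizer in between — is the tight quantum-classical loop Qiskit Runtime is designed to keep close to the hardware. A schematic sketch, where `evaluate_circuit` is a hypothetical stand-in for executing a quantum circuit and estimating a cost from many shots:

```python
def evaluate_circuit(theta: float) -> float:
    """Stand-in for a quantum circuit execution that returns a cost
    estimated from many shots. Here: a simple classical proxy function."""
    return (theta - 1.5) ** 2 + 0.1

def variational_loop(theta: float, lr: float = 0.1, steps: int = 100) -> float:
    """Classical optimization wrapped around repeated quantum evaluations."""
    eps = 1e-4
    for _ in range(steps):
        # Finite-difference gradient: two more circuit evaluations per step.
        grad = (evaluate_circuit(theta + eps) - evaluate_circuit(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

print(round(variational_loop(0.0), 3))  # converges toward the optimum at 1.5
```

Each optimizer step triggers multiple circuit executions, each itself requiring many shots — so without Runtime's co-location, network latency would dominate the whole loop.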

What are Qiskit Runtime Primitives all about?

These are simply functions available in Qiskit Runtime which facilitate interaction between a quantum application and a quantum algorithm.

  1. Sampler. Returns a quasi-probability distribution over the measured bitstrings.
  2. Estimator. Returns the expectation value of an observable.
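What the two primitives compute can be shown without the Qiskit API itself (whose exact signatures have changed across releases): a Sampler turns raw shot counts into a quasi-probability distribution, and an Estimator turns them into an expectation value. A library-free sketch for the observable Z on every qubit:

```python
def sample_distribution(counts: dict[str, int]) -> dict[str, float]:
    """Sampler-style output: normalize shot counts into a probability
    distribution over measured bitstrings."""
    shots = sum(counts.values())
    return {bits: c / shots for bits, c in counts.items()}

def estimate_z_expectation(counts: dict[str, int]) -> float:
    """Estimator-style output: expectation of Z on every qubit, where each
    bitstring contributes (-1)**(number of 1s) weighted by its frequency."""
    shots = sum(counts.values())
    return sum((-1) ** bits.count("1") * c / shots for bits, c in counts.items())

counts = {"00": 450, "11": 450, "01": 50, "10": 50}
print(sample_distribution(counts))
print(estimate_z_expectation(counts))  # 0.45 + 0.45 - 0.05 - 0.05, roughly 0.8
```

The real primitives add error mitigation and hardware-aware batching on top, but the input/output contract is essentially this.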

What is Quantum Serverless?

It’s an odd term since clearly an IBM quantum computer system is a server accessed over the Internet. The essential meaning of Quantum Serverless is that the user can run their quantum workload on a server without having to provision that server specifically for the user’s workload. The user doesn’t need to worry about deployment and infrastructure.

  • … we need to ensure that our users can take advantage of quantum resources at scale without having to worry about the intricacies of the hardware — we call this frictionless development — which we hope to achieve with a serverless execution model.
  • https://research.ibm.com/blog/quantum-serverless-programming
  1. Introducing Quantum Serverless, a new programming model for leveraging quantum and classical resources
  2. To bring value to our users and clients with our systems we need our programing model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about the deployment and infrastructure. In other words, we need a serverless architecture.
  3. The rate of progress in any field is often dominated by iteration times, or how long it takes to try a new idea in order to discover whether it works. Long iteration times encourage careful behavior and incremental advances, because the cost of making a mistake is high. Fast iterations, meanwhile, unlock the ability to experiment with new ideas and break out of old ways of doing things. Accelerating progress therefore relies on increasing the speed we can iterate. It is time to bring a flexible platform that enables fast iteration to quantum computing.
  4. https://research.ibm.com/blog/quantum-serverless-programming

A little confusion between Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives

IBM hasn’t drawn quite enough of a bright-line distinction between the conceptual meanings of Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives. The terms get conflated — sometimes used interchangeably, sometimes as distinct parts of the same stack.

  1. Orchestrating quantum and classical
  2. The unique power of quantum computers is their ability to generate non-classical probability distributions at their outputs.
  3. In 2023 we will introduce Quantum Serverless to our stack and provide tools for quantum algorithm developers to sample and estimate properties of these distributions.
  4. These tools will include intelligent orchestration and the Circuit Knitting toolbox. With these powerful tools developers will be able to deploy workflows seamlessly across both quantum and classical resources at scale, without the need for deep infrastructure expertise.
  5. Finally, at the very top of our stack, we plan to work with our partners and wider ecosystems to build application services into software applications, empowering the widest adoption of quantum computing.
  6. https://www.ibm.com/quantum/roadmap

A little confusion between Frictionless Development and Quantum Serverless

Frictionless development is more about the benefit to developers, while Quantum Serverless is the method by which that benefit is achieved.

  1. Frictionless development. Makes development easier.
  2. Quantum Serverless. Enables frictionless development.
  3. Qiskit Runtime. Enables Quantum Serverless.

IBM’s commitment to double Quantum Volume (QV) each year

IBM had previously, back in 2019, announced a commitment to double Quantum Volume (QV) each year.

Will Quantum Volume double every year?

The roadmap itself doesn’t give any indication of milestones for Quantum Volume. If the doubling commitment holds, starting from the stated QV 1024 target for 2022, the projection would be:

  1. 2022. QV 1024. 10 qubits.
  2. 2023. QV 2048. 11 qubits.
  3. 2024. QV 4096. 12 qubits.
  4. 2025. QV 8192. 13 qubits.
  5. 2026. QV 16K. 14 qubits.
  6. 2027. QV 32K. 15 qubits.
  7. 2028. QV 64K. 16 qubits.
  8. 2029. QV 128K. 17 qubits.
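The projection above is just repeated doubling from IBM's stated QV 1024 target for 2022, so the table can be generated directly:

```python
import math

# Project Quantum Volume under IBM's doubling commitment,
# starting from the stated QV 1024 target for 2022.
qv = 1024
for year in range(2022, 2030):
    print(f"{year}: QV {qv:,} = {int(math.log2(qv))} qubits")
    qv *= 2
```

Note what the table implies: even by 2029, yearly doubling yields only 17-qubit algorithm capacity — one more qubit per year.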

Will anything on the roadmap make a significant difference to the average quantum algorithm designer or quantum application developer in the near term? Not really

There’s a lot of interesting stuff on the updated roadmap, but so much of it is a few years away. So the question comes up as to whether there is anything on the roadmap that will make a significant difference to the average quantum algorithm designer or quantum application developer in the near term — say, the next six months to a year.

It’s not on the roadmap, but we really need a processor with 48 fully-connected near-perfect qubits

A quantum processor with 48 fully-connected near-perfect qubits would enable a 20-bit quantum Fourier transform (QFT) and possibly achieve a significant quantum advantage of performance 1,000,000 X better than a classical processor.
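The 1,000,000X figure is, at bottom, just 2^20 — my reading of the arithmetic, not an IBM claim: a k-bit quantum Fourier transform works across 2^k basis states at once, so a 20-bit QFT spans about a million.

```python
# A k-bit QFT spans 2**k basis states; for k = 20 that is roughly one million,
# the source of the claimed 1,000,000X advantage over classical evaluation.
k = 20
advantage = 2 ** k
print(f"{advantage:,}")  # 1,048,576
```

The 48-qubit sizing then allows for the QFT's input register plus working qubits, with full connectivity and near-perfect fidelity so the fine phase rotations survive.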

No significant detail on logical qubits and quantum error correction

IBM briefly mentioned error correction, but provided no detail, and didn’t even mention logical qubits. This leaves us hanging:

  1. No detailed milestones for full quantum error correction (QEC).
  2. No sense of full quantum error correction being a high priority. Error mitigation may be a higher priority.
  3. No hint of physical qubit count per logical qubit. Is it 57 or 65 qubits, as an IBM paper seemed to suggest, or… what?
  4. When will IBM have enough qubits for full quantum error correction? For 1, 2, 5, 8, and 12 logical qubits? Just to get started and prove the concepts.
  5. No detailed milestones for logical qubit counts. Like 1, 5, 8, 12, 16, 20, 24, 28, 32, 48, 64, 80, 96, 128, 256, or more. Google offers milestones. Enough to support production-scale practical real-world quantum applications.
  6. What will the actual functional transition milestones be on the path to logical qubits?
  7. Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  8. Will future machines support only logical qubits or will physical qubit circuits still be supported?

No explanation for error suppression

IBM hasn’t provided us with any explanation of what they mean by error suppression.

Physical qubit fidelity is a necessary base even for full quantum error correction, as well as error suppression and mitigation

IBM offers no insight on whether they intend to exert any significant effort to improve raw physical qubit fidelity — and the Hummingbird and Eagle announcements didn’t either. IBM does talk about quantum error suppression and error mitigation, and eventually full quantum error correction. But in truth, enhancement of raw physical qubit fidelity is a useful and necessary foundation even if those other approaches are used.

Error suppression, error mitigation, and even full error correction are not valid substitutes for higher raw physical qubit fidelity

Restating the previous section a little differently, achieving error suppression, error mitigation, or even full error correction are not valid substitutes for achieving higher raw physical qubit fidelity — since higher raw physical qubit fidelity is the foundation upon which error suppression, error mitigation, and even full error correction are based.

Net qubit fidelity: raw physical qubit fidelity, error suppression, mitigation, correction, and statistical aggregation to determine expectation value

Just to tie it all together, the goal, the ultimate metric is the net qubit fidelity, which starts with and builds upon the raw physical qubit fidelity:

  1. Raw physical qubit fidelity.
  2. Error suppression.
  3. Error mitigation.
  4. Full quantum error correction (QEC).
  5. Statistical aggregation of multiple runs (shots) to determine expectation value. Examine the statistical distribution to determine the most common result.
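Why raw physical fidelity is the foundation of that whole stack can be seen in a toy model (my illustration, not an IBM metric): a circuit's success probability decays as gate fidelity raised to the gate count, and the higher layers can only claw back part of the loss.

```python
def circuit_success(gate_fidelity: float, gate_count: int) -> float:
    """Toy model: probability a circuit runs error-free if every gate
    independently succeeds with probability `gate_fidelity`."""
    return gate_fidelity ** gate_count

# A 40-qubit algorithm might easily need ~1,000 two-qubit gates.
for f in (0.99, 0.999, 0.9999):
    print(f"fidelity {f}: success over 1,000 gates = {circuit_success(f, 1000):.3f}")
```

At two nines the signal over 1,000 gates is effectively zero; no amount of post-processing recovers a signal that has fully decayed, which is why three or four nines of raw fidelity matter more than any single mitigation layer.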

Emphasis on variational methods won’t lead to any dramatic quantum advantage

If quantum Fourier transform (QFT) cannot be used, primarily due to weak qubit fidelity and weak qubit connectivity, one category of alternatives is variational methods. Unfortunately, they are not anywhere near as powerful as quantum Fourier transform.

Premature integration with classical processing

Integration of quantum and classical processing is an important area to pursue, but I’m not convinced that the technology or timing is ready for much focus on this in terms of a feature that current quantum algorithm designers and quantum application developers can readily make use of.

Some day modular design and higher qubit counts will actually matter, but not now and not soon

I appreciate IBM’s interest, willingness, and commitment to modular processor design and higher qubit counts, and some day both will matter very urgently, but that day is not today and won’t be any time soon.

Nuances of the various approaches to interconnections lead to more complex tooling and burden on developers

Having a variety of approaches to connectivity can provide opportunities for more flexible approaches to algorithm design, but it can also have three negative side effects:

  1. More complex tooling is needed. Even if the nuances of distinction between the various approaches to interconnection can in fact be reduced to a set of rules, it can lead to requiring that tools, particularly compilers and transpilers, must be significantly more complicated. That won’t come for free. It will impact somebody.
  2. Impact on algorithm design and application design. Not all of the nuances of interconnection can be reduced to simple rules which can be handled automatically by compilers, transpilers, and other tools. Eventually some of these nuances bubble up and impact the designers of quantum algorithms and circuits, and even the developers of quantum applications.
  3. Efficiency considerations which can’t be fully automated and fully mitigated. The efficiency considerations of the nuances may have a negative impact on performance which can’t be fully automated and fully mitigated, leading to performance degradation or the need for quantum algorithm designers and quantum application developers to jump through hoops to try to avoid the negative impacts, which they may or may not be able to successfully do.

Designers of quantum algorithms and developers of quantum applications need frictionless design and development, not more friction

Just to highlight and emphasize that last point from the preceding section — that the designers of quantum algorithms and the developers of quantum applications need to be further isolated from nuances of the hardware. They need frictionless design and development, not more friction.

This is still IBM research, not a commercial product engineering team

IBM has been doing great research and is proposing to do more great research, which is a good thing, but it’s also an issue that highlights that they are still deep in the pre-commercialization phase of quantum computing, and still years from being ready to transition to true commercialization, which requires that all of the research questions and issues have been addressed and resolved.

Risk of premature commercialization

As just mentioned, IBM is busy doing research and has lots more research to do before their quantum computing efforts can be turned into commercial products.

Will the IBM roadmap be enough to avoid a Quantum Winter? Unclear

It’s very difficult to say whether IBM’s quantum roadmap will be able to prevent the nascent quantum computing sector from falling into a Quantum Winter — when people grow disenchanted with progress and the available technology, realizing that it’s not ready for production deployment of production-scale practical real-world quantum applications.

Need to double down on research — and prototyping and experimentation

If premature commercialization is a risk, the cure is to double down on pre-commercialization, particularly research, but also prototyping and experimentation.

Need for development of industry standards

In the not too distant future it will be necessary to pursue a stabilization of many of the features of quantum computing, in the form of industry standards.

My raw notes from reviewing IBM’s announcement

The main reason I include my raw notes here is that I put a lot of work into taking these notes and I wanted to preserve them since not everything in them made it into the main body of this paper. I didn’t want to lose them; this seemed to be the best place to preserve them.

  1. IBM Unveils New Roadmap to Practical Quantum Computing Era; Plans to Deliver 4,000+ Qubit System
  2. Orchestrated by intelligent software, new modular and networked processors to tap strengths of quantum and classical to reach near-term Quantum Advantage
  3. Qiskit Runtime to broadly increase accessibility, simplicity, and power of quantum computing for developers
  4. Ability to scale, without compromising speed and quality, will lay groundwork for quantum-centric supercomputers
  5. Leading Quantum-Safe capabilities to protect today’s enterprise data from ‘harvest now, decrypt later’ attacks
  6. May 10, 2022
  7. Armonk, N.Y., May 10, 2022 — IBM (NYSE: IBM) today announced the expansion of its roadmap for achieving large-scale, practical quantum computing. This roadmap details plans for new modular architectures and networking that will allow IBM quantum systems to have larger qubit-counts — up to hundreds of thousands of qubits. To enable them with the speed and quality necessary for practical quantum computing, IBM plans to continue building an increasingly intelligent software orchestration layer to efficiently distribute workloads and abstract away infrastructure challenges.
  8. https://newsroom.ibm.com/2022-05-10-IBM-Unveils-New-Roadmap-to-Practical-Quantum-Computing-Era-Plans-to-Deliver-4,000-Qubit-System
  1. Efficiently distribute workloads
  2. Abstract away infrastructure challenges
  1. robust and scalable quantum hardware
  2. cutting-edge quantum software to orchestrate and enable accessible and powerful quantum programs
  3. a broad global ecosystem of quantum-ready organizations and communities
  1. Is frictionless development primarily about Qiskit runtime? Seems so.
  2. Is this serverless as well?? Seems… odd.
  1. classically communicate and parallelize operations across multiple processors. improved error mitigation techniques. intelligent workload orchestration. combining classical compute resources with quantum processors that can extend in size
  2. deploying short-range, chip-level couplers. closely connect multiple chips together to effectively form a single and larger processor and will introduce fundamental modularity that is key to scaling
  3. providing quantum communication links between quantum processors. IBM has proposed quantum communication links to connect clusters together into a larger quantum system
  1. Earlier this year, IBM launched Qiskit Runtime primitives that encapsulate common quantum hardware queries used in algorithms into easy-to-use interfaces. In 2023, IBM plans to expand these primitives, with capabilities that allow developers to run them on parallelized quantum processors thereby speeding up the user’s application. [What is this really??]
  2. These primitives will fuel IBM’s target to deliver Quantum Serverless into its core software stack in 2023, to enable developers to easily tap into flexible quantum and classical resources. As part of the updated roadmap, Quantum Serverless will also lay the groundwork for core functionality within IBM’s software stack to intelligently trade off and switch between elastic classical and quantum resources; forming the fabric of quantum-centric supercomputing. [Again, what is this really all about??]
  1. Will offer the infrastructure needed to successfully link together multiple quantum processors.
  2. A prototype of this system is targeted to be up and running in 2023.
  3. [Is System Two itself not available until 2023, or just the multi-processor link?]
  1. Cyber resiliency. Quantum-safe cryptography.
  2. IBM is home to some of the best cryptographic experts globally who have developed quantum-safe schemes that will be able to deliver practical solutions to this problem
  3. IBM is working in close cooperation with its academic and industrial partners, as well as the U.S. National Institute of Standards and Technology (NIST), to bring these schemes to the forefront of data security technologies
  4. IBM is announcing its forthcoming IBM Quantum Safe portfolio of cryptographic technologies and consulting expertise designed to protect clients’ most valuable data in the era of quantum
  5. IBM’s Quantum Safe portfolio
  1. Education
  2. Strategic guidance
  3. Risk assessment and discovery
  4. Migration to agile and quantum-safe cryptography. IBM has already implemented agile and quantum-safe cryptography in z16, IBM’s first mainframe system to employ quantum-safe cryptography.
  • Expanding the IBM Quantum roadmap to anticipate the future of quantum-centric supercomputing
  • We are explorers. We’re working to explore the limits of computing, chart the course of a technology that has never been realized, and map how we think these technologies will benefit our clients and solve the world’s biggest challenges. But we can’t simply set out into the unknown. A good explorer needs a map.
  • Two years ago, we issued our first draft of that map to take our first steps: our ambitious three-year plan to develop quantum computing technology, called our development roadmap. Since then, our exploration has revealed new discoveries, gaining us insights that have allowed us to refine that map and travel even further than we’d planned. Today, we’re excited to present to you an update to that map: our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone.
  • Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute. In order to do so, we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.
  • https://research.ibm.com/blog/ibm-quantum-roadmap-2025
  1. AFAICT, their description of the algorithm developer is simply the application-side code that looks at the raw quantum results and figures out what the final result will be for the application to use.
  2. This is the statistical analysis that I refer to for developing an expectation value from circuit repetitions.
  3. And then the kernel developer is focused on the actual quantum circuit, which was generated from the application — mapping the logic of the algorithm to specific gates of the circuit, although ultimately a compiler maps the logical circuit to an actual circuit.
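A minimal sketch of that statistical analysis, in plain Python (my own illustration, not IBM’s or Qiskit’s actual code): given raw counts from many repetitions (shots) of a circuit, estimate the expectation value of an all-qubit Z (parity) observable.

```python
# Illustration only: the post-processing an algorithm developer performs to
# turn raw shot counts into an expectation value. Names are mine.

def z_expectation(counts):
    """Expectation value of Z on every qubit, estimated from counts.

    counts maps measured bitstrings (e.g. "01") to how many of the shots
    produced them. A bitstring contributes +1 if it contains an even
    number of 1s, -1 if odd.
    """
    shots = sum(counts.values())
    total = 0
    for bitstring, n in counts.items():
        sign = 1 if bitstring.count("1") % 2 == 0 else -1
        total += sign * n
    return total / shots

# 1,000 shots of a noisy Bell-state measurement:
counts = {"00": 480, "11": 470, "01": 30, "10": 20}
print(z_expectation(counts))  # 0.9 -- would be 1.0 with no errors
```

More shots shrink the statistical error on this estimate (roughly as 1/sqrt(shots)), which is one reason the roadmap’s emphasis on CLOPS matters for these workloads.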
  1. To bring value to our users and clients with our systems we need our programming model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about the deployment and infrastructure. In other words, we need a serverless architecture.
  2. https://research.ibm.com/blog/quantum-serverless-programming
  1. Those who develop quantum applications to find solutions to complex problems in their specific domains.
  2. We think by next year, we’ll begin prototyping quantum software applications for specific use cases.
  3. We’ll begin to define these services with our first test case — machine learning — working with partners to accelerate the path toward useful quantum software applications.
  4. By 2025, we think model developers will be able to explore quantum applications in machine learning, optimization, natural sciences, and beyond.
  1. 133-qubit Heron (2023) with real-time classical communication between separate processors, enabling the knitting techniques. [Some arbitrary number of Herons linked classically. Not so sure the processors themselves are linked; maybe just shared control logic. May be optimized for doing shots — multiple runs of the same circuit.]
  2. The second approach is to extend the size of quantum processors by enabling multi-chip processors. “Crossbill,” a 408 qubit processor, will be made from three chips connected by chip-to-chip couplers that allow for a continuous realization of the heavy-hex lattices across multiple chips. The goal of this architecture is to make users feel as if they’re using just one larger processor.
  3. In 2024, we also plan to introduce our third approach: quantum communication between processors to support quantum parallelization. We will introduce the 462-qubit “Flamingo” processor with a built-in quantum communication link, and then release a demonstration of this architecture by linking together at least three Flamingo processors into a 1,386-qubit system. We expect that this link will result in slower and lower-fidelity gates across processors. Our software needs to be aware of this architecture consideration in order for our users to best take advantage of this system.
  1. Now, IBM is ushering in the age of the quantum-centric supercomputer, where quantum resources — QPUs — will be woven together with CPUs and GPUs into a compute fabric.
  2. We think that the quantum-centric supercomputer will serve as an essential technology for those solving the toughest problems, those doing the most ground-breaking research, and those developing the most cutting-edge technology.
  3. Following our roadmap will require us to solve some incredibly tough engineering and physics problems.
  4. We’ve gotten this far, after all, with the help of our world-leading team of researchers, the IBM Quantum Network, the Qiskit open source community, and our growing community of kernel, algorithm, and model developers.
  1. IBM Quantum 2022 Updated Development Roadmap
  2. Jay Gambetta, IBM Fellow and VP of Quantum Computing, unveils the updated IBM Quantum development roadmap through to 2025.
  3. We now believe we have what it takes to scale quantum computers into what we’re calling quantum-centric supercomputers, making it easier than ever for our clients to incorporate quantum capabilities into their respective domains, and access resources with a serverless programming model thanks to Qiskit runtime. In this video, the IBM Quantum team presents 3 new processors demonstrating breakthroughs in scaling by introducing modularity, allowing multi-chip processors, classical parallelization, and quantum parallelization to build larger, more capable systems.
  4. https://www.youtube.com/watch?v=0ka20qanWzI
  1. Increase the performance of the processor
  2. Develop a better understanding of how to deal with the errors
  3. Simplify how a quantum computer is programmed
  1. Bring dynamic circuits to the stack
  2. 433-qubit Osprey processor by end of the year
  3. Demonstrate a quantum volume of 1024
  4. Increase speed from 1.4K CLOPS to 10K CLOPS
  1. Number of qubits
  2. Quantum Volume
  3. CLOPS (Circuit Layer Operations Per Second)
  1. Heron processor in 2023: 133 qubits.
  2. Crossbill processor in 2024: 408 qubits.
  3. Flamingo processor in 2024: 1,386 qubits.
  4. Kookaburra processor in 2025: 4,158 qubits.
  1. Completely redesigned gates
  2. New tunable couplers that allow fast gates
  3. While simultaneously limiting crosstalk
  1. Short-range chip-to-chip couplers
  2. Enabling quantum computing with classical communications
  3. Classically parallelized systems
  4. 133 x p qubits for p classically linked processors
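A toy sketch of what classical parallelization could mean in the simplest case (my own hypothetical illustration, not IBM’s design): the same circuit’s shots are farmed out to p processors and the per-processor counts are merged classically afterward.

```python
# Hypothetical illustration of classical parallelization across p processors:
# each processor runs the same circuit for a share of the shots, and the
# resulting counts are merged classically afterward.
from collections import Counter

def merge_counts(per_processor_counts):
    """Combine the counts dicts from p parallel runs of one circuit."""
    merged = Counter()
    for counts in per_processor_counts:
        merged.update(counts)  # Counter.update adds counts, not replaces
    return dict(merged)

# e.g. three 133-qubit-class processors, 100 shots each:
runs = [{"00": 60, "11": 40}, {"00": 55, "11": 45}, {"00": 58, "11": 42}]
print(merge_counts(runs))  # {'00': 173, '11': 127} -- 300 shots total
```

Since shots of the same circuit are independent, this kind of parallelization needs no quantum links between the processors, only shared classical orchestration.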
  1. Long-range coupler to connect chips through a cryogenic cable around a meter long
  2. Quantum parallelization of quantum processors
  3. But slower and lower fidelity since it involves a physical cable
  4. Each chip will have only a few connections to other chips
  5. Demonstrate 1,386-qubit Flamingo in 2024
  1. Kernel developers. Focus on making quantum circuits run better and faster on real hardware.
  2. Algorithm developers. Use circuits within classical routines, focused on applications that demonstrate quantum advantage.
  3. Model developers. Use applications to find useful solutions to complex problems in a specific domain.
  1. On exploratory systems by May
  2. Reducing circuit depth
  3. Alternative models for algorithms
  4. Parity checks for QEC
  5. 3rd generation control system
  6. OpenQASM3 — circuit description language
  7. OpenQASM3 native compiler
  1. Sampler. Quasi-probability distribution
  2. Estimator. Expectation value
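To make the two return types concrete, here is a plain-Python caricature (not the actual Qiskit Runtime API; function names are my own): a Sampler-style result is a (quasi-)probability distribution over bitstrings, and an Estimator-style result weights each outcome’s observable eigenvalue by its probability.

```python
# Plain-Python caricature of the two primitive result types; this is not
# the real Qiskit Runtime API, and the function names are my own.

def quasi_distribution(counts):
    """Sampler-style result: counts normalized into a distribution.

    (After error mitigation, a genuine quasi-probability distribution may
    contain small negative entries that still sum to 1; raw counts cannot.)
    """
    shots = sum(counts.values())
    return {bits: n / shots for bits, n in counts.items()}

def expectation_value(dist, eigenvalue):
    """Estimator-style result: sum of eigenvalue(outcome) * probability."""
    return sum(p * eigenvalue(bits) for bits, p in dist.items())

counts = {"00": 500, "11": 500}
dist = quasi_distribution(counts)  # {'00': 0.5, '11': 0.5}
zz = expectation_value(dist, lambda bits: (-1) ** bits.count("1"))
print(zz)  # 1.0 for a perfect Bell state under the ZZ observable
```

The point of the primitives is that users ask for these high-level results directly, instead of managing raw counts and post-processing themselves.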
  1. Orchestration level to stitch quantum and classical data streams together
  2. Powerful paradigm to enable flexible quantum/classical resource combinations without requiring developers to be infrastructure experts
  1. Entanglement forging
  2. Quantum embedding
  3. Circuit cutting
  4. Circuit knitting toolbox in 2025
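For intuition on the classical post-processing behind circuit cutting, here is the most trivial possible case, sketched in plain Python (my own toy example, assuming the cut crosses no entangling gate; real circuit knitting handles entangling cuts with extra basis measurements and far heavier classical post-processing).

```python
# Toy bookkeeping for the trivial circuit-cutting case: if the cut severs no
# entangling gate, the two fragments are independent, and the joint outcome
# distribution is just the product of the fragments' distributions.

def combine_fragments(dist_a, dist_b):
    """Joint distribution over concatenated bitstrings of two
    independent circuit fragments."""
    return {
        bits_a + bits_b: p_a * p_b
        for bits_a, p_a in dist_a.items()
        for bits_b, p_b in dist_b.items()
    }

frag1 = {"0": 0.5, "1": 0.5}  # fragment measured in superposition
frag2 = {"0": 1.0}            # fragment left in |0>
print(combine_fragments(frag1, frag2))  # {'00': 0.5, '10': 0.5}
```

The classical cost of the real technique grows rapidly with the number of cut gates, which is presumably part of why the full toolbox is a 2025 item.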
  1. Building software for model developers. Use Runtime and serverless to address specific use cases. Focus on domain experts
  2. Enable software applications for domain experts to bring quantum algorithms and data together
  3. Prototype quantum software applications, which will then become full quantum software applications
  4. Work with partners who will help us accelerate our path to software applications
  5. Integrate machine learning and kernel algorithms into model developer applications
  1. Where quantum resources (QPUs) are woven together with CPUs and GPUs into a compute fabric
  2. With this compute fabric we will build the essential technology of the 21st century
  3. We’ve got a lot of science to do! [Not engineering?!]
  1. In 2024, we will debut Crossbill, the first single processor made from multiple chips. [Not clear what chips they would be since 408 is not a multiple of 133 or 127. If 4 chips, 4 x 102 = 408. If 3 chips, 3 x 136 = 408. Or maybe they use Heron but just not all of the qubits accessible]
  2. Dynamic Circuits. Extend what the hardware can do by reducing circuit depth
  3. In 2024 we will incorporate error suppression and error mitigation to help kernel developers manage quantum hardware noise and take further steps on the path to error correction. [But do further steps on the path to error correction actually refer to QEC with logical qubits, or just mitigation?]
  1. IBM Unveils Expanded Quantum Roadmap; Talks Up ‘Quantum-Centric Supercomputer’
  2. By John Russell
  3. May 10, 2022
  4. https://www.hpcwire.com/2022/05/10/ibm-unveils-expanded-quantum-roadmap-talks-up-quantum-centric-supercomputer/
  5. Mostly follows the blog post
  1. https://thequantuminsider.com/2022/05/10/ibms-latest-roadmap-shows-path-to-deliver-4000-qubit-system/
  2. https://thequantuminsider.com/2022/06/30/a-century-in-the-making-ibm-quantums-development-roadmap-building-the-future-of-a-nascent-technology/

My original proposal for this topic

For reference, here is the original proposal I had for this topic. It may have some value for some people wanting a more concise summary of this paper.

  • Thoughts on the 2022 IBM Quantum Roadmap update. Tower of Babel? Too complex, too many moving parts. Improper priority of scaling qubit count over basic qubit quality. Insufficient detail on milestones for full automatic and transparent quantum error correction. No hints on any improvement to qubit connectivity. No detail or even mention of improvements to classical simulation, either performance or capacity. No hints or mentions of any enhancements to Falcon, Hummingbird, or Eagle.

Summary and conclusions

  1. Major focus on modularity and scaling of hardware architecture, software and tools for applications, and partners and building an ecosystem.
  2. The hardware architectural advances are technically impressive.
  3. Too much focus on higher qubit count. With no clear purpose.
  4. No real focus on higher qubit fidelity. No specific milestones listed. It just comes across as being an afterthought rather than a primary focus. And right now quality (qubit fidelity) is seriously lagging behind scaling (qubit count).
  5. No attention given to qubit connectivity. No recognition of the problem or path to addressing it.
  6. A lot of extra complexity. With little benefit to developers.
  7. No real focus on a simpler developer experience. No serious attempt to minimize or reduce developer complexity. So-called frictionless development is still very high friction.
  8. Too vague on milestones for full quantum error correction.
  9. No milestones or metrics for the path to quantum advantage. How will we know when we’ve reached quantum advantage, and what can we say about it?
  10. No true sense of exactly when we would finally arrive at practical quantum computing. Again, what are the specific metrics?
  11. No sense of when IBM would offer a commercial product or service. Still focused on research, prototyping, and experimentation — pre-commercialization.
  12. No hint of quality or connectivity updates for Falcon, Hummingbird, or Eagle.
  13. Good to see such transparency.
  14. But significantly more transparency and detail is needed.
  15. Unclear if sufficient to avert a Quantum Winter in two to three years.

--

Freelance Consultant