Thoughts on the 2022 IBM Quantum Roadmap Update

Jack Krupansky
Aug 10, 2022


IBM posted an update to its quantum roadmap for achieving large-scale, practical quantum computing back in May (2022). This informal paper gives my thoughts on this update to their previous roadmap.

Since IBM issued their roadmap update in May, I’ve had three months to study and digest it in enough detail to organize my thoughts in at least a semi-coherent manner.

This update extends their previous roadmap from 2020. Most of my thoughts from that previous roadmap remain intact. IBM has already met some of their earlier milestones, although I do have some reservations about the results.

This paper only briefly summarizes my thoughts from the previous roadmap, focusing on the additional milestones added with this update.

The key highlights of my reaction to the roadmap update:

  1. Major focus on modularity and scaling of hardware architecture, software and tools for applications, and partners and building an ecosystem.
  2. The hardware architectural advances are technically impressive.
  3. Too much focus on higher qubit count, with no clear purpose.
  4. No real focus on higher qubit fidelity. No specific milestones listed. It just comes across as an afterthought rather than a primary focus. And right now quality (qubit fidelity) is seriously lagging behind scaling (qubit count).
  5. No attention given to qubit connectivity. No recognition of the problem or path to addressing it.
  6. A lot of extra complexity. With little benefit to developers.
  7. No real focus on a simpler developer experience. No serious attempt to minimize or reduce developer complexity. So-called frictionless development is still very high friction.
  8. Too vague on milestones for full quantum error correction.
  9. No milestones or metrics for the path to quantum advantage. How will we know when we’ve reached quantum advantage, and what can we say about it?
  10. No true sense of exactly when we would finally arrive at practical quantum computing. Again, what specific metrics.
  11. No sense of when IBM would offer a commercial product or service. Still focused on research, prototyping, and experimentation — pre-commercialization.
  12. No hint of quality or connectivity updates for Falcon, Hummingbird, or Eagle.
  13. Good to see such transparency.
  14. But significantly more transparency and detail is needed.
  15. Unclear if sufficient to avert a Quantum Winter in two to three years.

This paper won’t provide full detail for the IBM roadmap, but will provide a meaningful summary of the highlights. The details of the roadmap are contained in three documents and a video from IBM, linked and summarized in the section Roadmap documents and video.

Topics discussed in this paper:

  1. My thoughts on IBM’s previous roadmap
  2. Major positive highlights
  3. Major negative highlights
  4. Three pillars to usher in an era of practical quantum computing
  5. Summary of new hardware
  6. Summary of new software capabilities
  7. Summary of roadmap milestones by year
  8. Roadmap documents and video
  9. Previous IBM Quantum hardware roadmap
  10. Scale, quality, and speed as the three essential dimensions of quantum computing performance
  11. Performance = Scale + Quality + Speed
  12. Four major objectives for 2022
  13. Achieve Quantum Volume of 1024 this year
  14. My thoughts on Eagle and Osprey
  15. Osprey is not committed for more than Quantum Volume of 1024
  16. Less than four months until Osprey is formally introduced
  17. 133-qubit Heron is a classical multi-core quantum processor
  18. Will the 133-qubit Heron processor offer much over the 127-qubit Eagle processor?
  19. Crossbill will be IBM’s first multi-chip quantum processor
  20. Crossbill may be more of an internal engineering milestone rather than offering any features to developers
  21. Will the 408-qubit Crossbill offer any advantage over the 433-qubit Osprey?
  22. Flamingo is a modular quantum processor
  23. Will the 1,386-qubit Flamingo offer much advantage over the 1,121-qubit Condor?
  24. How many chips are in a Kookaburra processor?
  25. How many Kookaburra processors can be connected in a single system?
  26. Beyond 2026… or is it 2026 and Beyond?
  27. Hardware for scaling to 10K-100K qubits
  28. At what stage will multiple Quantum System Two systems be linked?
  29. Every processor should have qubit fidelity and Quantum Volume targets in addition to its qubit count
  30. Supply capabilities label for every processor in the roadmap
  31. Unclear if every new processor in a given year will meet the Quantum Volume target of doubling every year
  32. Will Osprey, Heron, and Condor necessarily exceed the qubit fidelity and Quantum Volume of the best Falcon from this year?
  33. When can Falcon and Hummingbird be retired?
  34. Does Hummingbird have any value now that Eagle is available?
  35. When will IBM have a processor with better qubit quality than Falcon?
  36. Are all of the new processors NISQ devices?
  37. Intelligent software orchestration layer
  38. Serverless programming model to allow quantum and classical processors to work together frictionlessly
  39. Capabilities and metrics that are not mentioned in the IBM roadmap
  40. Additional needs not covered by the IBM roadmap
  41. Critical needs for quantum computing
  42. Three distinct developer personas: kernel developers, algorithm developers, and model developers
  43. Model is an ambiguous term — generic design vs. high-level application
  44. Model developers — developing high-level applications
  45. Models seem roughly comparable to my configurable packaged quantum solutions
  46. Tens of thousands of qubits
  47. Hundreds of thousands of qubits
  48. Misguided to focus so heavily on more qubits since people have been unable to use even 53, 65, or 127 qubits effectively so far
  49. IBM has not provided a justification for the excessive focus on qubit count over qubit fidelity and qubit connectivity (scale over quality)
  50. What do we need all of these qubits for?
  51. We need lots of processors, not lots of qubits
  52. We need lots of processors for circuit repetitions for large shot counts
  53. Modular processors needed for quantum knitting of larger quantum circuits
  54. Two approaches to circuit knitting
  55. Using classical communication for circuit knitting with multiple, parallel quantum processors
  56. Paper on simulating larger quantum circuits on smaller quantum computers
  57. What exactly is classical communication between quantum processors?
  58. Not even a mention of improving connectivity between qubits within a chip or within a quantum processor
  59. Is this a Tower of Babel, too complex and with too many moving parts?
  60. Rising complexity — need simplicity, eventually
  61. What is Qiskit Runtime?
  62. What are Qiskit Runtime Primitives all about?
  63. What is Quantum Serverless?
  64. A little confusion between Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives
  65. A little confusion between Frictionless Development and Quantum Serverless
  66. IBM’s commitment to double Quantum Volume (QV) each year
  67. Will Quantum Volume double every year?
  68. Will anything on the roadmap make a significant difference to the average quantum algorithm designer or quantum application developer in the near term? Not really
  69. It’s not on the roadmap, but we really need a processor with 48 fully-connected near-perfect qubits
  70. No significant detail on logical qubits and quantum error correction
  71. No explanation for error suppression
  72. Physical qubit fidelity is a necessary base even for full quantum error correction, as well as error suppression and mitigation
  73. Error suppression, error mitigation, and even full error correction are not valid substitutes for higher raw physical qubit fidelity
  74. Net qubit fidelity: raw physical qubit fidelity, error suppression, mitigation, correction, and statistical aggregation to determine expectation value
  75. Emphasis on variational methods won’t lead to any dramatic quantum advantage
  76. Premature integration with classical processing
  77. Some day modular design and higher qubit counts will actually matter, but not now and not soon
  78. Nuances of the various approaches to interconnections leads to more complex tooling and burden on developers
  79. Designers of quantum algorithms and developers of quantum applications need frictionless design and development, not more friction
  80. This is still IBM research, not a commercial product engineering team
  81. Risk of premature commercialization
  82. Will the IBM roadmap be enough to avoid a Quantum Winter? Unclear
  83. Need to double down on research — and prototyping and experimentation
  84. Need for development of industry standards
  85. LATE BREAKING: Notes on IBM’s September 14, 2022 Paper on the Future of Quantum Computing (with Superconducting Qubits)
  86. My raw notes from reviewing IBM’s announcement
  87. My original proposal for this topic
  88. Summary and conclusions

My thoughts on IBM’s previous roadmap

For a baseline, you can review my thoughts on IBM’s previous roadmap, written in 2021 on the roadmap from 2020, here:

Many of the concerns from two years ago are already included in this paper since they are still relevant.

Major positive highlights

  1. Thanks to IBM for being this transparent and for such a long time horizon.
  2. Plenty of interesting engineering advances.
  3. Focus on modular quantum systems.
  4. Many more qubits.
  5. Long-range coupler to connect chips through a cryogenic cable around a meter long.
  6. Plenty of interesting software and tool advances.
  7. The new IBM Quantum System Two, with interconnections between systems.

Major negative highlights

  1. Many interesting technical capabilities or metrics which don’t show up on the roadmap. See separate section — Capabilities and metrics that are not mentioned in the IBM roadmap.
  2. Little improvement in qubit fidelity.
  3. No milestones for qubit fidelity or Quantum Volume (QV).
  4. No improvement in qubit connectivity. Within a processor or within a chip.
  5. Too brief — need more detail on each milestone.
  6. Limited transparency — I’m sure IBM has the desired detail in their internal plans.
  7. No indication of when practical quantum computing will be achieved.
  8. No milestones or metrics for degrees of quantum advantage.
  9. No indication of when a commercial product offering will be achieved.

Three pillars to usher in an era of practical quantum computing

IBM characterizes their approach to quantum computing as resting on three pillars:

  1. Robust and scalable quantum hardware.
  2. Cutting-edge quantum software to orchestrate and enable accessible and powerful quantum programs.
  3. A broad global ecosystem of quantum-ready organizations and communities.

Or as they put it in the press release for the roadmap update:

Summary of new hardware

  1. 433-qubit Osprey processor. Previously announced. Coming in just a few months, in 2022.
  2. IBM Quantum System Two overall quantum computer system packaging. Previously announced.
  3. 1,121-qubit Condor processor. Previously announced.
  4. 133-qubit Heron processor. Modular processor. New announcement.
  5. Classical communication link between quantum processors. New announcement.
  6. Quantum communication between modular chips. For modular processors. New announcement.
  7. 408-qubit Crossbill processor. Modular processor. IBM’s first multi-chip processor. New announcement.
  8. 1,386-qubit Flamingo processor. Modular processor. New announcement.
  9. 4,158-qubit Kookaburra processor. New announcement.
  10. One-meter quantum cryogenic communication link between quantum computer systems. New announcement.
  11. Potential for scaling to 10K to 100K qubits using modular processors with classical and quantum communication. New announcement.

Summary of new software capabilities

  1. Preparing for serverless quantum computation.
  2. Quantum Serverless. As IBM puts it: “users can take advantage of quantum resources at scale without having to worry about the intricacies of the hardware — we call this frictionless development — which we hope to achieve with a serverless execution model.”
  3. Intelligent orchestration.
  4. Dynamic circuits.
  5. Circuit knitting.
  6. Threaded primitives.
  7. Error mitigation and suppression techniques.
  8. Qiskit Runtime Primitives. Sampling. Estimation.
  9. Application services.
  10. Prototype software applications.
  11. Circuit libraries.
  12. Preparation for full error correction.

Summary of roadmap milestones by year

These milestones are based on the graphic roadmap supplied by IBM plus milestones mentioned in the video or textual documents of the roadmap.

2022

Hardware:

  1. 433-qubit Osprey processor by end of the year.
  2. Demonstrate a quantum volume of 1024.
  3. Increase speed from 1.4K CLOPS to 10K CLOPS.

Software:

  1. Bring dynamic circuits to the stack. For increased circuit variety and algorithmic complexity.

2023

Hardware:

  1. 1,121-qubit Condor processor.
  2. 133-qubit Heron processor. Support multiple processors — 133 x p, connected with a classical communication link. Classical parallelized quantum computing with multiple Heron processors connected by a single control system.
  3. Quantum volume is expected to at least double to 2048 (11 qubits).

Software:

  1. Frictionless development with quantum workflows built in the cloud.
  2. Prototype software applications.
  3. Quantum Serverless.
  4. Threaded primitives.

2024

Hardware:

  1. 408-qubit Crossbill processor. IBM’s first multi-chip quantum processor.
  2. 462-qubit Flamingo processor.
  3. 1,386-qubit Flamingo multi-chip processor. Three 462-qubit Flamingo processor chips with quantum communication between them.
  4. Quantum volume is expected to at least double to 4096 (12 qubits).

Software:

  1. Call 1K+ qubit services from Cloud API.
  2. Investigate error correction.
  3. Error suppression and mitigation.
  4. Intelligent orchestration.

2025

Hardware:

  1. 4,158-qubit Kookaburra processor. And more qubits.
  2. Quantum volume is expected to at least double to 8192 (13 qubits).

Software:

  1. Quantum software applications. Machine learning, Natural science, Optimization.
  2. Circuit knitting toolbox.

2026 and beyond

Hardware:

  1. Scaling to tens of thousands (10K-100K) of qubits. With classical and quantum communication.
  2. Quantum volume is expected to at least double each year to 16K (14 qubits).

Software:

  1. Circuit libraries.
  2. Error correction.

Roadmap documents and video

IBM posted their updated quantum development roadmap on May 10, 2022 as three documents and a video:

  1. Press release
  2. Web page
  3. Tweet from IBM Research
  4. Tweet from Jay Gambetta
  5. Blog post
  6. Video
  7. HPCwire tech media coverage

From the press release:

  • IBM Unveils New Roadmap to Practical Quantum Computing Era; Plans to Deliver 4,000+ Qubit System
  • Orchestrated by intelligent software, new modular and networked processors to tap strengths of quantum and classical to reach near-term Quantum Advantage
  • Qiskit Runtime to broadly increase accessibility, simplicity, and power of quantum computing for developers
  • Ability to scale, without compromising speed and quality, will lay groundwork for quantum-centric supercomputers
  • Leading Quantum-Safe capabilities to protect today’s enterprise data from ‘harvest now, decrypt later’ attacks
  • May 10, 2022
  • Armonk, N.Y., May 10, 2022 — IBM (NYSE: IBM) today announced the expansion of its roadmap for achieving large-scale, practical quantum computing. This roadmap details plans for new modular architectures and networking that will allow IBM quantum systems to have larger qubit-counts — up to hundreds of thousands of qubits. To enable them with the speed and quality necessary for practical quantum computing, IBM plans to continue building an increasingly intelligent software orchestration layer to efficiently distribute workloads and abstract away infrastructure challenges.
  • https://newsroom.ibm.com/2022-05-10-IBM-Unveils-New-Roadmap-to-Practical-Quantum-Computing-Era-Plans-to-Deliver-4,000-Qubit-System

From the roadmap web page:

  • Our new 2022 Development Roadmap
  • These are our commitments to advance quantum technology between now and 2026.
  • The road to advantage
  • When we previewed the first development roadmap in 2020 we laid out an ambitious timeline for progressing quantum computing over the proceeding years.
  • To date, we have met all of these commitments and it is our belief we will continue to do so. Now our new 2022 development roadmap extends our new vision to 2025. We are excited to share our new breakthroughs with you.
  • https://www.ibm.com/quantum/roadmap

Tweet from IBM Research:

Tweet from Jay Gambetta:

Blog post by Jay Gambetta:

  • Expanding the IBM Quantum roadmap to anticipate the future of quantum-centric supercomputing
  • We are explorers. We’re working to explore the limits of computing, chart the course of a technology that has never been realized, and map how we think these technologies will benefit our clients and solve the world’s biggest challenges. But we can’t simply set out into the unknown. A good explorer needs a map.
  • Two years ago, we issued our first draft of that map to take our first steps: our ambitious three-year plan to develop quantum computing technology, called our development roadmap. Since then, our exploration has revealed new discoveries, gaining us insights that have allowed us to refine that map and travel even further than we’d planned. Today, we’re excited to present to you an update to that map: our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone.
  • Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute. In order to do so, we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.
  • https://research.ibm.com/blog/ibm-quantum-roadmap-2025

Video:

  • IBM Quantum 2022 Updated Development Roadmap
  • Jay Gambetta, IBM Fellow and VP of Quantum Computing, unveils the updated IBM Quantum development roadmap through to 2025.
  • We now believe we have what it takes to scale quantum computers into what we’re calling quantum-centric supercomputers, making it easier than ever for our clients to incorporate quantum capabilities into their respective domains, and access resources with a serverless programming model thanks to Qiskit runtime. In this video, the IBM Quantum team presents 3 new processors demonstrating breakthroughs in scaling by introducing modularity, allowing multi-chip processors, classical parallelization, and quantum parallelization to build larger, more capable systems.
  • https://www.youtube.com/watch?v=0ka20qanWzI

HPCwire tech media coverage:

Previous IBM Quantum hardware roadmap

IBM published their quantum hardware roadmap on September 15, 2020, and their quantum software development and ecosystem roadmap on February 4, 2021.

The IBM quantum hardware roadmap can be found here:

The IBM quantum software development and ecosystem roadmap can be found here:

Scale, quality, and speed as the three essential dimensions of quantum computing performance

IBM measures itself and the performance of its quantum computing systems by three key metrics or dimensions:

  1. Scale. Qubit count. Size.
  2. Quality. Qubit fidelity. Reliable execution of quantum algorithms. Quantum Volume (QV).
  3. Speed. How fast circuits can be executed. CLOPS (Circuit Layer Operations Per Second). How many circuit executions an application can expect each second. Execute more circuit repetitions (shots) per second. Execute a job in less time. Execute more jobs in a given amount of time. System throughput.
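
To make the speed dimension concrete, here is a rough back-of-the-envelope sketch in Python of how CLOPS translates into job time. It simplifies IBM’s actual CLOPS benchmark (which uses a specific set of parameterized circuits) by treating CLOPS as circuit layers executed per second; the job_seconds helper and the numbers are purely illustrative.

    # Rough sketch: wall-clock time for a job, treating CLOPS as
    # "circuit layers executed per second". This is a simplification of
    # IBM's actual CLOPS benchmark, for illustration only.
    def job_seconds(num_circuits: int, shots: int, depth: int, clops: float) -> float:
        layers = num_circuits * shots * depth
        return layers / clops

    # One 100-layer circuit at 10,000 shots:
    print(job_seconds(1, 10_000, 100, 1_400))   # ~714 seconds at 1.4K CLOPS
    print(job_seconds(1, 10_000, 100, 10_000))  # 100 seconds at 10K CLOPS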

Performance = Scale + Quality + Speed

Restating the previous section more simply:

  • Performance = Scale + Quality + Speed

Four major objectives for 2022

As per Jay Gambetta in the roadmap update video, IBM has four major objectives for 2022 for quantum:

  1. Bring dynamic circuits to the stack.
  2. 433-qubit Osprey processor by end of the year.
  3. Demonstrate a quantum volume of 1024.
  4. Increase speed from 1.4K CLOPS to 10K CLOPS.

Jay committed to delivering all four of these objectives by the end of the year.

Achieve Quantum Volume of 1024 this year

As per Jay Gambetta in the roadmap update video, IBM has committed to achieving a Quantum Volume (QV) of 1024 this year.

A Quantum Volume (QV) of 1024 means that you could execute a quantum circuit that uses 10 qubits with a depth of 10 layers of computation for each of those qubits. 2¹⁰ = 1024.

Technically, they say they will demonstrate it, not actually committing that all of their quantum computer systems will have it. Or even whether any of the IBM systems available in their cloud-based service will support it this year.

Whether this will be demonstrated on Osprey, Falcon, Hummingbird, or Eagle is unknown at this time. Or at least IBM hasn’t made any public statements, yet.

My thoughts on Eagle and Osprey

The 127-qubit Eagle quantum processor was announced in the previous roadmap and introduced last November, 2021. I’ve posted my thoughts on it:

The 433-qubit Osprey quantum processor was announced in the previous roadmap, along with the 1,121-qubit Condor quantum processor, but won’t be introduced until later this year. Nonetheless, I reviewed the scant information, as well as the available data on Eagle, and speculated about Osprey:

Osprey is not committed for more than Quantum Volume of 1024

IBM hasn’t explicitly committed to what Quantum Volume (QV) we can expect for Osprey later this year, although we can infer that it won’t be more than 1024 since that is the highest Quantum Volume that IBM has committed for this year. But we have no commitment that Osprey will have a Quantum Volume of 1024, just that some IBM quantum processor will have it, but it might be Falcon. Falcon could realistically achieve Quantum Volume of 1024 since it has already achieved 512.

A Quantum Volume (QV) of 1024 implies a maximum qubit count of 10 qubits for a reliable quantum circuit. Not terribly exciting. It is real progress, but we have a very long way to go to get to even 24-, 32-, 40-, or 48-qubit quantum circuits.

Less than four months until Osprey is formally introduced

At the time I am writing this, it is less than four months until IBM formally introduces the 433-qubit Osprey processor, presumably at their annual Quantum Summit event in late November. At this point, the processor should be nearing completion, or at least its design should be virtually cast in concrete. I would imagine that IBM would want to be running tests and resolving last minute glitches and issues for the final two months, starting, say, in the middle of September.

The new IBM Quantum System Two overall quantum computer system packaging should also be near completion, including the new cryostat.

I can only wonder what aspects of Osprey could change between now and November — or even by September.

I would presume that qubit connectivity is fully baked into the cake at this stage. And I don’t expect any improvement or change in connectivity from Eagle, Hummingbird, and Falcon.

Qubit fidelity is also likely fully baked into the cake, although there is plenty of room for calibration and decisions about how to precisely control the microwave pulses which control and read qubits, so that the final, net qubit fidelity might still be somewhat up in the air even until November, especially as full testing begins in earnest.

The exact final qubit count could be a little different from 433 qubits since there is always the possibility of chip fabrication issues which might render some qubits less than fully satisfactory.

133-qubit Heron is a classical multi-core quantum processor

Some number of 133-qubit Heron quantum processors can be classically interconnected. This is somewhat comparable to a classical processor with multiple cores, each capable of running a complete program, all in parallel. Or as IBM puts it, “classical parallelized quantum computing with multiple Heron processors connected by a single control system.”

In addition, Heron supports limited classical communication between Heron quantum processors.

But, there is no quantum communication between Heron chips. They act as independent quantum processors. Again, analogous to multiple cores in a classical computer.

There are two areas of uncertainty about Heron:

  1. How exactly does the classical communication between quantum processors really work? See a separate section, What exactly is classical communication between quantum processors?
  2. How many Heron quantum processors can be combined in a single quantum computer system? Presumably this will be limited by the capacity of the new IBM Quantum System Two. But what might the limit be? Could it be one or two? Might it always be three? Four? Five or six? Eight? Ten to twelve? Sixteen? Twenty? 32? More?

The roadmap graphic lists Heron as:

  • 133 qubits x p

But with no commentary on what p might be allowed to be.

Actually, the graphic in the roadmap appears to show five Heron chips, with an additional connection, suggesting at least a sixth chip.

Will the 133-qubit Heron processor offer much over the 127-qubit Eagle processor?

The 133-qubit Heron processor and the 127-qubit Eagle processor will offer a comparable number of qubits, so there’s not much advantage on that score.

The main difference between a single Heron processor and Eagle is that Heron may have a significantly higher Quantum Volume (QV). Eagle currently clocks in at only QV of 64. If IBM sticks to its plan of doubling Quantum Volume every year, Heron could have a QV of 2048, which would be a significant improvement — supporting 11-qubit quantum circuits in contrast to the 6-qubit quantum circuits of Eagle.

Generally, the primary advantage of Heron will be for quantum applications and quantum circuits which can exploit multiple Heron processors running in parallel. And optionally utilizing the classical communication link between the Heron processors.

Crossbill will be IBM’s first multi-chip quantum processor

IBM will introduce the 408-qubit Crossbill quantum processor in 2024. It will be composed of three 136-qubit chips with quantum interconnection between the chips — cross-chip couplers.

Technically, it’s possible that there could be variants of Crossbill with more than three chips, but IBM has not indicated such a plan on the roadmap.

Crossbill may be more of an internal engineering milestone rather than offering any features to developers

Although the multi-chip Crossbill quantum processor will be an amazing engineering achievement, there won’t actually be any new features that developers can take advantage of.

Other than maybe improved Quantum Volume (QV), but IBM has not made an explicit commitment on that score, other than the general commitment to double Quantum Volume each year.

The engineering benefit of Crossbill may be to test out and prove multi-chip quantum processors with cross-chip couplers, which is technology needed for the Flamingo and Kookaburra multi-chip processors.

Will the 408-qubit Crossbill offer any advantage over the 433-qubit Osprey?

The 433-qubit Osprey will already offer a comparable number of qubits to the 408-qubit Crossbill, and be available in 2022.

The only possibility of any advantage would be if Crossbill happens to have a higher qubit fidelity and higher Quantum Volume (QV), which is unclear at this stage. It’s possible, but not certain, that Osprey might have a Quantum Volume of 1024 in 2022, while Crossbill might reach a QV of 4096 in 2024 (two doublings later) if IBM consistently meets its target of doubling Quantum Volume every year. If so, Crossbill could run 12-qubit quantum circuits in contrast to the 10-qubit circuits of Osprey.

Flamingo is a modular quantum processor

While a quantum computer system based on the 133-qubit Heron consists of multiple quantum processors, multiple 462-qubit Flamingo quantum processor chips can be connected using quantum communication to act as a single quantum processor.

IBM says it intends to demonstrate connecting three 462-qubit Flamingo chips to form a 1,386-qubit quantum processor.

There is no clarity as to how many Flamingo chips can be connected in a single quantum computer system. Presumably it is limited by the capacity of the IBM Quantum System Two, but there may be other technical limits as well.

Whether it makes sense to have only one or two Flamingo chips in a single system is unclear.

Whether three chips is optimal is unclear.

Whether there is any reason not to link four or six or eight Flamingo chips is unclear as well.

Will the 1,386-qubit Flamingo offer much advantage over the 1,121-qubit Condor?

Granted, the 1,386-qubit Flamingo processor will offer 24% more qubits than the 1,121-qubit Condor processor, but it doesn’t seem likely that quantum applications or quantum algorithms will be able to take advantage of many, if any, of such a large number of qubits anyway, so that’s a dubious advantage at best. I’d personally say that Flamingo and Condor are roughly comparable in terms of qubit count.

The only real advantage that might materialize is if IBM sticks to its plan of doubling Quantum Volume (QV) every year, in which case Flamingo might have a QV of 4096 in 2024 while Condor might have a QV of 2048 in 2023. But, IBM has not made commitments on either of those QV projections of mine. If those QV projections are achieved, Flamingo will be able to handle 12-qubit quantum circuits in contrast to Condor being able to handle only 11-qubit quantum circuits. I’d personally say that Flamingo and Condor are roughly comparable in terms of maximum algorithm size.

So, Flamingo looks poised to offer some advantage over Condor, but not much.

How many chips are in a Kookaburra processor?

The roadmap is a bit unclear whether the 1,386-qubit Kookaburra processor is itself a multi-chip processor. At one point the blog says 1,386 qubits as a multi-chip processor (a la Flamingo), but then it says three Kookaburra chips can be connected into a 4,158-qubit system, implying that 1,386 qubits is a single chip. So which is it?! Maybe they just meant that the 1,386-qubit Kookaburra can be used to compose a multi-chip processor when they said “Kookaburra will be a 1,386 qubit multi-chip processor with a quantum communication link.” Hard to say.

It is clear that at least three 1,386-qubit Kookaburra processors can be connected to form a 4,158-qubit processor.

How many Kookaburra processors can be connected in a single system?

It’s also unclear how many 4,158-qubit Kookaburra processors can be connected into an even larger system.

The roadmap does say 4,158+ qubits, suggesting more than a single Kookaburra processor in the same system.

Beyond 2026… or is it 2026 and Beyond?

The graphic for the roadmap has a final column headed Beyond 2026, but I suspect that is a typo and should be 2026 and Beyond or Beyond 2025.

Error correction and Circuit libraries are listed under Beyond 2026, so does that mean they won’t happen in 2026 or does it leave the door open that they could happen in 2026?

Does Beyond 2026 mean that no new hardware will be introduced in 2026 itself? Does it preclude a Kookaburra-based system with more than 4,158 qubits being introduced in 2026?

Hardware for scaling to 10K-100K qubits

The roadmap does speak of Scaling to 10K-100K qubits with classical and quantum communication for Beyond 2026, but it’s unclear if that’s scaling with some number of 4,158-qubit Kookaburra processors or some other future processor.

It’s also unclear if any of this scaling to 10K and above can be performed using a single IBM Quantum System Two or whether multiple Quantum System Twos must be used with one-meter cryogenic connections between the systems.

At what stage will multiple Quantum System Two systems be linked?

The roadmap video does call for a “Long range coupler to connect chips through a cryogenic cable of around a meter long”, but it’s not clear at what stage this will occur. My notes from the roadmap video suggest that this will be done using Flamingo chips, but don’t indicate when that might happen.

Can Flamingo chips only be connected through this cryogenic cable, or is the cable simply an option?

Will Kookaburra use this cable? Will it be required or optional?

Every processor should have qubit fidelity and Quantum Volume targets in addition to its qubit count

Raw qubit count alone is not a particularly useful metric for judging quantum hardware advances. Qubit fidelity is a very valuable metric, as is Quantum Volume (QV), which gives you an estimate of how many qubits can be used in a quantum algorithm.

So, I would like to see these metrics added to future roadmaps:

  1. Qubit fidelity. Nines of qubit reliability.
  2. Quantum Volume (QV). log2(QV) is the largest number of qubits which can be reliably used in a quantum circuit.

Not all of the quantum processors introduced in a given year will have the same qubit fidelity or Quantum Volume, so this information is needed for each processor.
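
Here is a minimal sketch of how these two metrics could be computed, assuming the conventional definitions: nines of reliability as -log10(1 - fidelity), and log2(QV) as the width of the largest square circuit a processor can run reliably. The helper names are mine, not IBM’s.

    import math

    def nines(fidelity: float) -> float:
        """Nines of reliability: 0.999 -> 3 nines, 0.9999 -> 4 nines."""
        return round(-math.log10(1.0 - fidelity), 3)

    def max_circuit_width(qv: int) -> int:
        """log2(QV): largest square circuit (width = depth) run reliably."""
        return int(math.log2(qv))

    print(nines(0.999))            # 3.0 -- "three nines"
    print(max_circuit_width(512))  # 9 -- e.g. a Falcon at QV 512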

For detail on nines of qubit fidelity, see my paper:

Supply capabilities label for every processor in the roadmap

I have proposed a capabilities label for quantum computers. There are a variety of metrics in addition to qubit count, qubit fidelity, and Quantum Volume (QV).

My proposal was focused on actual quantum computers but can be used for proposed, planned, or projected quantum computers as well.

My proposed label can also be used to specify the capabilities requirements for quantum algorithms and quantum applications — what quantum computing hardware they need.

This would be too much information to display in the graphic diagram for the roadmap, but every processor should have a page for this label.

For more details on my proposal, see my paper:

Unclear if every new processor in a given year will meet the Quantum Volume target of doubling every year

Although IBM has made clear their intention to double Quantum Volume (QV) each year, it’s not at all clear if every quantum processor introduced in a given year will meet that target.

It seems plausible that some processors may do substantially better than that goal while other processors may fall short.

I suspect that IBM will be content if even a single processor meets that doubling objective.

For this reason, I recommend that IBM set a separate Quantum Volume (QV) target for each processor in the roadmap.

Of course, maybe IBM wants to set the target to be the same across all processors, at least as a target even if it differs in actuality.

It might also be that the doubling could be for processors in the same size category, such as:

  1. 133-qubit Heron vs. 127-qubit Eagle.
  2. 1,386-qubit Flamingo vs. 1,121-qubit Condor.
  3. 408-qubit Crossbill vs. 433-qubit Osprey.

Will Osprey, Heron, and Condor necessarily exceed the qubit fidelity and Quantum Volume of the best Falcon from this year?

IBM has been making steady improvements in the qubit fidelity and Quantum Volume (QV) of the 27-qubit Falcon processor. It’s well ahead of even the 65-qubit Hummingbird and the 127-qubit Eagle processors.

There’s absolutely no commitment or forecast as to what the qubit fidelity and Quantum Volume (QV) of Osprey will be when it is introduced later this year, compared to Falcon, Hummingbird, or Eagle.

There’s no clarity as to whether Falcon or Osprey will be the processor which meets IBM’s target of achieving Quantum Volume of 1024 this year. It could be either or both, but it’s hard to say for sure.

Ditto for next year when the 133-qubit Heron and 1,121-qubit Condor processors are introduced. IBM might achieve their annual doubling goal of 2048 with yet another Falcon upgrade, or with Heron or Condor — or both, or maybe all three.

When can Falcon and Hummingbird be retired?

Given the availability of Eagle and the upcoming availability of Osprey, and the interest in driving towards practical quantum computing, it’s curious that the 27-qubit Falcon and 65-qubit Hummingbird are still around. IBM has given no indication when they might be retired.

But, in truth, Falcon is still the workhorse for IBM and is still being actively enhanced. Advances in qubit quality or Quantum Volume (QV) are coming from Falcon, not Eagle or Hummingbird. Falcon still has higher qubit quality than Hummingbird or Eagle.

Hummingbird has roughly comparable qubit quality to Eagle, so it doesn’t seem that Hummingbird is needed any longer. But since Eagle itself has mediocre qubit quality this doesn’t say much.

Maybe there is some hope that Osprey will finally match the qubit quality of Falcon, and eclipse all three processors, Falcon, Hummingbird, and Eagle.

Does Hummingbird have any value now that Eagle is available?

This is an interesting question — what are the relative merits of the 65-qubit Hummingbird processor compared to the 127-qubit Eagle processor? The Quantum Volume (QV) of both processors is roughly comparable and not better than for the 27-qubit Falcon processor, so Hummingbird would seem to be obsolete and no longer filling any significant need.

But IBM still has it listed in the roadmap, still lists it on their system dashboard (as Exploratory), hasn’t given any indication of it being obsolete, and hasn’t even mentioned it recently, so its relevance and future are rather unclear.

When will IBM have a processor with better qubit quality than Falcon?

Continuing on the theme from the preceding section, qubit fidelity hasn’t been a priority for newer processors since Falcon. Even the supposedly game-changing Eagle is unable to match the qubit quality of Falcon. So, the question is when IBM will introduce a new quantum processor which actually has better qubit quality than Falcon.

It remains to be seen whether it will be Osprey, Condor, Heron, Crossbill, or Flamingo which finally eclipses the qubit quality of Falcon.

Are all of the new processors NISQ devices?

A quantum computer (processor) is a NISQ device if it meets two criteria:

  1. Noisy qubits. Errors are fairly frequent.
  2. Intermediate scale. 50 to hundreds of qubits.

These criteria were laid out by Prof. John Preskill:

  • Quantum Computing in the NISQ era and beyond
  • For this talk, I needed a name to describe this impending new era, so I made up a word: NISQ. This stands for Noisy Intermediate-Scale Quantum. Here “intermediate scale” refers to the size of quantum computers which will be available in the next few years, with a number of qubits ranging from 50 to a few hundred. 50 qubits is a significant milestone, because that’s beyond what can be simulated by brute force using the most powerful existing digital supercomputers. “Noisy” emphasizes that we’ll have imperfect control over those qubits; the noise will place serious limitations on what quantum devices can achieve in the near term.
  • https://arxiv.org/abs/1801.00862

All of the new IBM quantum processors have noisy qubits, so that meets the first criterion.

And all of the new IBM processors have more than 50 qubits, so they meet the second criterion as well.

Oops… actually, there are two parts to the second criterion — 50 or more qubits, and limited to a few hundred qubits. The 133-qubit Heron, the 408-qubit Crossbill, and the 433-qubit Osprey would qualify by the few hundred qubits criterion, so they would be NISQ devices.

But the 1,121-qubit Condor, the 1,386-qubit Flamingo, and the 4,158-qubit Kookaburra would not qualify as NISQ devices since they have more than a few hundred qubits.

In my own nomenclature, I would classify those larger devices (more than a few hundred qubits) as Noisy Large-Scale Quantum devices, or NLSQ.

In fact, except for the 27-qubit Falcon processor and the older, smaller processors, all of IBM’s current main production processors, including the 65-qubit Hummingbird and the 127-qubit Eagle, qualify as intermediate scale — and noisy.

But to be clear, the 27-qubit Falcon is not technically a NISQ device.

In my own nomenclature, I classify the 27-qubit Falcon as a Noisy Small-Scale Quantum device, or NSSQ.

All of that said, I have to acknowledge that it is quite common for most people to say that all current quantum computers are NISQ devices since they have noisy qubits, regardless of how many qubits they have, and only a few current systems have 50 or more qubits.

For details on my nomenclature, including both qubit count and qubit fidelity, see my paper:

Intelligent software orchestration layer

My apologies if this is a little vague, but this is as much as I could glean from the IBM announcement documents and video about the intelligent software orchestration layer.

  1. Efficiently distribute workloads.
  2. Abstract away infrastructure challenges.
  3. Be able to deploy workflows seamlessly across both quantum and classical resources at scale.
  4. Powerful paradigm to enable flexible quantum/classical resource combinations without requiring developers to be infrastructure experts.
  5. Stitch quantum and classical data streams together into an overall workflow.

Serverless programming model to allow quantum and classical processors to work together frictionlessly

IBM will introduce a serverless programming model, Quantum Serverless, to allow quantum and classical processors to work together frictionlessly.

This involves additional functions for Qiskit Runtime to facilitate greater interaction between classical application code and quantum algorithms.
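
For flavor, here is a minimal sketch of running a circuit through the Qiskit Runtime Sampler primitive, roughly as the qiskit-ibm-runtime package looked in 2022. The primitives API has changed across releases, so treat the exact names and signatures as illustrative rather than definitive, and the backend name as a placeholder.

    from qiskit import QuantumCircuit
    from qiskit_ibm_runtime import QiskitRuntimeService, Session, Sampler

    service = QiskitRuntimeService()  # uses a previously saved IBM Quantum account

    # A simple Bell-state circuit.
    bell = QuantumCircuit(2)
    bell.h(0)
    bell.cx(0, 1)
    bell.measure_all()

    # The Sampler primitive runs inside a session; no manual job plumbing.
    with Session(service=service, backend="ibmq_qasm_simulator") as session:
        sampler = Sampler(session=session)
        result = sampler.run(bell, shots=1000).result()
        print(result.quasi_dists[0])  # e.g. {0: 0.51, 3: 0.49}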

Capabilities and metrics that are not mentioned in the IBM roadmap

  1. No indication of what functional advantages might come from larger numbers of qubits.
  2. No mention of whether or when quantum networking will be supported. Other than one-meter cryogenic cable between adjacent cryostats — which isn’t listed on the roadmap graphic, but briefly mentioned in the video.
  3. No mention of raw qubit quality per se. The blog post says only that “we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.” Previously, IBM had committed to doubling Quantum Volume each year, effectively adding a single qubit to algorithm size. They do talk about error suppression, mitigation, and correction, but not about physical qubit fidelity. Although the new Heron processor has a new hardware design with “Completely redesigned gates, New tunable couplers that allow fast gates, While simultaneously limiting crosstalk”, which has some potential for improved qubit fidelity — but IBM didn’t explicitly say that or commit to it.
  4. No roadmap for milestones for nines of qubit fidelity.
  5. No milestones for achievement of near-perfect qubits.
  6. No roadmap milestones for qubit measurement fidelity.
  7. No mention of improving connectivity between qubits within a chip or within a quantum processor. Focus on inter-chip and inter-processor connectivity for modularity.
  8. No recognition of the need to support large quantum Fourier transforms.
  9. No milestones for increase in coherence time.
  10. No milestones for decrease in gate execution time.
  11. No milestones for maximum circuit size. Or maximum size for each processor in the roadmap.
  12. No milestones for when larger algorithms — like using 40 qubits — will become possible.
  13. No definition or metrics or milestones for quantum advantage. When might truly significant or mind-boggling dramatic quantum advantage be achieved? Will IBM achieve even minimal quantum advantage by the end of their hardware roadmap (2026)? Be clear about the metric to be measured and achieved.
  14. No clarity as to what exactly is meant by software milestones to improve error suppression and mitigation.
  15. No Falcon or Eagle enhancements are noted. Need for Super-Falcon, Super-Hummingbird, and Super-Eagle, or even a 48-qubit quantum processor with higher qubit fidelity and improved qubit connectivity.
  16. Osprey isn’t promising more than just more qubits, with no suggestion that they will be higher-quality qubits or with any better connectivity.
  17. No milestones for finer granularity of phase and probability amplitude. Needed for larger quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  18. No milestones for size supported for both quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  19. No milestones for when quantum chemists (among others) will be able to rely on quantum Fourier transform (QFT) and quantum phase estimation (QPE).
  20. When might The ENIAC Moment be achieved? First production-scale practical real-world application.
  21. No milestones for what applications or types or categories of applications might be enabled in terms of support for production-scale data at each technical milestone. Starting with The ENIAC Moment.
  22. No milestones for configurable packaged quantum solutions.
  23. No milestones for Quantum Volume. IBM has previously stated that their intention is to double Quantum Volume every year. And in the roadmap video Jay stated the intention to demonstrate Quantum Volume of 1024 by the end of this year, but no hint of which processors would support the improved Quantum Volume — Osprey or Falcon?
  24. No milestone for replacement of the Quantum Volume metric. Since it only works to 2⁵⁰ or so, or maybe only 2⁴⁰ or 2³² — largest classical simulation.
  25. No indication of focus on rich collection of algorithmic building blocks.
  26. No indication of focus on rich collection of design patterns.
  27. No milestones for supporting a higher-level programming model.
  28. No milestones for supporting a quantum-native programming language. For quantum algorithms.
  29. No milestone for when full quantum error correction (QEC) will be achieved.
  30. When might The FORTRAN Moment be achieved? Need higher-level programming model, quantum-native programming language, and full quantum error correction.
  31. No milestones for how many bits Shor’s algorithm can handle at each stage of the roadmap. When could they even factor six bits (factor 35 = 5 x 7, 39 = 3 x 13, 55 = 5 x 11, 57 = 3 x 19) or seven bits (factor 69 = 3 x 23, 77 = 7 x 11, 87 = 3 x 29, 91 = 7 x 13) or eight bits (133 = 7 x 19, 143 = 11 x 13, 187 = 11 x 17, 221 = 13 x 17, 247 = 13 x 19)? Need quantum Fourier transform for 12 to 16 bits. (See the sketch after this list.)
  32. No mention of simulator roadmap. Qubit capacity — push beyond 32, to 36, 40, 44, and even 48 qubits. Performance. Maximum circuit size. Maximum quantum states. Quantum Volume (QV) capacity. Or debugging. Or configuring connectivity, noise, and errors to match real hardware, current and projected.
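
As a sanity check on the factoring examples in item 31, here is a small sketch that enumerates the n-bit odd semiprimes — the kind of numbers a Shor’s-algorithm milestone could be stated against. Plain trial division; nothing quantum about it.

    def is_prime(n: int) -> bool:
        """Trial-division primality test; fine for small n."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def semiprimes(bits: int) -> list[int]:
        """All n-bit numbers that are products of two odd primes."""
        lo, hi = 1 << (bits - 1), (1 << bits) - 1
        return [n for n in range(lo, hi + 1)
                if any(n % p == 0 and is_prime(n // p)
                       for p in range(3, int(n ** 0.5) + 1) if is_prime(p))]

    print(semiprimes(6))  # [33, 35, 39, 49, 51, 55, 57]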

Additional needs not covered by the IBM roadmap

  1. Need for debugging capabilities.
  2. Need for testing capabilities.
  3. Need for dramatic improvements in documentation and technical specifications at each milestone.
  4. Need a full Principles of Operation manual for every quantum processor.
  5. When will IBM offer production-scale quantum computing as a commercial product or service? No longer a mere laboratory curiosity, suitable only for the most elite technical teams and the lunatic fringe.
  6. Need for configurable packaged quantum solutions. The next level up from quantum applications, where IBM’s roadmap ends.
  7. Need for development of industry standards. Although it may be a little too soon since there is so much innovation going on and no real stability that could be standardized.

Critical needs for quantum computing

I see that there are four essential, critical needs for quantum computing:

  1. Moderate number of qubits. Not a lot, just enough.
  2. High fidelity for qubits. Don’t need full quantum error correction, but a fairly high level of reliability of raw physical qubits. Near-perfect qubits.
  3. Reasonable connectivity for qubits. Essential for sophisticated techniques such as quantum Fourier transform (QFT). Really do need full any-to-any connectivity.
  4. Sufficiently fine granularity of phase and probability amplitude to support quantum Fourier transform for 20 bits. Ditto — essential for sophisticated techniques such as quantum Fourier transform (QFT).

Unfortunately, IBM is going overboard on qubit count, but falling short on the remaining critical needs.
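
To make the fourth need concrete: in the standard quantum Fourier transform circuit, the k-th controlled-phase rotation has an angle of 2π / 2^k, so a 20-bit QFT demands phase granularity on the order of microradians. A quick sketch of the arithmetic:

    import math

    # Smallest controlled-phase rotation angle in a k-bit QFT: 2*pi / 2**k.
    for k in (10, 16, 20):
        print(k, 2 * math.pi / 2 ** k)
    # 10 -> ~6.1e-03 radians
    # 16 -> ~9.6e-05 radians
    # 20 -> ~6.0e-06 radians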

Three distinct developer personas: kernel developers, algorithm developers, and model developers

In IBM’s approach, there are three distinct personas of developers, each with their own abilities, interests and needs, each requiring a distinct level of support:

  1. Kernel developers. Concerned with the details of constructing and executing quantum circuits, at the gate level. I would call these quantum algorithm designers.
  2. Algorithm developers. Concerned with how to build quantum applications using quantum algorithms. I would call these quantum application developers.
  3. Model developers. Concerned with how to apply quantum applications to solve high-level application problems. I would call these subject-matter experts or solution experts or solution specialists, and they should be working at the level I have proposed for configurable packaged quantum solutions.

Model is an ambiguous term — generic design vs. high-level application

The term model gets used ambiguously by IBM:

  1. Generic design or approach. A model for how to do things, or a design for a solution to a problem.
  2. A high-level application. Not just any piece of software, but software that is focused on a specific end-user problem or need. Used by a subject-matter expert.

Model developers — developing high-level applications

Just to highlight and emphasize the main focus of model developers:

They’re not developing just any software or application, but high-level applications, where subject-matter expertise is critical. Such as a speciality in machine learning, natural science, or optimization.

Models seem roughly comparable to my configurable packaged quantum solutions

Separately I have written about my proposal for configurable packaged quantum solutions which would enable subject-matter experts to work in terms that make sense to them, not in the terms of quantum mechanics or either quantum or classical computing. This is not an exact match for IBM’s conception of model developers, but is at least in the right ballpark.

In truth, my conception of configurable packaged quantum solutions is an entire level higher than that of IBM’s model developers who actually do still need to be application developers. In my conception, the users, subject-matter experts, are able to configure the software without doing any software development, no classical application coding and no need to design quantum algorithms.

In my conception, an IBM model would be what I call a quantum solution, or maybe even a packaged quantum solution in some cases.

The whole point of my conception of configurable packaged quantum solutions is that the user can perform configuration without needing to worry about code or algorithms.

For more on my conception of configurable packaged quantum solutions, see my paper:

Tens of thousands of qubits

The roadmap mentions growing to support tens of thousands of qubits.

The roadmap diagram:

No hint of a timeframe.

Hundreds of thousands of qubits

While elsewhere IBM indicates growing to support tens of thousands of qubits, in a couple of places they refer to hundreds of thousands of qubits.

In the press release:

In the roadmap:

Again, no hint of a timeframe.

Misguided to focus so heavily on more qubits since people have been unable to use even 53, 65, or 127 qubits effectively so far

It’s still rare to encounter quantum algorithms for practical real-world applications using more than a handful of qubits. Maybe occasionally 10–12. Rarely even 16. Only a rare few using more than 16 qubits. I’ve seen one algorithm using 21 qubits, and another using 23 qubits. And that’s about it.

With a Quantum Volume (QV) of no more than 64, 128, or 256, and now maybe 512 — that’s the capability of reliably using 6, 7, 8, or 9 qubits in a quantum algorithm. For IBM, that means that the majority of the 27 qubits on a Falcon processor aren’t being utilized.

If even the 27 qubits of Falcon can’t effectively be used, it’s no surprise that the 53 or 65 qubits of Hummingbird, or the 127 qubits of Eagle can’t be effectively utilized.

Even if the 433-qubit Osprey can achieve a Quantum Volume (QV) of 1024 later this year, that’s using only 10 qubits — out of 433.

There’s no guarantee that QV 1024 will be achieved on Osprey — IBM committed to QV 1024 this year, but didn’t say which processor it would be on, so it could be on Falcon or even Eagle rather than Osprey.

So it’s completely baffling that IBM would be focusing so much attention on scale (more qubits) when that is far from being the gating factor for making progress to practical quantum computing.

We might want to consider a QV/n ratio — the percentage of qubits which can effectively be used compared to the total count of physical qubits in the processor. Actually, it should be log2(QV)/n:

  • QV 1024 = 10 qubits, log2(QV)/n = 10/433 = 2.31%
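
Here is a minimal sketch of this log2(QV)/n ratio across a few processors. Only Falcon’s QV 512 is an actual reported IBM result; the Osprey and Condor QV values are speculative, assuming IBM’s doubling target holds.

    import math

    # (processor, physical qubits, Quantum Volume) -- QV values beyond
    # Falcon's 512 are speculative, assuming QV doubles every year.
    processors = [
        ("Falcon (2022)", 27, 512),
        ("Osprey (2022 target)", 433, 1024),
        ("Condor (2023, speculative)", 1121, 2048),
    ]

    for name, n, qv in processors:
        usable = int(math.log2(qv))  # largest reliable circuit width
        print(f"{name}: log2(QV)/n = {usable}/{n} = {usable / n:.2%}")
    # Falcon: 9/27 = 33.33%, Osprey: 10/433 = 2.31%, Condor: 11/1121 = 0.98%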

Separately, I’ve pondered why we aren’t seeing 40-qubit algorithms even though we have real hardware with more than 40 qubits and simulators for up to 40 qubits. See my paper:

IBM has not provided a justification for the excessive focus on qubit count over qubit fidelity and qubit connectivity (scale over quality)

What’s missing from the roadmap is any simple statement which offers a justification for why IBM is focusing so heavily on increasing qubit count with a priority over increasing qubit fidelity and qubit connectivity (within the chip and processor) — prioritizing scale over quality.

I’m sure they must have their reasons, but they just aren’t telling us.

Some possibilities:

  1. Scaling is easier. Took less time.
  2. Quality is harder. Will take more time.
  3. Gave quality a higher priority, but research efforts didn’t pan out.
  4. Blindsided. Got the mistaken impression that boosting qubit quality was a piece of cake.
  5. Unspoken priority and intent to ramp up quantum error correction and logical qubits. Need even more physical qubits to get enough logical qubits for a practical quantum computer. Belief that quantum error correction is the best and fastest path to high qubit fidelity.
  6. Quantum error correction (QEC) is much harder than expected. They may have thought they would have QEC done by now or coming real soon, like within the next two years.
  7. Misguided faith in NISQ. Too many people and too much hype that amazing algorithms are possible even with noisy NISQ qubits. So where are all of the 40-qubit algorithms?
  8. Other. Plenty of reasons I haven’t thought of.

What do we need all of these qubits for?

IBM is intent on giving us all of these qubits, but to what end? This is a good question, an open question. We can speculate, but it would have been better if IBM had been upfront and clear as to their motivation.

Quantum error correction (QEC) is one obvious use case for very large numbers of qubits, but none of the processors on the roadmap over the next 2–3 years would support any useful number of logical qubits. And even IBM doesn’t mention error correction in the roadmap until at least a year after the 4,158-qubit Kookaburra — no sooner than 2026, under Beyond 2026. So, these hundreds and thousands of qubits aren’t needed for quantum error correction over the next few years.

The only need I can identify is not raw qubits per se, but having multiple processors to enable parallel execution of circuits for two use cases:

  1. Circuit repetitions (or shots). The combination of the probabilistic nature of quantum computing and a bothersome error rate make it necessary to execute each circuit some number of times so that a statistical distribution can be constructed to endeavor to determine the expectation value for a quantum computation. The more parallel processors the better. Typical shot counts could be 100, 1,000, 10,000, 25,000 or more.
  2. Parallel execution of multiple quantum algorithms in the same quantum application. Different quantum algorithms, or maybe the same quantum algorithm with different input data or parameters. And each quantum algorithm likely requires many shots as well. So the more parallel processors the better.

In short, the critical need is not lots of qubits, but lots of quantum processors, each with a modest to moderate number of qubits.
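
As a concrete illustration of shot counts, here is a minimal Qiskit sketch using the 2022-era API; the Bell circuit and the 10,000-shot count are hypothetical placeholders:

```python
# One circuit, many repetitions; the statistical distribution of outcomes
# is the real output of the computation.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("aer_simulator")
counts = execute(qc, backend, shots=10000).result().get_counts()
print(counts)  # e.g. {'00': ~5000, '11': ~5000}
```

With p parallel processors, those 10,000 repetitions could in principle complete in roughly 1/p of the wall-clock time.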

We need lots of processors, not lots of qubits

Just to highlight that last point from the previous section, having a single quantum processor with lots of qubits is not terribly useful at this stage. But shots (circuit repetitions) are a very pressing need even today and certainly in the coming years as we push towards production-scale practical quantum computing.

We need to be planning for quantum computer systems with hundreds or even thousands of smaller quantum processors even to run just a single, modest quantum circuit.

We need lots of processors for circuit repetitions for large shot counts

Just to reiterate and highlight a point from the preceding sections: having lots of quantum processors would allow the same circuit to be run many times in parallel to perform circuit repetitions when the shot count is non-trivial — thousands or tens of thousands of repetitions. Or even when it is trivial — tens or hundreds.

Circuit repetitions, needed to account for the probabilistic nature of quantum computing as well as the noisiness of qubits causing errors, can dramatically reduce any quantum advantage a quantum application could have over a classical application.

Running multiple instances of the same circuit in parallel can then offset that reduction, boosting the net quantum advantage.

For some discussion of circuit repetitions, see my paper:

Modular processors needed for quantum knitting of larger quantum circuits

Limitations on qubit fidelity and qubit connectivity will plague quantum computing for the next few years. Circuit knitting is one approach to dealing with larger quantum circuits which cannot be accommodated on a single quantum processor due to qubit fidelity and qubit connectivity issues.

Even if you do have a 400-qubit quantum processor, it might not have the qubit fidelity and qubit connectivity to run a 400-qubit circuit correctly. Instead, you may be able to partition the 400-qubit algorithm into four 100-qubit chunks, run each 100-qubit chunk on a 100-qubit processor — in parallel — and then the intermediate results could be knit together to approximate running the full 400-qubit quantum circuit. At least that’s the theory.

Two approaches to circuit knitting

The roadmap update discusses circuit knitting primarily in terms of running a larger quantum circuit on multiple smaller quantum processors, but from a bigger-picture perspective there are two distinct use cases for circuit knitting:

  1. Classical simulation of larger circuits. For a quantum circuit larger than the largest quantum circuit that can be simulated. Break up the quantum circuit into smaller quantum circuits which can be simulated separately, and then knit together the results from the separate simulations.
  2. Multi-processor quantum computer system with classical communication between the processors. Such as the multiple 133-qubit Heron quantum processors connected with classical communication. A similar break-up of the larger quantum circuit into smaller quantum circuits which can each be run on a separate processor — in parallel — and then knit the results. In some cases the intermediate results can be directly communicated to the other processors, but in some cases the knitting must be performed using classical software after circuit execution has completed.

Using classical communication for circuit knitting with multiple, parallel quantum processors

Just to highlight and emphasize the point from the preceding section: classical communication between multiple, parallel quantum processors can facilitate execution of a larger circuit than can execute on a single processor.

Paper on simulating larger quantum circuits on smaller quantum computers

IBM has a blog post and technical paper on simulating larger quantum circuits on smaller quantum computers.

The general concept is that circuit knitting will allow larger problems to be run on classically parallelized quantum processors.

So these are quantum processors, running in parallel, in a classical sense. There is no quantum benefit to this parallelism — n quantum processors running in parallel classically processes n times as much information, not 2^n as would be expected with quantum parallelism.
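
To put rough numbers on that distinction (purely illustrative):

```python
# Classical parallelism scales linearly with processor count; quantum
# parallelism scales exponentially with qubit count.
n = 100
print(n)       # 100x throughput from 100 classically parallel processors
print(2 ** n)  # ~1.27e30: what quantum parallelism over 100 qubits implies
```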

The blog post:

  • At what cost can we simulate large quantum circuits on small quantum computers?
  • One major challenge of near-term quantum computation is the limited number of available qubits. Suppose we want to run a circuit consisting of 400 qubits, but we only have 100-qubit devices available. What do we do?
  • Over the course of the past year, the IBM Quantum team has begun researching a host of computational methods called circuit knitting. Circuit knitting techniques allow us to partition large quantum circuits into subcircuits that fit on smaller devices, incorporating classical simulation to “knit” together the results to achieve the target answer. The cost is a simulation overhead that scales exponentially in the number of knitted gates.
  • https://research.ibm.com/blog/circuit-knitting-with-classical-communication

The detailed technical paper behind that post is available on the arXiv preprint server:

  • Circuit knitting with classical communication
  • Christophe Piveteau, David Sutter
  • The scarcity of qubits is a major obstacle to the practical usage of quantum computers in the near future. To circumvent this problem, various circuit knitting techniques have been developed to partition large quantum circuits into subcircuits that fit on smaller devices, at the cost of a simulation overhead. In this work, we study a particular method of circuit knitting based on quasiprobability simulation of nonlocal gates with operations that act locally on the subcircuits. We investigate whether classical communication between these local quantum computers can help. We provide a positive answer by showing that for circuits containing n nonlocal CNOT gates connecting two circuit parts, the simulation overhead can be reduced from O(9^n) to O(4^n) if one allows for classical information exchange. Similar improvements can be obtained for general Clifford gates and, at least in a restricted form, for other gates such as controlled rotation gates.
  • https://arxiv.org/abs/2205.00016
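
The practical consequence of that O(9^n) versus O(4^n) result is easy to see numerically; a quick sketch, where n is the number of nonlocal CNOT gates cut:

```python
# Sampling overhead of circuit knitting per Piveteau & Sutter: O(9^n)
# without classical communication, O(4^n) with it.
for n in (1, 2, 4, 8):
    print(f"n={n}: 9^n={9 ** n:,}  4^n={4 ** n:,}")
# Even with the improvement, the overhead is exponential, so only a
# handful of cross-partition gates can be knitted in practice.
```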

What exactly is classical communication between quantum processors?

Unfortunately, it’s not quite clear what classical communication between quantum processors really means. Presumably only strictly classical binary 0s and 1s can be transferred between the quantum processors. This then raises some questions:

  1. Is the quantum state of the transferred qubits on the source quantum processor collapsed as in traditional qubit measurement?
  2. How are qubits selected to be transferred?
  3. How is the classical information transferred? Presumably some sort of bus, but what exactly is that?
  4. How exactly does the incoming classical information affect the state of any qubits on the destination processor? Is an actual quantum logic gate executed? If so, what gate? How does the classical bit participate in the gate, if any? Is a destination qubit initialized to the state of the incoming classical bit? Is a destination qubit flipped or not flipped based on the incoming classical bit? Or… what?

In short, how is the classical communication any different from the quantum algorithm explicitly measuring qubits from the source quantum processor, processing the classical bits with classical code, and then classically deciding what quantum logic gates to execute on the destination quantum processor?
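
For comparison, the measure, classically decide, then act pattern is already expressible within a single processor in Qiskit via classically controlled gates. A minimal sketch, with a hypothetical two-qubit circuit; how IBM generalizes this across processors is exactly the open question:

```python
# "Measure, classically decide, then act" on one processor.
from qiskit import ClassicalRegister, QuantumCircuit, QuantumRegister

qr = QuantumRegister(2)
cr = ClassicalRegister(1)
qc = QuantumCircuit(qr, cr)

qc.h(qr[0])
qc.measure(qr[0], cr[0])   # collapses qubit 0 into a classical bit
qc.x(qr[1]).c_if(cr, 1)    # flip qubit 1 only if that classical bit is 1
```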

Maybe some clues are offered in some of the various papers IBM has posted. I’ve only skimmed some of them.

Not even a mention of improving connectivity between qubits within a chip or within a quantum processor

The roadmap offers no mention of improving connectivity between qubits within a chip or within a quantum processor. There is discussion on inter-chip and inter-processor connectivity, but no mention of improving connectivity within the chip or processor.

There is no hint of a quantum bus within any of the chips.

There is discussion of classical bus communication between 133-qubit Heron processors, but that’s classical bits, not quantum state.

The roadmap does discuss chip-to-chip quantum couplers to construct the 408-qubit Crossbill processor.

The 462-qubit Flamingo does use a quantum communication link to connect between processors to allow at least three Flamingos to be connected. Although, the roadmap offers this caveat: “We expect that this link will result in slower and lower-fidelity gates across processors. Our software needs to be aware of this architecture consideration in order for our users to best take advantage of this system.”

And no mention in the roadmap of improving the intra-chip connectivity of Falcon, Hummingbird, or Eagle.

Is this a Tower of Babel, too complex and with too many moving parts?

I have a general concern that this is all getting far too complex, with too many moving parts. Developers need simplicity, with fewer moving parts, not a… Tower of Babel.

Frictionless is supposed to be a goal of IBM’s approach, but I see far too much friction.

I could be wrong, but this is my considered judgment at this stage.

The big risk for IBM is that they continue down this path, introducing much additional complexity, until one day they actually are capable of fully supporting practical quantum computing and enabling production-scale practical real-world quantum applications, and then somebody comes along, copies all the good stuff, and discards all of the rest, the complexity, to produce a much simplified and streamlined product. Kind of like the way the Multics operating system succeeded technically but failed commercially, and then UNIX replicated many of the better, more practical, and simpler features of Multics and was a great success, technically and commercially, while Multics just disappeared.

But for now, all we can do is sit back and watch IBM pursue greater and greater complexity.

Rising complexity — need simplicity, eventually

It may be possible for initial applications of quantum computing to tolerate substantial complexity since the work requires elite technical teams and caters to the lunatic fringe. But that level of complexity will drastically limit expansion of the quantum computing sector.

Even the development and deployment of configurable packaged quantum solutions may be able to tolerate significant complexity, up to some point.

But at some point the rising complexity will just be too much — the Tower of Babel mentioned in the preceding section, where growth and expansion of the quantum computing sector begins to peter out.

This is where the so-called FORTRAN Moment comes to the rescue, dramatically simplifying the task of designing, developing, testing, and deploying quantum algorithms and quantum applications. This would require a dramatically simplified and streamlined programming model, a quantum-native programming language for quantum algorithms, near-perfect qubits, and likely even full quantum error correction (QEC).

Only then can widespread adoption of quantum computing begin in earnest.

What is Qiskit Runtime?

Qiskit Runtime allows the quantum application developer to package classical application code with quantum algorithms and send the combination to an IBM quantum computer system as a job to be executed together. The classical code runs on the classical computer embedded inside the IBM quantum computer system, with fast, direct access to the quantum processor and no network latency between the classical code and the quantum circuit execution.

Qiskit Runtime is especially appropriate for two use cases:

  1. Variational method algorithms. Will execute the same quantum algorithm a significant number of times, with classical optimization between the runs.
  2. Significant number of quantum algorithm invocations. The quantum application uses a lot of quantum algorithms, or needs to invoke some quantum algorithms a number of times.

In both cases, network latency is eliminated between invocations of the quantum algorithms.

What are Qiskit Runtime Primitives all about?

These are simply functions available in Qiskit Runtime which facilitate interaction between a quantum application and a quantum algorithm.

There will likely be additional Qiskit Runtime Primitives in the future, but this is the only specific area addressed at this stage — post-processing the results of a quantum computation.

Initially, only two primitive functions are offered, to facilitate working with the raw results of a quantum algorithm:

  1. Sampler. Quasi-probability distribution.
  2. Estimator. Expectation value.

Details… remain to be disclosed.
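
Still, the local reference versions of these primitives that ship with Qiskit (in qiskit.primitives) give a feel for the intended interfaces. A hedged sketch, noting that call signatures have shifted across Qiskit releases:

```python
# Sampler returns a quasi-probability distribution; Estimator returns an
# expectation value. The Qiskit Runtime versions run remotely but are
# meant to mirror these interfaces.
from qiskit import QuantumCircuit
from qiskit.primitives import Estimator, Sampler
from qiskit.quantum_info import SparsePauliOp

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

measured = bell.copy()
measured.measure_all()
print(Sampler().run(measured).result().quasi_dists[0])
# roughly {0: 0.5, 3: 0.5}, i.e. |00> and |11>

print(Estimator().run(bell, SparsePauliOp("ZZ")).result().values[0])
# close to 1.0 for a Bell state
```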

What is Quantum Serverless?

It’s an odd term since clearly an IBM quantum computer system is a server accessed over the Internet. The essential meaning of Quantum Serverless is that the user can run their quantum workload on a server without having to provision that server specifically for the user’s workload. The user doesn’t need to worry about deployment and infrastructure.

So, the user is using a server, but it’s a shared server, not their own server.

As IBM puts it:

  • … we need to ensure that our users can take advantage of quantum resources at scale without having to worry about the intricacies of the hardware — we call this frictionless development — which we hope to achieve with a serverless execution model.
  • https://research.ibm.com/blog/quantum-serverless-programming

The quantum circuit and application code are shipped to the remote IBM quantum computer system (a shared server) as part of the request to execute the quantum job.

Qiskit Runtime takes care of all of the logistics for executing both the classical application code and the quantum circuit.

Results from the execution of the job can then be streamed back to the main application which submitted the job.

The key motivation for Quantum Serverless is to facilitate rapid iteration, where the application code needs to repeatedly invoke a quantum algorithm in order to complete the entire quantum computation before streaming the processed results back to the main application which submitted the job.

IBM introduced Quantum Serverless back on November 16, 2021:

  1. Introducing Quantum Serverless, a new programming model for leveraging quantum and classical resources
  2. To bring value to our users and clients with our systems we need our programing model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about the deployment and infrastructure. In other words, we need a serverless architecture.
  3. The rate of progress in any field is often dominated by iteration times, or how long it takes to try a new idea in order to discover whether it works. Long iteration times encourage careful behavior and incremental advances, because the cost of making a mistake is high. Fast iterations, meanwhile, unlock the ability to experiment with new ideas and break out of old ways of doing things. Accelerating progress therefore relies on increasing the speed we can iterate. It is time to bring a flexible platform that enables fast iteration to quantum computing.
  4. https://research.ibm.com/blog/quantum-serverless-programming

A little confusion between Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives

IBM hasn’t drawn quite enough of a bright-line distinction between the conceptual meaning of Quantum Serverless, Qiskit Runtime, and Qiskit Runtime Primitives. These terms are being conflated to simultaneously mean the same thing and different parts of the same thing.

In their roadmap web page they say:

  1. Orchestrating quantum and classical
  2. The unique power of quantum computers is their ability to generate non-classical probability distributions at their outputs.
  3. In 2023 we will introduce Quantum Serverless to our stack and provide tools for quantum algorithm developers to sample and estimate properties of these distributions.
  4. These tools will include intelligent orchestration and the Circuit Knitting toolbox. With these powerful tools developers will be able to deploy workflows seamlessly across both quantum and classical resources at scale, without the need for deep infrastructure expertise.
  5. Finally, at the very top of our stack, we plan to work with our partners and wider ecosystems to build application services into software applications, empowering the widest adoption of quantum computing.
  6. https://www.ibm.com/quantum/roadmap

Hopefully my preceding sections clarified the distinctions between these concepts.

For example, the functions of “sample and estimate properties of these distributions” are part of Qiskit Runtime Primitives, not specifically Quantum Serverless.

A little confusion between Frictionless Development and Quantum Serverless

Frictionless development is more about the benefit to developers, while Quantum Serverless is the method by which that benefit is achieved.

And in truth, it is really Qiskit Runtime which is the method, the software capability.

You could think of it as a hierarchy.

  1. Frictionless development. Makes development easier.
  2. Quantum Serverless. Enables frictionless development.
  3. Qiskit Runtime. Enables Quantum Serverless.

And then, Qiskit Runtime Primitives are a software capability (or collection of capabilities) which are enabled by Qiskit Runtime, but considered to be part of… Quantum Serverless.

And if that is still a little too confusing… blame it on IBM.

IBM’s commitment to double Quantum Volume (QV) each year

IBM had previously announced a commitment to double Quantum Volume (QV) each year back in 2019:

To be clear, doubling Quantum Volume increases the size of the largest algorithm which can be reliably executed by just one qubit, since the reliable algorithm size is log2(QV) qubits and log2(2) = 1. For example, a Quantum Volume (QV) of 1024 means that algorithms as large as 10 qubits (log2(1024) = 10) can be reliably executed.

For more on Quantum Volume, see my paper:

Will Quantum Volume double every year?

The roadmap itself doesn’t give any indication of milestones for Quantum Volume.

Verbally, IBM has committed to achieving a Quantum Volume (QV) of 1024 by the end of this year. Or at least they will demonstrate it, but it won’t necessarily be available on all of their supported quantum computer systems.

IBM has previously stated that their intention is to double quantum volume every year. So, if they do in fact achieve QV 1024 this year, the implied milestones would be:

  1. 2022. QV 1024. 10 qubits.
  2. 2023. QV 2048. 11 qubits.
  3. 2024. QV 4096. 12 qubits.
  4. 2025. QV 8192. 13 qubits.
  5. 2026. QV 16K. 14 qubits.
  6. 2027. QV 32K. 15 qubits.
  7. 2028. QV 64K. 16 qubits.
  8. 2029. QV 128K. 17 qubits.
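
The arithmetic behind this list is a simple doubling recurrence; a minimal sketch, assuming one doubling per year from QV 1024 in 2022 (IBM’s stated intent, not a published schedule):

```python
import math

qv = 1024
for year in range(2022, 2030):
    print(year, qv, f"{int(math.log2(qv))} qubits")
    qv *= 2
```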

Despite their intention of doubling Quantum Volume each year, they have already doubled twice this year (to 256 and then to 512) and have publicly committed to another doubling by the end of the year, for a total of three doublings this year alone.

Whether IBM might be able to exceed their committed goal of doubling Quantum Volume each year remains to be seen. And whether this year is an extreme outlier fluke or might be repeated, at least on occasion, in the coming years also remains to be seen.

Personally, I think they need to get more aggressive on the qubit quality front, but this is what they have publicly committed to, so far.

Will anything on the roadmap make a significant difference to the average quantum algorithm designer or quantum application developer in the near term? Not really

There’s a lot of interesting stuff on the updated roadmap, but so much is a few years away, so the question comes up as to whether there is anything on the roadmap that will make a significant difference to the average quantum algorithm designer or quantum application developer in the near term, like the next six months to a year.

The 433-qubit Osprey is the most visible item over the next six months, but it isn’t promising more than just more qubits, with no suggestion of significantly higher-quality qubits, improved connectivity, or anything else.

The roadmap doesn’t suggest any enhancements to Falcon or Eagle.

We do know that IBM has committed to demonstrating Quantum Volume (QV) of 1024 by the end of the year, but we don’t even know which processors that will be on. I lean towards it being Osprey since that’s where the biggest opportunity for significant engineering improvement seems to be, but it could also come from relatively minor tweaks to Falcon.

Personally, I think we need to see a Super-Falcon, Super-Hummingbird, and a Super-Eagle, all with improved qubit fidelity. But, I’m not holding my breath.

I also think we really need to see a determined effort to produce a 48-qubit quantum computer focused on near-perfect qubits and full any to any connectivity. See the next section. But, again, I’m not holding my breath.

It’s not on the roadmap, but we really need a processor with 48 fully-connected near-perfect qubits

A quantum processor with 48 fully-connected near-perfect qubits would enable a 20-bit quantum Fourier transform (QFT) and possibly achieve a significant quantum advantage of performance 1,000,000 X better than a classical processor.
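
For scale, 2^20 = 1,048,576, which is where the 1,000,000 X figure comes from. Constructing the circuit is trivial in Qiskit; executing it faithfully is the hard part:

```python
# A 20-qubit quantum Fourier transform spans 2**20 = 1,048,576 basis
# states. Building it is easy; today's qubit fidelity and connectivity
# are what prevent running it reliably.
from qiskit.circuit.library import QFT

qft20 = QFT(num_qubits=20)
print(qft20.decompose().count_ops())  # dominated by controlled-phase gates
print(2 ** 20)                        # 1,048,576
```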

IBM is not promising any such thing, but one can always hope.

And maybe next year or the year after, this idea might gain traction as people realize that a quantum computer with hundreds or a thousand or more noisy qubits with weak connectivity isn’t terribly useful in terms of practical applications.

For details on my suggestion, see my paper:

For discussion of near-perfect qubits (3–5 nines of qubit fidelity), see my paper:

No significant detail on logical qubits and quantum error correction

IBM briefly mentioned error correction, but provided no detail, and didn’t even mention logical qubits. This leaves us hanging:

  1. No detailed milestones for full quantum error correction (QEC).
  2. No sense of full quantum error correction being a high priority. Error mitigation may be a higher priority.
  3. No hint of physical qubit count per logical qubit. Is it 57 or 65 qubits, as an IBM paper seemed to suggest, or… what?
  4. When will IBM have enough qubits for full quantum error correction? For 1, 2, 5, 8, and 12 logical qubits? Just to get started and prove the concepts.
  5. No detailed milestones for logical qubit counts. Like 1, 5, 8, 12, 16, 20, 24, 28, 32, 48, 64, 80, 96, 128, 256, or more. Google offers milestones. Enough to support production-scale practical real-world quantum applications.
  6. What will the actual functional transition milestones be on the path to logical qubits?
  7. Will there be any residual error for logical qubits or will they be as perfect as classical bits?
  8. Will future machines support only logical qubits or will physical qubit circuits still be supported?

IBM didn’t even mention fault-tolerant quantum computing.

For detail and discussion of logical qubits, quantum error correction, and fault-tolerant quantum computing, including material from IBM, see my paper:

No explanation for error suppression

IBM hasn’t provided us with any explanation of what they mean by error suppression.

Maybe they just mean things like reducing crosstalk.

Maybe they actually mean higher raw physical qubit fidelity.

At this stage, there’s no way for us to know. The roadmap doesn’t offer us any detail.

Physical qubit fidelity is a necessary base even for full quantum error correction, as well as error suppression and mitigation

IBM offers no insight on whether they intend to exert any significant effort to improve raw physical qubit fidelity. And neither Hummingbird nor Eagle showed any significant fidelity improvement either. IBM does talk about quantum error suppression and error mitigation, and eventually full quantum error correction. But in truth, enhancement of raw physical qubit fidelity is a useful and necessary foundation even if those other approaches are used.

Each strategy for coping with physical qubit errors is essentially just a multiplier which enhances qubit reliability, but doesn’t cure qubit unreliability and magically take fidelity to 100%. As such, the base, the raw physical qubit fidelity, is a very critical factor in the ultimate net qubit fidelity.

Even full quantum error correction might provide only two to nine nines of improvement in reliability (reduction in error rate), so getting raw physical qubit fidelity as high as possible is essential for maximizing net qubit fidelity.
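
A back-of-the-envelope illustration, with hypothetical rates:

```python
# Treat each error-handling layer as a multiplier on the error rate.
# The raw physical base dominates the final result. Numbers are
# hypothetical.
raw_error = 1e-3                # three nines of raw physical qubit fidelity
qec_factor = 1e-2               # QEC improving reliability by "two nines"

print(raw_error * qec_factor)   # 1e-5: five nines net

better_raw = 1e-4               # one more nine of raw fidelity...
print(better_raw * qec_factor)  # ...yields 1e-6: one more nine net
```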

Error suppression, error mitigation, and even full error correction are not valid substitutes for higher raw physical qubit fidelity

Restating the previous section a little differently: error suppression, error mitigation, and even full error correction are not valid substitutes for achieving higher raw physical qubit fidelity, since higher raw physical qubit fidelity is the foundation upon which error suppression, error mitigation, and even full error correction are built.

Keep in mind that even full quantum error correction (QEC) does not eliminate all errors — it may improve qubit reliability by two to nine nines. It may indeed dramatically reduce errors, but starting from a lower base of raw physical qubit errors will achieve an even lower final, net error rate even after error correction.

And the higher raw physical qubit fidelity is, the fewer physical qubits will be needed to achieve a given level of full quantum error correction, which also means more logical qubits for a given number of physical qubits. So, boosting raw physical qubit fidelity is a win all around.

Net qubit fidelity: raw physical qubit fidelity, error suppression, mitigation, correction, and statistical aggregation to determine expectation value

Just to tie it all together, the goal, the ultimate metric is the net qubit fidelity, which starts with and builds upon the raw physical qubit fidelity:

  1. Raw physical qubit fidelity.
  2. Error suppression.
  3. Error mitigation.
  4. Full quantum error correction (QEC).
  5. Statistical aggregation of multiple runs (shots) to determine expectation value. Examine the statistical distribution to determine the most common result.

Emphasis on variational methods won’t lead to any dramatic quantum advantage

If quantum Fourier transform (QFT) cannot be used, primarily due to weak qubit fidelity and weak qubit connectivity, one category of alternatives is variational methods. Unfortunately, they are not anywhere near as powerful as quantum Fourier transform.

They work, in a fashion, but don’t really offer a lot of computational power or opportunity for truly dramatic quantum advantage. They might work extremely well for some niche applications, but nobody has discovered any yet. So far, only mediocre results, at best.

And they suffer from difficulties such as so-called barren plateaus which make them problematic to work with.

Mostly such an approach simply confirms that a solution can be implemented on a quantum computer, not that such a solution has any great advantage over classical solutions.

The incremental and iterative nature of a variational method eliminates the potential for any dramatic quantum advantage, even if some more modest fractional quantum advantage might still be possible.

While a quantum Fourier transform might evaluate 2^n possible solutions all at once, a variational method will only evaluate 2^k possible solutions at a time, where k is much smaller than n, and a sequence of attempts for different ranges of 2^k solutions must be attempted iteratively using classical code. So, there is far less than a 2^n computational advantage over classical methods. In fact, the advantage isn’t even 2^k since a sequence of attempts must be made, with a classical optimization step between each of them.

In short, reliance on variational methods will not deliver the full promise of quantum computing, no dramatic quantum advantage. Any quantum advantage of a variational method will be modest at best, a fractional quantum advantage.
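
A minimal sketch of the variational loop in question, using Qiskit’s local reference Estimator and SciPy’s COBYLA optimizer; the ansatz and observable are hypothetical placeholders:

```python
# The variational pattern: a parameterized circuit is executed (with many
# shots) once per optimizer step, with classical optimization in between,
# so the quantum work is gated by classical iteration.
import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import RealAmplitudes
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp

ansatz = RealAmplitudes(num_qubits=4, reps=2)          # toy ansatz
hamiltonian = SparsePauliOp(["ZZII", "IZZI", "IIZZ"])  # toy observable
estimator = Estimator()

def cost(params):
    # One more round of quantum circuit executions per optimizer step.
    return estimator.run([ansatz], [hamiltonian], [params]).result().values[0]

result = minimize(cost, np.zeros(ansatz.num_parameters), method="COBYLA")
print(result.fun, result.nfev)  # nfev counts the quantum round trips
```

Each call to cost is a full quantum job with its own shot count, which is why the iteration count, not 2^n, dominates the runtime.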

Unfortunately, a fair amount of IBM’s hardware architectural improvements in the roadmap seem predicated on the use of variational methods rather than transitioning more rapidly to the more powerful computational techniques of quantum Fourier transform (QFT) and quantum phase estimation (QPE).

For more on variational methods, see the Google tutorial:

For a discussion of dramatic quantum advantage, see my paper:

For a discussion of fractional quantum advantage, see my paper:

Premature integration with classical processing

Integration of quantum and classical processing is an important area to pursue, but I’m not convinced that the technology or timing is ready for much focus on this in terms of a feature that current quantum algorithm designers and quantum application developers can readily make use of.

It’s definitely a good research area, but it’s too soon to commit to anything.

It would make more sense to raise the capabilities of quantum computing much higher first, before pursuing tighter integration with classical processing.

Some day modular design and higher qubit counts will actually matter, but not now and not soon

I appreciate IBM’s interest, willingness, and commitment to modular processor design and higher qubit counts, and some day both will matter very urgently, but that day is not today and won’t be any time soon.

Maybe in three to five years higher qubit counts will become an urgent need. And then modular quantum computer system designs will be key to higher qubit counts.

But right now, much higher qubit fidelity and full any to any qubit connectivity are critical unmet needs. People really need to be able to use quantum Fourier transform (QFT) and quantum phase estimation (QPE), for applications such as quantum computational chemistry, but weak qubit fidelity and weak qubit connectivity preclude that.

Hopefully qubit fidelity and qubit connectivity will have greatly advanced over the next two to three years, leaving users and applications poised to exploit modular quantum computer systems with much higher qubit counts.

Nuances of the various approaches to interconnection lead to more complex tooling and a burden on developers

Having a variety of approaches to connectivity can provide opportunities for more flexible approaches to algorithm design, but it can also have three negative side effects:

  1. More complex tooling is needed. Even if the nuances of distinction between the various approaches to interconnection can in fact be reduced to a set of rules, it can require that tools, particularly compilers and transpilers, be significantly more complicated. That won’t come for free. It will impact somebody.
  2. Impact on algorithm design and application design. Not all of the nuances of interconnection can be reduced to simple rules which can be handled automatically by compilers, transpilers, and other tools. Eventually some of these nuances bubble up and impact the designers of quantum algorithms and circuits, and even the developers of quantum applications.
  3. Efficiency considerations which can’t be fully automated and fully mitigated. The efficiency considerations of the nuances may have a negative impact on performance which can’t be fully automated and fully mitigated, leading to performance degradation or the need for quantum algorithm designers and quantum application developers to jump through hoops to try to avoid the negative impacts, which they may or may not be able to successfully do.

It is not uncommon for hardware designers to presume that software is easy. Software may be easier than hardware, but that doesn’t mean it is free and it doesn’t mean that it is easy.

We need hardware that simplifies the design of quantum algorithms and the development of quantum applications, not hardware that puts an even greater burden on the design of quantum algorithms and the development of quantum applications, or on tool and software infrastructure developers, either.

We need a net reduction of complexity across the board.

And hardware nuances that intrude into the quantum algorithm design process or the quantum algorithm development process can be very insidious. Again, maybe not as hard as hardware, but quantum algorithms and quantum applications are hard enough as it is. Designers of quantum algorithms and developers of quantum applications desperately need frictionless, not more friction.

Designers of quantum algorithms and developers of quantum applications need frictionless design and development, not more friction

Just to highlight and emphasize that last point from the preceding section — that the designers of quantum algorithms and the developers of quantum applications need to be further isolated from nuances of the hardware. They need frictionless design and development, not more friction.

The modular features of the latest roadmap are nominally good, but not if they end up making life more difficult for the designers of quantum algorithms and the developers of quantum applications.

Making life easier in some niches of the design and development of quantum algorithms and quantum applications won’t count for much if the overall process is still fraught with friction in many areas.

This is still IBM research, not a commercial product engineering team

IBM has been doing great research and is proposing to do more great research, which is a good thing, but it also highlights that they are still deep in the pre-commercialization phase of quantum computing, still years from being ready to transition to true commercialization, which requires that all of the research questions and issues have been addressed and resolved.

Yes, they also have some sales efforts going on, but that’s more about keeping interest alive with prospective customers rather than actually selling, delivering, and deploying products and services in the very near term.

And any engineering work while these sales efforts are going on is actually being handled by the research group.

They are all doing great work, but it’s still focused on research — and customers doing prototyping and experimentation — rather than commercial product engineering by a commercial product engineering team.

I wonder what the org chart really looks like!

For more on pre-commercialization, see my paper:

Risk of premature commercialization

As just mentioned, IBM is busy doing research and has lots more research to do before their quantum computing efforts can be turned into commercial products.

A lot of people are chomping at the bit to commercialize quantum computing — the business people at IBM and customers alike — and the research folk as well, but there are risks if a new technology is moved out of the lab and into the field too quickly. Most importantly, there is the grave risk that customers will quickly become disenchanted when they realize that the current technology is not capable of enabling production deployment of production-scale practical real-world quantum applications.

For more on the risks of premature commercialization, see my paper:

Will the IBM roadmap be enough to avoid a Quantum Winter? Unclear

It’s very difficult to say whether IBM’s quantum roadmap will be able to prevent the nascent quantum computing sector from falling into a Quantum Winter — when people grow disenchanted with progress and the available technology, realizing that it’s not ready for production deployment of production-scale practical real-world quantum applications.

There’s a good chance that IBM’s roadmap will successfully avoid a Quantum Winter, but it critically depends on execution against that roadmap, including so many important details which aren’t listed in the roadmap.

Given the present roadmap, it doesn’t appear that qubit fidelity and qubit connectivity will be capable of enabling production deployment of production-scale practical real-world quantum applications two to three years from now, which will be the critical stage which either makes or breaks a potential Quantum Winter.

But… IBM could surprise us and advance both qubit fidelity and qubit connectivity enough to successfully avert a Quantum Winter.

If IBM does introduce a quantum computer system comparable to my proposal for 48 fully-connected near-perfect qubits, then it will be a slam dunk to avoid a Quantum Winter. But it’s not a slam dunk that IBM will do so.

In short, whether the IBM roadmap will be enough to avoid a Quantum Winter is unclear at present.

For more on Quantum Winter, see my paper:

For more on my proposal for a quantum computer with 48 fully-connected near-perfect qubits, see my paper:

Need to double down on research — and prototyping and experimentation

If premature commercialization is a risk, the cure is to double down on pre-commercialization, particularly research, but also prototyping and experimentation.

Doubling down on pre-commercialization will both reduce the risks of premature commercialization and bring pre-commercialization to completion more quickly. A double benefit.

For more detail on this aspect, see my paper:

For more on research that is needed in quantum computing, see my paper:

Need for development of industry standards

In the not too distant future it will be necessary to pursue a stabilization of many of the features of quantum computing, in the form of industry standards.

It may be a little too soon to pursue standardization since there is so much innovation going on and no real stability that could be standardized.

Still, at some point stability and standards need attention.

But maybe not while we’re still deep in pre-commercialization, focused on research, prototyping, and experimentation, where stability is not a valued priority.

LATE BREAKING: Notes on IBM’s September 14, 2022 Paper on the Future of Quantum Computing (with Superconducting Qubits)

Just a month after posting this paper, IBM posted a preprint of a paper on arXiv which should be considered an addendum to the roadmap update they posted in May:

A few weeks later, in early October, I posted my own notes and comments on that paper, which should be viewed as an addendum to my comments on the May roadmap update:

IBM’s September paper for the most part simply reiterates the May roadmap update, albeit with significantly more detail. There are some further refinements as well. My October paper reviews some of the additions, as well as some fresh high-level highlights.

Both of my papers, this one (August) and my October paper should be read to cover all of my thoughts and comments on both the May roadmap update and IBM’s September paper.

My raw notes from reviewing IBM’s announcement

The main reason I include my raw notes here is that I put a lot of work into taking them, and not everything in them made it into the main body of this paper. I didn’t want to lose them, and this seemed to be the best place to preserve them.

Also, some readers might appreciate the raw notes as well as how I later distilled them.

They might also be helpful to others because they are pulled together in one place, rather than scattered across the three separate documents and the audio from the video.

My notes are terse and not intended to be grammatically correct — or even intended to be readable by others. They are raw and unedited in general.

Press release:

  1. IBM Unveils New Roadmap to Practical Quantum Computing Era; Plans to Deliver 4,000+ Qubit System
  2. Orchestrated by intelligent software, new modular and networked processors to tap strengths of quantum and classical to reach near-term Quantum Advantage
  3. Qiskit Runtime to broadly increase accessibility, simplicity, and power of quantum computing for developers
  4. Ability to scale, without compromising speed and quality, will lay groundwork for quantum-centric supercomputers
  5. Leading Quantum-Safe capabilities to protect today’s enterprise data from ‘harvest now, decrypt later’ attacks
  6. May 10, 2022
  7. Armonk, N.Y., May 10, 2022 — IBM (NYSE: IBM) today announced the expansion of its roadmap for achieving large-scale, practical quantum computing. This roadmap details plans for new modular architectures and networking that will allow IBM quantum systems to have larger qubit-counts — up to hundreds of thousands of qubits. To enable them with the speed and quality necessary for practical quantum computing, IBM plans to continue building an increasingly intelligent software orchestration layer to efficiently distribute workloads and abstract away infrastructure challenges.
  8. https://newsroom.ibm.com/2022-05-10-IBM-Unveils-New-Roadmap-to-Practical-Quantum-Computing-Era-Plans-to-Deliver-4,000-Qubit-System

Roadmap to Practical Quantum Computing Era

modular and networked processors

reach near-term Quantum Advantage

lay groundwork for quantum-centric supercomputers

Quantum-Safe capabilities to protect today’s enterprise data from ‘harvest now, decrypt later’ attacks

expansion of its roadmap

large-scale, practical quantum computing

larger qubit-counts — up to hundreds of thousands of qubits

speed and quality necessary for practical quantum computing

intelligent software orchestration layer

  1. Efficiently distribute workloads
  2. Abstract away infrastructure challenges

leverage three pillars

  1. robust and scalable quantum hardware
  2. cutting-edge quantum software to orchestrate and enable accessible and powerful quantum programs
  3. a broad global ecosystem of quantum-ready organizations and communities

get us to the practical quantum computing era

an era of quantum-centric supercomputers that will open up large and powerful computational spaces for our developer community, partners and clients

Qiskit Runtime, IBM’s containerized quantum computing service and programming model

Later this year, IBM expects to continue the previously laid out targets on its roadmap and unveil its 433-qubit processor, IBM Osprey.

goals to build a frictionless development experience with Qiskit Runtime and workflows built right in the cloud

  1. Is frictionless development primarily about Qiskit runtime? Seems so.
  2. Is this serverless as well?? Seems… odd.

IBM intends to introduce IBM Condor, the world’s first universal quantum processor with over 1,000 qubits. [In contrast to D-Wave 2K]

2022: we will add dynamic circuits, which allow for feedback and feedforward of quantum measurements to change or steer the course of future operations. Dynamic circuits extend what the hardware can do by reducing circuit depth, by allowing for alternative models of constructing circuits, and by enabling parity checks of the fundamental operations at the heart of quantum error correction.

achieve the scale, quality, and speed of computing necessary to unlock the promise of quantum technology

combining modular quantum processors with classical infrastructure

let users easily build quantum calculations into their workflows

IBM is targeting three regimes of scalability for its quantum processors — reaching true scalability

  1. classically communicate and parallelize operations across multiple processors. improved error mitigation techniques. intelligent workload orchestration. combining classical compute resources with quantum processors that can extend in size
  2. deploying short-range, chip-level couplers. closely connect multiple chips together to effectively form a single and larger processor and will introduce fundamental modularity that is key to scaling
  3. providing quantum communication links between quantum processors. IBM has proposed quantum communication links to connect clusters together into a larger quantum system

IBM’s 2025 goal: a 4,000+ qubit processor built with multiple clusters of modularly scaled processors.

software milestones to improve error suppression and mitigation. paving the path towards the error-corrected quantum systems of the future

Qiskit Runtime primitives

  1. Earlier this year, IBM launched Qiskit Runtime primitives that encapsulate common quantum hardware queries used in algorithms into easy-to-use interfaces. In 2023, IBM plans to expand these primitives, with capabilities that allow developers to run them on parallelized quantum processors thereby speeding up the user’s application. [What is this really??]
  2. These primitives will fuel IBM’s target to deliver Quantum Serverless into its core software stack in 2023, to enable developers to easily tap into flexible quantum and classical resources. As part of the updated roadmap, Quantum Serverless will also lay the groundwork for core functionality within IBM’s software stack to intelligently trade off and switch between elastic classical and quantum resources; forming the fabric of quantum-centric supercomputing. [Again, what is this really all about??]

IBM Quantum System Two

  1. Will offer the infrastructure needed to successfully link together multiple quantum processors.
  2. A prototype of this system is targeted to be up and running in 2023.
  3. [Is the S Two not available until 2023 or just the multi-processor link?]

Quantum-safe security

  1. cyber resiliency. quantum-safe cryptography
  2. IBM is home to some of the best cryptographic experts globally who have developed quantum-safe schemes that will be able to deliver practical solutions to this problem
  3. IBM is working in close cooperation with its academic and industrial partners, as well as the U.S. National Institute of Standards and Technology (NIST), to bring these schemes to the forefront of data security technologies
  4. IBM is announcing its forthcoming IBM Quantum Safe portfolio of cryptographic technologies and consulting expertise designed to protect clients’ most valuable data in the era of quantum
  5. IBM’s Quantum Safe portfolio

IBM’s Quantum Safe portfolio

  1. Education
  2. Strategic guidance
  3. Risk assessment and discovery
  4. Migration to agile and quantum-safe cryptography. IBM has already implemented agile and quantum-safe cryptography to build z16, IBM’s first quantum-safe mainframe system to employ quantum-safe cryptography

Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice and represent goals and objectives only.

Watch video — Take notes — too short, only 9 seconds, just some of the interconnection schemes.

Blog post by Jay Gambetta:

  • Expanding the IBM Quantum roadmap to anticipate the future of quantum-centric supercomputing
  • We are explorers. We’re working to explore the limits of computing, chart the course of a technology that has never been realized, and map how we think these technologies will benefit our clients and solve the world’s biggest challenges. But we can’t simply set out into the unknown. A good explorer needs a map.
  • Two years ago, we issued our first draft of that map to take our first steps: our ambitious three-year plan to develop quantum computing technology, called our development roadmap. Since then, our exploration has revealed new discoveries, gaining us insights that have allowed us to refine that map and travel even further than we’d planned. Today, we’re excited to present to you an update to that map: our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone.
  • Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute. In order to do so, we need to solve the challenge of scaling quantum processors, develop a runtime environment for providing quantum calculations with increased speed and quality, and introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly.
  • https://research.ibm.com/blog/ibm-quantum-roadmap-2025

Development roadmap

Our plan to weave quantum processors, CPUs, and GPUs into a compute fabric capable of solving problems beyond the scope of classical resources alone

Our goal is to build quantum-centric supercomputers. The quantum-centric supercomputer will incorporate quantum processors, classical processors, quantum communication networks, and classical networks, all working together to completely transform how we compute.

A challenge of near-term quantum computation is the limited number of available qubits. Suppose we want to run a circuit for 400 qubits, but we only have 100 qubit devices available. What do we do? Read about circuit knitting with classical communication.

introduce a serverless programming model to allow quantum and classical processors to work together frictionlessly

Earlier this year, we launched the Qiskit Runtime Services with primitives: pre-built programs that allow algorithm developers easy access to the outputs of quantum computations without requiring intricate understanding of the hardware.

Preparing for serverless quantum computation

Different users have different needs and experiences, and we need to build tools for each persona: kernel developers, algorithm developers, and model developers.

My note… Distinction between kernel and algorithm seems… weird.

  1. AFAICT, their description of algorithm developer is simply the application side code that looks at the raw quantum results and figures out what the final result will be for the application to use.
  2. This is the statistical analysis that I refer to for developing expectation value from circuit repetitions.
  3. And then kernel developer is focused on the actual quantum circuit, which was generated from the application — mapping the logic of the algorithm to specific gates of the circuit, although ultimately a compiler maps the logical circuit to an actual circuit.

Dynamic circuits extend what the hardware can do by reducing circuit depth, by allowing for alternative models of constructing circuits, and by enabling parity checks of the fundamental operations at the heart of quantum error correction.

in 2023, we plan to bring threads to the Qiskit Runtime, allowing us to operate parallelized quantum processors, including automatically distributing work that is trivially parallelizable. Such as Heron x 3

In 2024 and 2025, we’ll introduce error mitigation and suppression techniques into Qiskit Runtime so that users can focus on improving the quality of the results obtained from quantum hardware. These techniques will help lay the groundwork for quantum error correction in the future.

The unique power of quantum computers is their ability to generate non-classical probability distributions at their outputs. Consequently, much of quantum algorithm development is related to sampling from, or estimating properties of these distributions. The primitives are a collection of core functions to easily and efficiently work with these distributions.

Introducing Quantum Serverless, a new programming model for leveraging quantum and classical resources

  1. To bring value to our users and clients with our systems we need our programing model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about the deployment and infrastructure. In other words, we need a serverless architecture.
  2. https://research.ibm.com/blog/quantum-serverless-programming

Typically, algorithm developers require breaking problems into a series of smaller quantum and classical programs, with an orchestration layer to stitch the data streams together into an overall workflow. We call the infrastructure responsible for this stitching Quantum Serverless. To bring value to our users, we need our programing model to fit seamlessly into their workflows, where they can focus on their code and not have to worry about the deployment and infrastructure. We need a serverless architecture. Quantum Serverless centers around enabling flexible quantum-classical resource combinations without requiring developers to be hardware and infrastructure experts, while allocating just those computing resources a developer needs when they need them. In 2023, we plan to integrate Quantum Serverless into our core software stack in order to enable core functionality such as circuit knitting.

What is circuit knitting? Circuit knitting techniques break larger circuits into smaller pieces to run on a quantum computer, and then knit the results back together using a classical computer. entanglement forging

With all of these pieces in place, we’ll soon have quantum computing ready for our model developers

  1. those who develop quantum applications to find solutions to complex problems in their specific domains.
  2. We think by next year, we’ll begin prototyping quantum software applications for specific use cases.
  3. We’ll begin to define these services with our first test case — machine learning — working with partners to accelerate the path toward useful quantum software applications.
  4. By 2025, we think model developers will be able to explore quantum applications in machine learning, optimization, natural sciences, and beyond.

Scaling — hardware, more qubits

We also know that a quantum computer capable of reaching its full potential could require hundreds of thousands, maybe millions of high-quality qubits, so we must figure out how to scale these processors up. [Unclear what that belief is based on — no justification given]

IBM Quantum System Two. Osprey (2022) and Condor (2023)

But we don’t plan to realize large-scale quantum computers on a giant chip. Instead, we’re developing ways to link processors together into a modular system capable of scaling without physics limitations.

Three distinct approaches

  1. 133-qubit Heron (2023) with real-time classical communication between separate processors, enabling the knitting techniques. [Some arbitrary number of Herons linked classically. Not so sure the processors themselves are linked, maybe just shared control logic. May be optimized for doing shots — multiple runs of the same circuit.]
  2. The second approach is to extend the size of quantum processors by enabling multi-chip processors. “Crossbill,” a 408 qubit processor, will be made from three chips connected by chip-to-chip couplers that allow for a continuous realization of the heavy-hex lattices across multiple chips. The goal of this architecture is to make users feel as if they’re using just one, larger processor.
  3. in 2024, we also plan to introduce our third approach: quantum communication between processors to support quantum parallelization. We will introduce the 462-qubit “Flamingo” processor with a built-in quantum communication link, and then release a demonstration of this architecture by linking together at least three Flamingo processors into a 1,386-qubit system. We expect that this link will result in slower and lower-fidelity gates across processors. Our software needs to be aware of this architecture consideration in order for our users to best take advantage of this system.

in 2025, we’ll introduce the “Kookaburra” processor. Kookaburra will be a 1,386 qubit multi-chip processor with a quantum communication link. As a demonstration, we will connect three Kookaburra chips into a 4,158-qubit system connected by quantum communication for our users.

The combination of these technologies — classical parallelization, multi-chip quantum processors, and quantum parallelization — gives us all the ingredients we need to scale our computers to wherever our roadmap takes.

By 2025, we will have effectively removed the main boundaries in the way of scaling quantum processors up with modular quantum hardware and the accompanying control electronics and cryogenic infrastructure.

The quantum-centric supercomputer

  1. Now, IBM is ushering in the age of the quantum-centric supercomputer, where quantum resources — QPUs — will be woven together with CPUs and GPUs into a compute fabric.
  2. We think that the quantum-centric supercomputer will serve as an essential technology for those solving the toughest problems, those doing the most ground-breaking research, and those developing the most cutting-edge technology.
  3. Following our roadmap will require us to solve some incredibly tough engineering and physics problems.
  4. We’ve gotten this far, after all, with the help of our world-leading team of researchers, the IBM Quantum Network, the Qiskit open source community, and our growing community of kernel, algorithm, and model developers.

Video:

  1. IBM Quantum 2022 Updated Development Roadmap
  2. Jay Gambetta, IBM Fellow and VP of Quantum Computing, unveils the updated IBM Quantum development roadmap through to 2025.
  3. We now believe we have what it takes to scale quantum computers into what we’re calling quantum-centric supercomputers, making it easier than ever for our clients to incorporate quantum capabilities into their respective domains, and access resources with a serverless programming model thanks to Qiskit runtime. In this video, the IBM Quantum team presents 3 new processors demonstrating breakthroughs in scaling by introducing modularity, allowing multi-chip processors, classical parallelization, and quantum parallelization to build larger, more capable systems.
  4. https://www.youtube.com/watch?v=0ka20qanWzI

Video — Jay Gambetta, IBM Fellow and VP of Quantum Computing:

3 new processors — demonstrating breakthroughs in scaling

Three goals — objectives:

  1. increase the performance of the processor
  2. develop a better understanding of how to deal with the errors
  3. simplify how a quantum computer is programmed

Plus, we need error mitigation, circuit knitting, error correction, and much more built right into a software stack tightly coupled to hardware

Software stack tightly coupled to hardware

Quantum computing will never replace classical computing

Four major objectives for 2022:

  1. Bring dynamic circuits to the stack
  2. 433-qubit Osprey processor by end of the year
  3. Demonstrate a quantum volume of 1024
  4. Increase speed from 1.4K CLOPS to 10K CLOPS

All by the end of the year.

Performance = Scale + Quality + Speed

  1. Number of qubits
  2. Quantum Volume
  3. CLOPS (Circuit Layer Operations Per Second; see the sketch after this list)
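For a rough sense of what the CLOPS numbers mean, here is a back-of-the-envelope sketch based on my reading of IBM's benchmarking paper, which defines CLOPS as M x K x S x D layer operations divided by elapsed time (M = 100 circuit templates, K = 10 parameter updates per template, S = 100 shots, D = log2 of the Quantum Volume). The elapsed times below are hypothetical, chosen only to show what 1.4K versus 10K CLOPS implies.

```python
# Back-of-the-envelope CLOPS arithmetic, per my reading of IBM's definition:
# CLOPS = M * K * S * D / elapsed_seconds.
import math

M, K, S = 100, 10, 100               # benchmark parameters from IBM's paper
D = int(math.log2(1024))             # 10 QV layers at Quantum Volume 1024

def clops(elapsed_seconds: float) -> float:
    return M * K * S * D / elapsed_seconds

# Hypothetical elapsed times, just to compare 1.4K vs. 10K CLOPS:
print(round(clops(714)))   # ~1400 CLOPS: the benchmark takes ~12 minutes
print(round(clops(100)))   # 10000 CLOPS: the same work in under 2 minutes
```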

Video — Oliver Dial, Quantum Hardware Architect:

Everything on the development roadmap drives scale, quality, or speed

Today, scale and quality

Four developments:

  1. Heron processor in 2023: 133 qubits
  2. Crossbill processor in 2024: 408 qubits
  3. Flamingo processor in 2024: 1,386 qubits
  4. Kookaburra processor in 2025: 4,158 qubits

Heron processor:

2023

133 qubits

Pushes quality to the next level

  1. Completely redesigned gates
  2. New tunable couplers that allow fast gates
  3. While simultaneously limiting crosstalk

This architecture serves as a replacement for fixed-coupler devices and as the basis for the other device announcements.

Control multiple Herons with the same control hardware:

  1. Short-range chip-to-chip couplers
  2. Enabling quantum computing with classical communications
  3. Classically parallelized systems
  4. 133 x p qubits, for p linked Heron processors

Crossbill processor:

2024

408 qubits

But is this based on Heron or a different chip?

First multi-chip processor.

3 chips = 408 / 3 = 136 qubits per chip — but that doesn’t match the 133 qubits of Heron!

Flamingo processor:

  1. Long-range coupler to connect chips through a cryogenic cable around a meter long
  2. Quantum parallelization of quantum processors
  3. But with slower, lower-fidelity gates, since it involves a physical cable
  4. Each chip will have only a few connections to other chips
  5. Demonstrate 1,386-qubit Flamingo in 2024

Kookaburra processor:

By the end of 2025

4,158 qubits

Bring the three developments together in a single system

Quantum parallelism of multi-chip quantum processors

Use short-range chip-to-chip couplers with modular classical I/O and then long-range couplers

Scaling to 10K-100K qubits with classical and quantum communication

Multiple processors, chip to chip couplers, and long-range couplers enable scaling

Video — Blake Johnson, Quantum Platform Lead:

More than one kind of quantum developer. Ecosystem operating at three levels:

  1. Kernel developers. Focus on making quantum circuits run better and faster on real hardware.
  2. Algorithm developers. Use circuits within classical routines, focused on applications that demonstrate quantum advantage.
  3. Model developers. Use applications to find useful solutions to complex problems in a specific domain.

No change in objectives for 2022

Still on track for dynamic circuits (see the dynamic-circuit sketch after this list):

  1. On exploratory systems by May
  2. Reducing circuit depth
  3. Alternative models for algorithms
  4. Parity checks for QEC
  5. 3rd generation control system
  6. OpenQASM3 — circuit description language
  7. OpenQASM3 native compiler
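
For readers unfamiliar with the term, here is a minimal sketch of a dynamic circuit using Qiskit's control-flow builder: a mid-circuit measurement whose outcome conditions a later gate in real time, which is exactly the feedforward needed for parity checks in error correction. The if_test interface shown is from recent Qiskit releases; treat the details as illustrative.

```python
# Minimal dynamic circuit: a mid-circuit measurement conditions a later gate
# in real time (the feedforward pattern behind QEC parity checks).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.measure(0, 0)                      # mid-circuit measurement

with qc.if_test((qc.clbits[0], 1)):   # branch on the measured bit, live
    qc.x(1)                           # flip qubit 1 only if we measured 1

qc.measure(1, 1)
print(qc.draw())
```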

Qiskit Runtime Core Primitives — new computational primitives. For the many new users, these primitives will be the bedrock of their programming experience. Quantum circuits produce non-classical probability distributions at their outputs; the primitives are a collection of core functions for sampling from or estimating properties of these distributions. (A minimal sketch follows the list of the two initial functions below.)

Two initial primitive functions:

  1. Sampler. Returns a quasi-probability distribution over measurement outcomes.
  2. Estimator. Returns the expectation value of an observable for a given circuit.
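
Here is a minimal sketch of the two primitives using Qiskit's reference implementations; the hosted Qiskit Runtime versions have the same shape, but the exact interfaces have shifted across releases, so treat this as illustrative rather than definitive.

```python
# Minimal sketch of the two initial primitives, using Qiskit's reference
# implementations (interfaces have evolved across releases).
from qiskit import QuantumCircuit
from qiskit.primitives import Estimator, Sampler
from qiskit.quantum_info import SparsePauliOp

bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)

# Sampler: quasi-probability distribution over measurement outcomes.
measured = bell.measure_all(inplace=False)
quasi_dist = Sampler().run(measured).result().quasi_dists[0]
print(quasi_dist)                      # ~{0: 0.5, 3: 0.5} for a Bell state

# Estimator: expectation value of an observable for a circuit.
value = Estimator().run(bell, SparsePauliOp("ZZ")).result().values[0]
print(value)                           # ~1.0 for <ZZ> on a Bell state
```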

Enable threaded runtimes (primitives) 2023

Error suppression and mitigation 2024

Quantum serverless 2023

  1. Orchestration level to stitch quantum and classical data streams together
  2. Powerful paradigm to enable flexible quantum/classical resource combinations without requiring developers to be infrastructure experts (see the sketch after this list)
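
To illustrate the intent (not any real API), here is a purely hypothetical sketch: the developer writes ordinary functions for the quantum and classical steps of a hybrid loop and hands placement to the platform. The orchestrated decorator and everything else here are invented stand-ins, not Quantum Serverless interfaces.

```python
# Purely hypothetical sketch of the serverless idea: declare the steps of a
# hybrid workflow and let the platform place them on quantum or classical
# resources. The @orchestrated decorator is an invented stand-in, defined
# locally so the sketch runs; it is NOT a real Quantum Serverless API.
from typing import Callable, List

def orchestrated(fn: Callable) -> Callable:
    # A real platform would ship fn to managed remote resources here.
    return fn

@orchestrated
def estimate_energy(params: List[float]) -> float:
    # Would build a circuit from params and call an Estimator primitive;
    # a classical placeholder keeps the sketch self-contained.
    return sum(p * p for p in params)

@orchestrated
def update_params(params: List[float]) -> List[float]:
    return [p - 0.1 for p in params]  # placeholder classical optimizer step

params = [1.0, 0.5]
for _ in range(3):                    # the quantum/classical loop the
    energy = estimate_energy(params)  # orchestrator would stitch together
    params = update_params(params)
print(energy, params)
```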

Intelligent orchestration 2024

Video — Katie Pizzolato, Quantum Theory, Applications & Software:

Optimizing between classical and quantum resources is going to be key to quantum advantage

Error mitigation and correction uploaded into an estimator or simulator Runtime Primitive (Qiskit Runtime)

Circuit Knitting

  1. Entanglement forging (see the formula sketch after this list)
  2. Quantum embedding
  3. Circuit cutting
  4. Circuit knitting toolbox 2025
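
For the curious, here is the core identity behind entanglement forging, as I understand it from IBM's paper: a 2N-qubit wavefunction is expressed via its Schmidt decomposition across a bipartition, so that expectation values can be reconstructed from N-qubit circuits combined with classical weights. A sketch in LaTeX notation:

```latex
% Schmidt decomposition across a bipartition (entanglement forging):
% expectation values of the 2N-qubit state |psi> are then reconstructed
% from N-qubit circuits for U and V, classically weighted by lambda_n.
|\psi\rangle = (U \otimes V) \sum_{n} \lambda_n \, |b_n\rangle \otimes |b_n\rangle
```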

Circuit libraries 2026. Incorporate optimized circuits into libraries

Finally, at the top of the stack, application services:

  1. Building software for model developers. Use Runtime and serverless to address specific use cases. Focus on domain experts
  2. Enable software applications for domain experts to bring quantum algorithms and data together
  3. Prototype quantum software applications, which will then become full quantum software applications
  4. Work with partners who will help us accelerate our path to software applications
  5. Integrate machine learning and kernel algorithms into model developer applications

Video — Back to Jay:

Elastic classical computing to knit quantum programs together to solve important problems. Quantum Serverless

IBM is ushering in the age of quantum-centric supercomputers

  1. Where quantum resources — QPUs — are woven together with CPUs and GPUs into a compute fabric
  2. With this compute fabric we will build the essential technology of the 21st century
  3. We’ve got a lot of science to do! [Not engineering?!]

Roadmap web page:

Our new 2022 Development Roadmap

These are our commitments to advance quantum technology between now and 2026.

Watch the video

Solving the scaling problem. Going beyond single-chip processors is the key to solving scale.

In 2023 we plan to introduce classical parallelized quantum computing with multiple Heron processors connected by a single control system. [No mention of how many Herons or total qubits, other than 133 x p]

  1. In 2024, we will debut Crossbill, the first single processor made from multiple chips. [Not clear what chips they would be since 408 is not a multiple of 133 or 127. If 4 chips, 4 x 102 = 408. If 3 chips, 3 x 136 = 408. Or maybe they use Heron but just not all of the qubits accessible]
  2. Dynamic Circuits. Extend what the hardware can do by reducing circuit depth
  3. In 2024 we will incorporate error suppression and error mitigation to help kernel developers manage quantum hardware noise and take further steps on the path to error correction. [But do further steps on the path to error correction actually refer to QEC with logical qubits, or just mitigation?]

The unique power of quantum computers is their ability to generate non-classical probability distributions at their outputs. In 2023 we will introduce Quantum Serverless to our stack and provide tools for quantum algorithm developers to sample and estimate properties of these distributions. [I think this is Qiskit Runtime Primitives. Why that makes it “serverless” is not clear!]

Intelligent orchestration

Circuit Knitting toolbox

HPCwire coverage:

  1. IBM Unveils Expanded Quantum Roadmap; Talks Up ‘Quantum-Centric Supercomputer’
  2. By John Russell
  3. May 10, 2022
  4. https://www.hpcwire.com/2022/05/10/ibm-unveils-expanded-quantum-roadmap-talks-up-quantum-centric-supercomputer/
  5. Mostly follows the blog post

Quantum Insider coverage:

  1. https://thequantuminsider.com/2022/05/10/ibms-latest-roadmap-shows-path-to-deliver-4000-qubit-system/
  2. https://thequantuminsider.com/2022/06/30/a-century-in-the-making-ibm-quantums-development-roadmap-building-the-future-of-a-nascent-technology/

My original proposal for this topic

For reference, here is the original proposal I had for this topic. It may be useful for readers wanting a more concise summary of this paper.

  • Thoughts on the 2022 IBM Quantum Roadmap update. Tower of Babel? Too complex, too many moving parts. Improper priority of scaling qubit count over basic qubit quality. Insufficient detail on milestones for full automatic and transparent quantum error correction. No hints on any improvement to qubit connectivity. No detail or even mention of improvements to classical simulation, either performance or capacity. No hints or mentions of any enhancements to Falcon, Hummingbird, or Eagle.

Summary and conclusions

  1. Major focus on modularity and scaling of hardware architecture, software and tools for applications, and partners and building an ecosystem.
  2. The hardware architectural advances are technically impressive.
  3. Too much focus on higher qubit count. With no clear purpose.
  4. No real focus on higher qubit fidelity. No specific milestones listed. It just comes across as being an afterthought rather than a primary focus. And right now quality (qubit fidelity) is seriously lagging behind scaling (qubit count).
  5. No attention given to qubit connectivity. No recognition of the problem or path to addressing it.
  6. A lot of extra complexity. With little benefit to developers.
  7. No real focus on a simpler developer experience. No serious attempt to minimize or reduce developer complexity. So-called Frictionless development is still very high friction.
  8. Too vague on milestones for full quantum error correction.
  9. No milestones or metrics for the path to quantum advantage. How will we know when we’ve reached quantum advantage, and what can we say about it?
  10. No true sense of exactly when we would finally arrive at practical quantum computing. Again, what specific metrics?
  11. No sense of when IBM would offer a commercial product or service. Still focused on research, prototyping, and experimentation — pre-commercialization.
  12. No hint of quality or connectivity updates for Falcon, Hummingbird, or Eagle.
  13. Good to see such transparency.
  14. But significantly more transparency and detail is needed.
  15. Unclear if sufficient to avert a Quantum Winter in the next two to three years.


For more of my writing: List of My Papers on Quantum Computing.
