
Thoughts on the 2025 IBM Quantum Roadmap Update

Jul 18, 2025

Here are my initial thoughts on the IBM Quantum roadmap update for 2025, released by IBM on June 10, 2025.

This document is essentially a collection of my posts on LinkedIn which focused on IBM’s roadmap update. The content is unchanged — except for a summary section added at the end. This document only makes it easier to find, read, and search the content from those posts.

Topics discussed in this paper:

  1. Resources
  2. A few questions about Nighthawk based on Jerry Chow’s keynote at the Economist Impact — a few weeks before the roadmap was released
  3. IBM finally updates their quantum computing roadmap (the color graphic version) to include the new Nighthawk and Loon systems for this year, 2025
  4. Sorry, but commitments to FTQC/QEC at this time are outright Fraud (F-R-A-U-D)!
  5. Some preliminary observations and ruminations on the new IBM Quantum roadmap
  6. Save some time and effort and just watch the video
  7. Possible solution to the mystery of how Nighthawk can execute more complex circuits with fewer gates
  8. More points about the fact that the gate limit (5,000) is identical between IBM Quantum’s current Heron quantum computer and the upcoming and supposedly superior Nighthawk system
  9. IBM needs to split their quantum roadmap in half, not just innovation vs. development, but FTQC vs. non-FTQC (or pre-FTQC)
  10. The era of CLAIMED Quantum Advantage
  11. Is quantum circuit knitting now officially dead, at least from IBM?
  12. Confusion about multi-module Nighthawks
  13. Some preliminary questions about the roadmap
  14. Oliver Dial’s podcast on the roadmap
  15. Just to highlight the critical distinction between simulation and computing
  16. I finally managed to finish slogging through IBM Quantum’s roadmap blog post with a fine-tooth comb, including rummaging through the two papers it references
  17. How many physical qubits will IBM Quantum’s Starling and Blue Jay have?
  18. Poll: How much of your work on NISQ machines before FTQC do you expect will be fairly directly applicable to production-scale on FTQC machines?
  19. Oops, there are TWO Starlings that might not be the same, an innovation Starling in 2028 and then the development Starling in 2029
  20. It’s unclear what the future or lack of a future for physical qubit circuits will be once fault-tolerant quantum computers become readily available in 2029
  21. IBM gives maximum circuit size for its quantum computers, but no hint as to maximum circuit depth
  22. There is no hint as to how IBM arrived at the gate-count limits for any of its quantum processors, either NISQ or FTQC
  23. Exactly how many physical qubits are there per logical qubit under IBM FTQC?
  24. The most essential question I have about any FTQC or QEC scheme is what the residual error rate, the logical error rate, is that will be left after all of this fancy correction
  25. The IBM Quantum roadmap blog post is a frustrating, maddening cross between a blog post and a white paper
  26. IBM appears to be 100% focused on simulation rather than analytical computation
  27. There is apparently NO support for high-performance logical any-to-any (all-to-all) qubit connectivity on the FTQC systems, Starling or Blue Jay!
  28. It’s not fully clear when IBM is really committing to delivery of Blue Jay and goals of 2000 logical qubits and 1 billion gates, whether definitely in 2033, or maybe 2033 or maybe beyond 2033
  29. Who exactly will get quantum advantage and when?
  30. It seems odd that only two machines would be needed for the full leap to full FTQC
  31. Whether we will finally be able to start running Shor’s factoring algorithm, albeit for rather small RSA keys, like for 50 or 100 bits with Starling, or maybe even 500 or 1,000 bits with Blue Jay
  32. An amusing graphical image that captures the essence of my commentary on IBM Quantum’s 2025 roadmap
  33. People will need transition guidance for circuit repetitions (shots)
  34. What is the maximum length of the inter-module coupler that replaces the old Flamingo-style l-coupler?
  35. Disconcerting misrepresentation of what a logical qubit is
  36. A handful of random points before I get to any remaining major issues
  37. A disconcerting misrepresentation of classical bits, transistors, and classical error correction codes
  38. What are c-couplers now really all about, what are they good for, and who can use them?
  39. What physical qubit coherence time is needed to make logical qubits function as a long-term quantum memory for Starling and Blue Jay, and beyond?
  40. Do people really know what they will or could do with 2,000 logical qubits in eight years, 2033, when Blue Jay becomes available?
  41. IBM is misleading when they say “we have successfully delivered on each of our milestones”
  42. Something is missing: circuit cutting and knitting
  43. Whether 200 or 2,000 logical qubits is really large scale
  44. Enough with all of the discussion of error correction for individual logical qubits; the truth is that error correction should be for the quantum state itself
  45. Revisiting the gate limits in the roadmap (5K Heron, 5K Nighthawk, 7.5K Nighthawk, et al) in terms of practical quantum circuits, particularly what they mean for analytical computation as opposed to simulation
  46. It really bugs me to see the headline of the blog post proclaiming that this is a “clear path” when it is anything but clear. It’s as clear as… mud!
  47. What exactly is the long pole in the tent for advancing from Starling in 2029 to Blue Jay in 2033 that will take FOUR YEARS?!
  48. Overall, Nighthawk is the brightest spot on the roadmap
  49. Some final thoughts

Resources

The actual roadmap update can be found in this IBM Quantum blog post, dated June 10, 2025:

The official IBM press release:

Jay Gambetta’s own announcement, on LinkedIn:

Jay’s LinkedIn article:

Save yourself some time, effort, and energy and go straight to this IBM Quantum video which gives a fairly decent summary of the entire new 2025 IBM Quantum roadmap — in just six minutes:

I posted my last formal commentary on the IBM Quantum roadmap back in 2022:

The announcement was in June, but the story starts earlier in May 2025…

A few questions about Nighthawk based on Jerry Chow’s keynote at the Economist Impact — a few weeks before the roadmap was released

Note: This was before IBM released the 2025 Roadmap, May 19, 2025. But it did mention Nighthawk.

Jerry’s LinkedIn post with link to video:

My reaction…

A few questions about Nighthawk:

  1. Any expectation for improvement in qubit and gate fidelity and coherence time, especially relative to the new Heron R3?
  2. Just to clarify, Nighthawk won’t have the new c-coupler for non-local on-chip connectivity?
  3. When IBM switched to the heavy-hex qubit connectivity topology, it was claimed that this was needed to support quantum error correction and logical qubits. So is the new 4-degree nearest-neighbor square-lattice topology of Nighthawk also intended to be better for error correction and logical qubits over the long term? Or is it a near-term stopgap to enhance physical qubit circuits until a newer system supports error correction and logical qubits, and not necessarily the topology for future systems? Assuming Nighthawk doesn’t have c-couplers, that would suggest that the error-corrected systems will have either a totally new topology or the Nighthawk topology plus c-couplers.
  4. When IBM introduced the heavy-hex topology, they said that the more limited connectivity enabled greater qubit fidelity — “the heavy-hex topology represents a slight reduction in qubit connectivity from previous generation systems, but, crucially, minimizes both qubit frequency collisions and spectator qubit errors that are detrimental to real-world quantum application performance.” So might the reverse be true with Nighthawk, that the greater connectivity might increase the error rate? Or has the technology advanced enough so that this will not be the case?
  5. Might the regression in qubit count from 127, 133, and 156 down to 120 be an issue for larger applications? Although the second and third generations of Nighthawk will in theory support larger circuits with multiple processor chips connected with m-couplers.

IBM finally updates their quantum computing roadmap (the color graphic version) to include the new Nighthawk and Loon systems for this year, 2025

Finally! IBM finally updates their quantum computing roadmap (the color graphic version) to include the new Nighthawk and Loon systems for this year, 2025. Flamingo? Gone. Poof! Just like that. It’s all about Nighthawk — for public systems — until Starling in 2029.

Alas, the roadmap remains confused about Heron, showing a single development box with 133 qubits even though Heron now has 156 qubits. There is still a single 133-qubit Heron in the IBM fleet. It was more of an innovation system that snuck out into the public fleet, a hybrid between innovation and development. And the 156-qubit version was more of a repackaged Flamingo. So confusing!! At a minimum, this confusion warrants at least a footnote. Not to mention that a new, improved Heron is due very soon.

YouTube video explaining the new roadmap:

https://lnkd.in/eyQdCAkm

I may have more comments once I more carefully digest the new roadmap.

Alas, it does, as I fully expected, misguidedly focus much too heavily on quantum error correction (QEC), with magic state distillation (required for full QEC??) promised in… 2029, demonstrated in an innovation Starling in 2028. I remain convinced that the entire quantum error correction enterprise (spanning research, engineering, and business) is hopelessly doomed to complete failure due to its sheer complexity. Fast, high-quality physical qubits will hit usability before logical qubits even get one leg of their pants on! Fix. The. Damn. Qubits!!! QEC is fantasy land. IBM (and everyone else) needs to get REAL. Real fast!

Sorry, but commitments to FTQC/QEC at this time are outright Fraud (F-R-A-U-D)!

Note: These comments of mine were in reaction to Jay Gambetta’s announcement of the 2025 update to the IBM Quantum roadmap.

Jay Gambetta’s announcement, on LinkedIn:

My reaction…

I need to take a much firmer stand on so-called quantum error correction (QEC), also referred to as fault-tolerant quantum computing (FTQC). Specifically, it is grossly irresponsible, incompetent, negligent, and outright fraudulent for any quantum computing vendor to put QEC/FTQC on ANY business roadmap at this time, or at any time before all of the required RESEARCH has been COMPLETED to discover, invent, and PROVE that it can be implemented and work as promised in the real world (okay, in a laboratory) for realistic use cases and at a useful scale. To do otherwise is outright fraud — promising something before there is credible evidence that it can really be provided.

Yes, that’s right: Fraud. FRAUD. F-R-A-U-D! Quantum Fraud!

IBM (and others!) may THINK and CLAIM that they now have the MAGICAL answer to how to do full-blown error correction, but it is abundantly clear that they don’t and that much more research is needed. And my prediction is that they (and the others) NEVER will, that the whole enterprise is far too complex and complicated to EVER work as promised.

The message should be loud and clear: Do the research first. Get satisfactory results. THEN we can talk about roadmaps!

Even for research, it’s irresponsible to use a roadmap which presumes results.

Too many people are currently living in Quantum Fantasy Land, on Quantum Fantasy Island! Hmmm… maybe Disney should get into quantum computing!! Or somebody should do a reality (yeah, right!!) TV show on it, maybe on YouTube.

So, for the near term, for the indefinite future, the top, main, exclusive priority should be to… Fix.The.Damn.Qubits!! Four nines or Bust! (For a start!) And FULL any-to-any (all-to-all) qubit connectivity. Besides… for QEC to REALLY work, at scale, it will require at least four nines of qubit and gate fidelity. Trust me on that. Or, don’t, and bear full responsibility for the disappointment and despair that will surely follow. Sigh.

Some preliminary observations and ruminations on the new IBM Quantum roadmap

Some preliminary observations and ruminations on the new IBM Quantum roadmap.

What’s your take on the fact that there’s no change in circuit size (quality?) from Heron to Nighthawk — both 5K? So, what’s the advantage?! Shouldn’t the improved connectivity be a big win? Or maybe it’s not such a big win if it introduces more crosstalk? And… it has fewer qubits than even Eagle, let alone Heron: 120 vs. 127 and 156! And if you really can have three processors for 360 qubits, what good is that if it doesn’t result in a larger circuit size?! Whatever happened to x 3 processors for Heron, anyway?! Seems rather confused.

Bottom line, what do you think you’ll be able to do with Nighthawk that you can’t already do with the current Heron, or the new r3, ibm_pittsburgh, that is claimed to be coming soon?

Maybe Nighthawk should have been billed as an innovation system, preparatory work for the 7.5K x 3 Nighthawk for Development in 2026.

In short, who will benefit the most from Nighthawk as opposed to Heron r3? And what will most users experience if they just shift their existing workload from Heron to Nighthawk? In theory. What’s the promise/commitment from IBM?

IBM Quantum definitely needs to clarify their story for Nighthawk — and Heron — in 2025!

Save some time and effort and just watch the video

Save yourself some time, effort, and energy and go straight to this IBM Quantum video which gives a fairly decent summary of the entire new 2025 IBM Quantum roadmap — in just six minutes.

The video:

The blog has a lot more technical detail, but start with this video, which is linked above the upper right corner of the roadmap graphic on the blog post.

The blog post itself — AFTER you watch the video:

https://lnkd.in/eKg2XGht

There is another video at the start of the blog post, but that focuses on the overall fault-tolerant quantum computing effort, rather than the overall roadmap, including before FTQC in 2029.

Possible solution to the mystery of how Nighthawk can execute more complex circuits with fewer gates

I think I may have solved at least part of the mystery as to why IBM Quantum’s new (later in 2025) Nighthawk quantum processor can execute only the same number of gates as the current Heron processor and that somehow this constitutes an advance. In fact, the new 2025 roadmap says that Nighthawk is “able to execute more complex circuits”, but it can’t execute more gates, which seems like a contradiction in terms!

I think the apparent contradiction is simply due to the fact that more complex circuits tend to require greater connectivity, which needs to be SIMULATED on Heron or other heavy-hex qubit topologies by using SWAP gates to move qubits around, which adds EXTRA GATES to the user’s source circuit, which comes out of the 5,000-gate limit. But on Nighthawk, those SWAP gates are not needed (or at least fewer of them), freeing up a larger fraction of the same 5,000-gate limit for use by a more complex source circuit. So the user’s SOURCE circuit can be larger on Nighthawk than on Heron, and it is the COMPILED circuit’s gates that come out of that 5,000-gate limit, not the SOURCE circuit’s gates. Tah Dah! Mystery solved. I think.
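To make the SWAP-overhead point concrete, here is a minimal sketch, assuming Qiskit is installed, that transpiles the same connectivity-hungry toy circuit onto a heavy-hex coupling map and onto a 4-degree square lattice, then compares the compiled gate counts and depth. The lattice sizes, basis gates, and toy circuit are illustrative choices of mine, not anything from IBM, and the exact numbers will vary with transpiler version and seed.

```python
# Sketch: compare routing overhead on heavy-hex vs. square-lattice connectivity.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

n = 20
source = QuantumCircuit(n)
source.h(0)
for target in range(1, n):       # entangle qubit 0 with every other qubit
    source.cx(0, target)

heavy_hex = CouplingMap.from_heavy_hex(5)   # heavy-hex lattice (Heron-style), 57 qubits
square = CouplingMap.from_grid(5, 5)        # 4-degree square lattice (Nighthawk-style), 25 qubits

for name, cmap in [("heavy-hex", heavy_hex), ("square lattice", square)]:
    compiled = transpile(source, coupling_map=cmap,
                         basis_gates=["cz", "rz", "sx", "x"],
                         optimization_level=1, seed_transpiler=42)
    print(f"{name}: ops={dict(compiled.count_ops())}, depth={compiled.depth()}")
```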

IBM’s roadmap blog post also says “Higher connectivity will allow Nighthawk to deliver roughly 16x the effective circuit depth of Heron, enabling our clients and users to run much more complex circuits.” But they don’t say where that 16x comes from. I doubt that it is due to SWAP gates, but maybe in extreme cases, it might. Anybody have any clues as to where the 16x metric comes from?

Also, does anybody know what “the effective circuit depth” (not total gates) is referring to?

The new square-lattice connectivity (4-degree nearest neighbor) might enable IBM to achieve a non-trivial jump in Quantum Volume, although they don’t measure that anymore anyway.

More points about the fact that the gate limit (5,000) is identical between IBM Quantum’s current Heron quantum computer and the upcoming and supposedly superior Nighthawk system

Several additional points about the fact that the gate limit (5,000) is identical between IBM Quantum’s current Heron quantum computer and the upcoming and supposedly superior Nighthawk system.

  1. It could be that the systems are now coherence time-limited (T1-limited) rather than limited by the error rate.
  2. Maybe an increase in circuit depth (such as due to a lower error rate) in Nighthawk is balanced by the fewer qubits of Nighthawk — 120 vs. 156. If 5,000 gates were executed on 156 qubits on Heron, that would mean a circuit depth of 5,000/156 = 32.05, while executing 5,000 gates on a 120-qubit Nighthawk would mean a circuit depth of 5,000/120 = 41.67, a significant improvement in circuit depth, just from using fewer qubits (see the sketch after this list).
  3. Is parallel gate execution improved in any way — like degree or quality or error rate — in Nighthawk from Heron?
  4. We still have no clarity as to what degree of parallel gate execution is supported by these IBM systems, or whether the user can control that, for example, to reduce or eliminate crosstalk between gates executing in parallel on nearby qubits. This is all a big black hole. IBM needs to discuss and document and support this better.
  5. How is this 5,000-gate limit actually measured, calculated, inferred, estimated, a wild guess (or SWAG), or just made up as marketing hype?! IBM needs to communicate the details more clearly — they’re communicating nothing at all.
  6. Does a 5,000-gate limit mean a circuit can execute 50 gates on 100 qubits, or 100 gates on 50 qubits, or 500 gates on 10 qubits, or… what, exactly, or even approximately? IBM needs to provide us with guidance as to how to think about this 5,000-gate limit.
  7. Guidance is needed for how to think about and adjust shot count (circuit repetitions) when advancing to new hardware.
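Regarding point 2, here is the trivial arithmetic as a sketch, with the big (and unverified) assumption that the gate budget is spread evenly across every qubit on the chip:

```python
# The arithmetic behind point 2: same gate budget, fewer qubits, more depth per qubit.
GATE_LIMIT = 5000
for name, qubits in [("Heron", 156), ("Nighthawk", 120)]:
    print(f"{name}: {GATE_LIMIT} gates / {qubits} qubits "
          f"= {GATE_LIMIT / qubits:.2f} gate layers per qubit")
```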

IBM needs to split their quantum roadmap in half, not just innovation vs. development, but FTQC vs. non-FTQC (or pre-FTQC)

IBM needs to split their quantum roadmap in half, not just innovation vs. development, but FTQC vs. non-FTQC (or pre-FTQC). If some of the work for FTQC will benefit non-FTQC, fine, but as the new roadmap stands, there are innovation machines and innovations which won’t directly benefit non-FTQC developers — and a bunch of non-FTQC machines (Nighthawks) which won’t benefit those interested in FTQC. These are really two different, very distinct camps, with very different interests.

People who will be actively developing over the next few years don’t need to care about what (FTQC) capabilities won’t be available to them in those years.

Similarly, people focused on FTQC don’t need to care about all of the innovations and machines that come out over the next few years that don’t feed fairly directly into FTQC.

To wit:

  1. Will any of the new capabilities of Loon in 2025 show up in Nighthawk in 2026 (or later)? Like six-way connectivity and c-couplers?
  2. If the main feature of Loon is two logical qubits, why should a Heron/Nighthawk user care about that?
  3. Will Loon ONLY be usable as (two!) logical qubits, or will it be able to run all quantum circuits as 112 physical qubits — with better qubit connectivity?
  4. Will any of the capabilities of the 2026 Kookaburra be available in the 2027 Nighthawk? Full, “Long” c-couplers, for example?
  5. Will the l-couplers of Cockatoo be available in a later Nighthawk?
  6. IBM has never been very clear about whether c-couplers will be only for FTQC logical-qubits or as a general feature for physical qubits.
  7. Will the later Nighthawks be multiple QPUs or single but multi-module QPUs? IOW, will any of the FTQC multi-module work of Flamingo, Crossbill, Kookaburra, and Cockatoo be incorporated into those later Nighthawks, or only in Starling and Blue Jay?

The era of CLAIMED Quantum Advantage

We may indeed be entering (about to, soon, maybe even already) the era of CLAIMED Quantum Advantage. Well, people can indeed claim anything. The open question is when and under what conditions we might enter the era of VERIFIED Quantum Advantage, which begs the question of what the technical criteria will be and who will be qualified and trusted to do such verification. Don’t expect any good answers or any great clarity.

It will be quite ironic if people start claiming Quantum Advantage well before we’ve seen a full-scale Fault-Tolerant Quantum Computer (FTQC). Pray tell, why would we need an FTQC if we can achieve Quantum Advantage without one?!! Get ready for some serious Quantum Spin!!

What a mess! And it’s only going to get worse. Quantum Tower of Babel, indeed!

Is quantum circuit knitting now officially dead, at least from IBM?

Is quantum circuit knitting (technically, circuit cutting and then knitting after execution) now officially dead, at least from IBM? It had been a fixture in IBM Quantum’s roadmap for some years now, including last year, with “Scalable circuit knitting — Circuit partitioning with classical reconstruction at HPC scale” on the Software innovation roadmap for 2025, but it’s gone from the updated 2025 roadmap. Anybody know what happened? Or, maybe it’s hidden under some other category now, or hibernating until it gets reincarnated at a later year. Maybe they tested it and found that it only worked (well) for relatively small circuits, smaller than the current 100+ qubit scale that is the current main focus.

I never really liked it. It superficially sounded like a great idea, but it smacked of being an illusory free lunch to me, with even IBM admitting that it wouldn’t work for all applications, maybe not even for a majority, or even a sizable minority. So, yes, a cool idea. A practical idea? Nope — except maybe in some niche cases. But not a general tool for everybody.

Confusion about multi-module Nighthawks

Following on from my recent post about the 5,000-gate limit for both Heron and the initial Nighthawk quantum computer systems from IBM Quantum, there seems to be some confusion about the follow-on Nighthawk systems which incrementally increase that gate limit to 7,500, 10,000, and eventually 15,000, as well as upping the qubit count by adding additional 120-qubit modules, initially to three and then to up to nine modules.

The first confusion is whether these are multiple QPUs in a single system or simply multiple modules in a single QPU. No hint from IBM. The blog post does refer to l-couplers rather than m-couplers, but is a bit confusing since it uses modules and l-couplers in the same sentence — “By 2028, Nighthawk will be able to run circuits with 15,000 gates, and we’ll be able to connect up to 9 modules with l-couplers to realize 1,080 connected qubits.”

The second confusion is whether the 7,500, 10,000, and eventually 15,000 gate limits are for the whole Nighthawk system (QPU?) or for each module. For now, I assume that the gate limits are for the whole ensemble of modules, but I’m not convinced.

I mean, if I have a 5,000-gate circuit that runs on the 120 qubits of a 5K Nighthawk, why can’t I just jam three copies of the circuit together, for a total of 15K gates, and run it on the three 120-qubit modules of a 7.5K Nighthawk, since the three copies of the circuit are fully independent and can run in parallel?! Granted, IBM may have some reason not to fully support that, but it at least superficially seems plausible. Maybe IBM could clarify.

Ditto for the nine-module 10K and 15K Nighthawks: why can’t I jam up to nine copies of the 5K-gate circuit together and let the copies run in parallel, for a total of 9 x 5K = 45K gates?

This would let me run three or up to nine shots of the circuit at the same time.
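Here is a quick sketch of the “jam independent copies together” construction using Qiskit’s tensor product. Whether a multi-module Nighthawk would actually accept such a circuit, and whether its gate limit would count the copies per module or in total, is exactly the open question; the point is only that the construction is trivial on the user’s side. The small random circuit stands in for a real 120-qubit, 5K-gate circuit.

```python
# Sketch: place independent copies of a circuit on disjoint qubits via tensor().
from qiskit.circuit.random import random_circuit

copies = 3                                             # e.g. one copy per 120-qubit module
base = random_circuit(num_qubits=5, depth=4, seed=7)   # stand-in for a real 120-qubit circuit

wide = base
for _ in range(copies - 1):
    wide = wide.tensor(base)        # independent copies on disjoint qubits

print("qubits:", wide.num_qubits, "total ops:", sum(wide.count_ops().values()))
```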

Some preliminary questions about the roadmap

I’m still digesting the new 2025 roadmap for IBM Quantum. Three related questions have popped up:

  1. Will IBM’s FTQC systems still be able to run physical circuits accessing physical qubits, or only logical circuits executing logical gates on logical qubits? And do it in a 100% compatible manner, with equal or better performance than the earlier non-FTQC systems.
  2. Depending on the answer to the first question, will Nighthawk be the end of the line for support from IBM for execution of physical circuits? Will there be a hard break with no future non-FTQC systems from IBM?
  3. Will customers who managed to achieve Quantum Advantage with Nighthawk in 2026, 2027, and 2028 and put applications into production be able to seamlessly continue to keep those applications in production, indefinitely, years after the 2028 Nighthawk and years after FTQC is available?

The answer? Maybe, but there’s no great and firm clarity, just this one weak statement in the blog post for the roadmap:

“Now, while we’re confident in our plans to deliver fault-tolerance by 2029, we expect to achieve quantum advantage sooner — by 2026. We’ve laid out the tools needed to realize and extend quantum advantage with the updated IBM Quantum Development Roadmap, and we are working to ensure that advantages realized before 2029 will run seamlessly on the fault-tolerant quantum computers of 2029 and beyond. Waiting until 2029 to pursue quantum computing could cause companies to fall behind those who start developing advantage-scale applications now.”

The operative words there: “we are working to ensure that advantages realized before 2029 will run seamlessly on the fault-tolerant quantum computers of 2029 and beyond.”

But that doesn’t speak to whether that means support for physical circuits on physical qubits, or whether IBM will simply offer conversion tools to convert a physical Nighthawk circuit to a logical Starling/Blue Jay circuit.

The real hope is that elite algorithm designers will be able to exploit physical circuits for much higher performance and capacity than can be achieved with logical gates and logical circuits.

Maybe IBM could clarify.

Oliver Dial’s podcast on the roadmap

Another approach to presenting IBM Quantum’s 2025 roadmap, by Oliver Dial of IBM.

About 32 minutes, with a little more detail and color.

It’s actually good to watch or read multiple presentations of the roadmap since there is so much detail that most mere mortals will only digest a fraction of it on each viewing/reading.

Link to Oliver’s post on LinkedIn with links to the podcast:

Just to highlight the critical distinction between simulation and computing

Just to highlight, again, the critical distinction between simulation and computing. IBM Quantum certainly has been focusing intensely on simulation more recently, but not on true, analytical computation. Which isn’t a great surprise, since the relatively high error rate and the lack of full any-to-any (all-to-all) qubit connectivity effectively preclude any significant, non-trivial, non-toy analytical computation. So-called many-body systems are inherently localized, so their simulation works reasonably well on a quantum computer with only limited, local qubit connectivity.

Quantum Volume (QV), a metric invented by IBM, was actually a decent measure of computational ability. But IBM never got past QV 512 due to error rate and connectivity. As they shifted their focus to simulation, QV was no longer relevant.

I billed Eagle (and Osprey) as a dud in large part because my own focus is on computation, analytical computation, not simulation. In the early days of Eagle there were NO real-world quantum algorithms with 40 or 50 let alone 100 qubits, since QV was so weak. But by shifting their focus to non-analytical simulation, IBM was no longer constrained by a weak QV, so quantum algorithms with 40, 50, 80, and even 100 qubits became realistic, but that’s only for simulation, not true analytical computation.

Hmmm…

  1. Will Starling and Blue Jay be similarly constrained to simulation, or will IBM address this limitation of weak connectivity for computation? IBM hasn’t been clear as to whether c-couplers are only for error correction or actually enable full any-to-any (all-to-all) qubit connectivity for all logical qubits.
  2. Will FTQC be gross overkill for many or most simulation use cases? Will a moderately lower error rate and modestly better connectivity (6 or 15-degree) be most of what most simulation users actually need?

I finally managed to finish slogging through IBM Quantum’s roadmap blog post with a fine-tooth comb, including rummaging through the two papers it references

Okay, yesterday I finally managed to finish slogging through IBM Quantum’s blog post for their new 2025 roadmap, with a fine-tooth comb, including rummaging through the two papers it references. Unlike with their original roadmap and earlier updates, I’m not going to write a long and detailed informal paper on everything I learned and concluded. I won’t bother to highlight ALL of the notable (or offensive) aspects of the post or roadmap, but I will endeavor to highlight some of the aspects which stand out especially for me. I’ve already had a bunch of posts on the roadmap.

I had previously skimmed through the roadmap and digested various aspects, but every time I tried to literally read it from the top, I would quickly run into something odd, unclear, or confusing that sent me down the rabbit hole of deciphering the two papers to clarify things, sometimes successfully, sometimes not, or with more questions than I started with! Or when I could successfully read parts, I’d run into some outrageous or outright offensive hype that stopped my ability to calmly read the text, putting off my next attempt to read the post for a few days before I could read it more calmly.

I’ll try to limit my posts on the roadmap to one major question or issue. Maybe two or three related ones, at most. It may take me a half dozen or a dozen posts to work through even just the most important stuff.

I won’t repeat my previous first-look posts, but my overall impression was that although there are indeed some bright spots in the roadmap, overall, it’s a real mess, with an incomplete story for physical qubits, and an absolutely horrendously complex nightmare for so-called quantum error correction and so-called fault-tolerant quantum computing. Absolutely horrific, in my view. IBM (and ALL of us) would be much better off turning QEC/FTQC into a science fiction novel (YouTube series?!!) and focusing 200% of their technical talent on fixing their broken physical qubits and delivering full any-to-any (all-to-all) qubit connectivity.

My next post on the roadmap will address the question: How many physical qubits will Starling and Blue Jay have?

How many physical qubits will IBM Quantum’s Starling and Blue Jay have?

How many physical qubits will IBM Quantum’s Starling and Blue Jay have?

My calculation…

  1. IBM uses a modular architecture with 12 logical qubits in each module.
  2. IBM hasn’t said what the code block size will be for either system. The blog post and paper discuss TWO codes, the [[144,12,12]] gross code and the [[288,12,18]] two-gross code. Both codes have the same number of logical qubits in each code block — 12.
  3. The code block size doesn’t tell us the total number of physical qubits — even for a single code block. There are also syndrome qubits, 144 and 288 of them, respectively, for the two codes. So that’s 288 and 576 physical qubits… so far.
  4. Each module also has a Logical Processing Unit, LPU, for implementing internal operations on the logical qubits in a module, which itself contains a number of physical qubits, 90 for the 144-qubit gross code and 158 for the 288-qubit two-gross code.
  5. So that’s 288 + 90 = 378 physical qubits for the gross code, and 576 + 158 = 734 for the two-gross code.
  6. There are some additional physical qubits for things like the Magic State Factory, a separate module, as well as the possibility of spare modules to replace faulty modules, but I’ll skip over that for my purposes here.
  7. One of the 12 logical qubits is reserved as an ancilla for internal processing, so that leaves 11 logical qubits per module.
  8. So rather than divide 200 and 2,000 by 12, we divide 200 by 11 and get 18.18, and divide 2,000 by 11 and get 181.82.
  9. Oops, they didn’t divide evenly! Those logical qubit numbers were probably approximate, and rounded up. I’ll round down to get more realistic numbers for the module counts — 18 and 181.
  10. And since there will be (or could be) three modules per cryostat, I’ll round down to the closest multiple of 3, giving us 18 and 180 modules.
  11. That implies 6 cryostats for Starling and 60 cryostats for Blue Jay. Although I’m wondering if Blue Jay might be able to hold nine modules per cryostat, meaning 20 cryostats, which seems more in line with the graphical rendering we have seen.
  12. Multiplying those module counts by 11, I get 198 and 1,980, logical qubits.
  13. Now I’ll split into two pairs of numbers, one pair for each code block size.
  14. For the 144-qubit gross code, multiplying the physical qubits per module times the number of modules, I get 378 * 18 = 6,804 and 378 * 180 = 68,040 physical qubits for Starling and Blue Jay.
  15. And finally, for the 288-qubit two-gross code, multiplying the physical qubits per module times the number of modules, I get 734 * 18 = 13,212 and 734 * 180 = 132,120 physical qubits for Starling and Blue Jay.

Take your pick of assumptions!
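For anyone who wants to fiddle with the assumptions, here is the calculation above as a tiny script. The 378 and 734 physical-qubits-per-module figures, the 11 usable logical qubits per module, and the 3 modules per cryostat are my readings and assumptions, not official IBM numbers, so treat the output as a ballpark estimate only.

```python
# Rough sketch of the module / cryostat / physical-qubit estimate above.
MODULE_PHYSICAL = {"gross": 288 + 90, "two-gross": 576 + 158}   # (data + syndrome) + LPU
USABLE_LOGICAL_PER_MODULE = 11       # 12 logical qubits per code block, 1 reserved as ancilla
MODULES_PER_CRYOSTAT = 3             # assumption based on the roadmap renderings

for system, target_logical in [("Starling", 200), ("Blue Jay", 2000)]:
    modules = target_logical // USABLE_LOGICAL_PER_MODULE      # round down, as in the text
    modules -= modules % MODULES_PER_CRYOSTAT                  # whole cryostats only
    cryostats = modules // MODULES_PER_CRYOSTAT
    logical = modules * USABLE_LOGICAL_PER_MODULE
    for code, phys_per_module in MODULE_PHYSICAL.items():
        print(f"{system} ({code}): {modules} modules, {cryostats} cryostats, "
              f"{logical} logical qubits, {modules * phys_per_module:,} physical qubits")
```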

Poll: How much of your work on NISQ machines before FTQC do you expect will be fairly directly applicable to production-scale on FTQC machines?

Contemplating IBM Quantum’s 2025 roadmap and the transition from NISQ to fault-tolerant quantum computing (FTQC), a critical question arises: How much of the work you do on NISQ machines before the advent of FTQC do you expect will be fairly directly (and hopefully automatically) transitioned to production-scale applications on FTQC machines, such as Starling (200 logical qubits) in 2029 and Blue Jay (2,000 logical qubits) in 2033?

AND, that it will be advantageous to do so. That it will offer a significant advantage over what you can already do on NISQ. Or that a redesign for FTQC would open up great new opportunities and advantages that your NISQ work cannot exploit.

For example, are you using variational methods on NISQ devices because they can’t handle the complex circuits required for quantum phase estimation (QPE), such that once FTQC is available, you will be much more interested in using QPE on FTQC?

One option that wouldn’t fit: I DO NOT WANT to even think about it! It’s too scary a proposition, and I’m afraid the answer would be none of it, and that we would have to start over from scratch with a whole new set of design constraints — and opportunities.

Another option that wouldn’t fit: Well, sure, we COULD run all of our existing NISQ work unchanged on FTQC, but there wouldn’t be any net benefit from doing that. Maybe because it’s optimized for a smaller number of qubits.

Or, maybe it works on NISQ using lots of explicit error mitigation, but then running it unchanged except for removing the error mitigation fails to run correctly and gives invalid results.

Another option: Ditch NISQ now (a great, catchy slogan!) and focus all algorithm work on scalable 50-qubit algorithms run on simulators, so that there will be a great chance that the algorithms will run unchanged on FTQC and can then be trivially scaled up to more, real, QEC logical qubits.

Vote for the closest option and then comment with any clarifications, or Like my comments for the other choices.

If you don’t have current NISQ work or don’t want to publicly answer for it, vote for the advice you would give others about the NISQ/FTQC transition.

The poll post on LinkedIn:

The actual poll question:

  • How much of your work on NISQ machines before FTQC do you expect will be fairly directly applicable to production-scale on FTQC machines?

The options:

  1. Virtually all of it (0%)
  2. Most, but not all of it (27%)
  3. Only a modest fraction of it (36%)
  4. Little or none, start over (36%)

Oops, there are TWO Starlings that might not be the same, an innovation Starling in 2028 and then the development Starling in 2029

Studying the IBM Quantum 2025 roadmap a little more carefully, I realize that there are TWO Starlings that might not be the same, an innovation Starling in 2028 and then the development Starling in 2029, which is the one that I have been focusing on. How similar or dissimilar the two are is as clear as… mud. So much for the roadmap being a… “clear path” to FTQC!

One might assume that the two systems are identical, or close to it, but I notice that the 2028 Starling diagram says “O(3000) qubits”. IBM has not disclosed a physical qubit count for the 2029 Starling, which is why I calculated one. IBM also does not disclose a logical qubit count for the 2028 Starling. The blog post does use the language “In 2029, Starling will scale to…”, strongly implying that the 2028 Starling is a scaled-down version of the 2029 Starling.

My calculation in my previous post indicated that a 2029 Starling with 200 logical qubits using the 144-qubit gross code would require approximately 6,804 physical qubits, over twice the O(3000) number IBM cited for the 2028 Starling.

Using my calculation of 378 physical qubits per module for the 144-qubit gross code, that would be roughly 3000/378 = 8 modules, times 11 logical qubits per module would give 88 logical qubits. So, the 2028 Starling would be somewhat less than half the 2029 Starling. That’s cool, no problem. I just wish IBM would disclose this type of detail.

If the 2028 “proof-of-concept” (the blog post uses that language) Starling were to use the two-gross code, it would have 3000/734 = 4 modules, times 11 logical qubits would give 44 logical qubits. The blog post diagram does say “Multiple blocks of gross code…”, but that might be using the term in a generic sense.
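As a quick sanity check on the two estimates above (again, my speculation, not IBM’s numbers), here is the arithmetic for what an “O(3000) qubit” 2028 Starling might hold under each code:

```python
# My speculation only: modules and logical qubits implied by ~3,000 physical qubits,
# using the same assumed physical qubits per module as earlier.
for code, phys_per_module in [("gross", 378), ("two-gross", 734)]:
    modules = round(3000 / phys_per_module)
    print(f"{code}: ~{modules} modules, ~{modules * 11} logical qubits")
```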

In short, if someone says “Starling”, you need to ask them if they are talking about the 2028 innovation Starling (“proof-of-concept”) or the 2029 development Starling.

FWIW, the papers do not make any mention of the systems named in the roadmap, so the papers offer no clues to answer any questions about the roadmap itself, just technical details about the gross and two-gross (BB) codes.

It’s unclear what the future or lack of a future for physical qubit circuits will be once fault-tolerant quantum computers become readily available in 2029

One concern or uncertainty I have about IBM Quantum’s 2025 roadmap is that it’s unclear (despite the headline on the blog post!!) what the future or lack of a future for physical qubit circuits will be once fault-tolerant quantum computers become readily available in 2029.

  1. Will IBM continue to support physical qubit circuits indefinitely or rapidly sunset support for them?
  2. Will it be trivial to port physical circuits from Nighthawk to Starling and Blue Jay with no degradation of performance and a guarantee that results will be the same?
  3. Might there be some lingering performance benefit from raw physical qubits, plus a reprieve from any potential operational glitches by staying with Nighthawk?
  4. What happens between 2029 and 2033 for people who create circuits using over 200 physical qubits on Nighthawk between now and 2033, since only the 200-logical qubit Starling will be around until the big 2,000-logical qubit Blue Jay surfaces in 2033? I would expect them to want to stay with Nighthawk until Blue Jay is not only initially available, but has proven itself for at least three months to a year. And will 250 to 1,000-physical qubit Nighthawk circuits run on Starling and Blue Jay with no detectable negative compatibility issues?
  5. How long after 2029 will IBM fully support Nighthawk? At least until after Blue Jay in 2033 for 250 to 1,000-qubit circuits, right?
  6. What? No incremental Nighthawk improvements after 2028? Like support for 20K and 25K-gate circuits and incrementally lower error rates?

IBM gives maximum circuit size for its quantum computers, but no hint as to maximum circuit depth

IBM gives maximum circuit size for its quantum computers, but no hint as to maximum circuit depth. So when they say that Heron supports 5,000-gate circuits, presuming quite a few gates can execute in parallel, coherence time is presumably limiting circuit depth, along with the error rate, so the limit on total gates is at least partially driven by coherence time, which shows up as a maximum circuit depth. IBM needs to disclose circuit depth limits, explicitly.

Whether the IBM limits are driven primarily by coherence time and circuit depth or by error rate is unclear.

Ditto for Nighthawk. Is maximum circuit depth improving or not from 5K to 7.5K to 10K and to 15K gate limits? Show us the circuit depth limits!
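As a purely illustrative aside, with stand-in numbers that are not IBM specifications, a coherence-time budget turns into a rough depth ceiling as roughly coherence time divided by the time per parallel gate layer:

```python
# Purely illustrative, assumed numbers (not IBM specs): rough depth ceiling from coherence time.
T2_US = 150.0       # assumed physical qubit coherence time, in microseconds
LAYER_NS = 100.0    # assumed duration of one parallel gate layer, in nanoseconds

depth_ceiling = (T2_US * 1000.0) / LAYER_NS
print(f"rough depth ceiling: ~{depth_ceiling:.0f} gate layers")
```

Real limits also depend heavily on the error rate, which is exactly the ambiguity being complained about here.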

Now, when we get to FTQC with Starling and Blue Jay, the concept of coherence time is no longer relevant, since the first half of FTQC is to support Quantum Memory, which enables quantum data to persist indefinitely, well beyond raw coherence time, in theory. There is still some lingering error rate, the logical error rate, even for quantum memory.

There may be a separate logical gate error rate distinct from the quantum memory error rate.

So the question arises as to exactly what factors determine the maximum circuit size gate counts given for Starling (100 million gates) and Blue Jay (1 billion gates.) It’s no longer raw coherence time. How much of those numbers is based on a presumption of circuit width?

Actually the 1 billion vs. 100 million difference between Blue Jay and Starling could simply be accounted for by the fact that Blue Jay has ten times as many logical qubits as Starling, so 1 billion gates may simply be a circuit ten times as wide but with the exact same depth.

Maybe the 100 million gate limit for Starling is based on a presumed circuit width of 200, so 100 million divided by 200 implies a circuit depth of 500,000 gates. I’m just guessing here! IBM needs greater transparency and disclosure. The papers don’t talk about these issues.

If we assume a circuit width of 2,000 qubits for Blue Jay, 1 billion gates divided by 2,000 logical qubits is the same circuit depth of 500,000 gates as for Starling.
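Here is that guess as a tiny sketch; the assumed circuit widths are mine, not anything IBM has stated:

```python
# My guess at the implied logical circuit depth, assuming the advertised gate
# totals presume circuits as wide as the full machine (an assumption, not IBM's).
for system, max_gates, assumed_width in [("Starling", 100_000_000, 200),
                                         ("Blue Jay", 1_000_000_000, 2000)]:
    print(f"{system}: implied depth ~{max_gates // assumed_width:,} gate layers")
```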

Why no increase in maximum circuit depth for Blue Jay?

And this begs the question of what exactly is limiting circuit depth, first for Starling, and then for Blue Jay, with no increase in maximum circuit depth for Blue Jay. In general, circuits scale up in both dimensions, depth and width.

The bottom line here is that IBM’s implementation of quantum memory may have some non-trivial lingering error or a logical qubit coherence time of some sort. More transparency and disclosure is needed.

There is no hint as to how IBM arrived at the gate-count limits for any of its quantum processors, either NISQ or FTQC

One important point I neglected to mention yesterday in my latest post on IBM Quantum’s 2025 roadmap, related to circuit depth, is that there is no hint as to how IBM arrived at the gate-count limits for any of its quantum processors, either NISQ or FTQC. Why 100 million gates for Starling? Why not 50 million or 200 million? What’s the specific reason?

IBM gives a number of 100 million gates for Starling and one billion gates for Blue Jay, with no justification offered. These machines don’t exist yet, so clearly these were not measured limits. Presumably these numbers were modeled in some way. Or, maybe, they are just ballpark SWAGs (Scientific Wild-Assed Guesses!), or aspirational hopes.

Part of the benefit of FTQC is that logical qubits are now quantum memories, with an indefinite lifetime, not constrained by the coherence time of the physical qubits. So, if that’s true, why isn’t circuit length indefinite as well, effectively infinite?

Clearly, there is some sort of limiting factor. But despite the title of IBM’s blog post, they aren’t clear about it. Four possibilities:

  1. Cumulative logical error rate for gate execution. Related to circuit depth.
  2. Despite this being a quantum memory with indefinite lifetime, it’s not perfect, so maybe there is effectively a logical coherence time. So, exactly what is that!
  3. The gate counts are purely arbitrary, aspirational.
  4. Some other, undisclosed factor.

Will IBM disclose the reason, or not?

The papers do not discuss this issue since they don’t discuss the roadmap itself at all.

And this isn’t even getting into issues related to T gates on FTQC. What assumptions is IBM making about the fraction of gates that are T gates? If a user circuit uses a smaller fraction of T gates, can it be bigger? If it uses a larger fraction of T gates, will it have to be smaller than the limit?

And this also isn’t getting into the distinction between source gates and compiled gates. IBM isn’t “clear” as to which they are referring to in the roadmap!

Exactly how many physical qubits are there per logical qubit under IBM FTQC?

Exactly how many physical qubits are there per logical qubit under IBM FTQC?

Alas, IBM doesn’t make it fully CLEAR at all.

It’s 12, right, 12 physical qubits per logical qubit?

Uh, no… it’s 12 logical qubits for the 144-qubit gross code in a single “code” module. That’s 12 DATA qubits per logical qubit (12 x 12 = 144 per module), but there are another 144 SYNDROME qubits, or 12 per logical qubit, for a total of 24.

But then there are another 90 physical qubits for the LPU (Logical Processing Unit), one LPU per code module, which is 90 / 12 = 7.5 more per logical qubit, for a total of 31.5 physical qubits per logical qubit.

But, one of the 12 logical qubits is for an internal ancilla, so there are only 11 logical qubits for the 144-qubit gross code, so it’s (144 + 144 + 90) / 11 = 378 / 11 = 34.36… (repeating the 36) physical qubits per logical qubit for the gross code.

For the two-gross code the math is not exactly double since the LPU requires 158 qubits, so it’s (288 + 288 + 158) / 11 = 734 /11 = 66.72… (repeating the 72) physical qubits per logical qubit, which is slightly less than double the gross code — double would have been 68.72… (repeating the 72.)

So, for Starling that works out to 34.36 x 200 ≈ 6,873 total physical qubits for the gross code and 66.73 x 200 ≈ 13,345 total physical qubits for the two-gross code.

That’s 6,873 / 378 = 18.2, so 18 or 19 modules (quantum chips) for the gross code, and 13,345 / 734 = 18.2, again 18 or 19 modules for the two-gross code, roughly. The module count is the same either way, just more physical qubits per chip.

For Blue Jay that works out to 34.36 x 2,000 ≈ 68,727 physical qubits for the gross code and 66.73 x 2,000 ≈ 133,455 physical qubits for the two-gross code.

That’s 68,727 / 378 ≈ 182 modules (quantum chips) for the gross code and 133,455 / 734 ≈ 182 modules for the two-gross code, roughly.

Caveat: These are all rough, approximate calculations and do not account for the magic state factory (MSF), the universal adapter(s), bridge qubits (maybe) and the potential for spare modules to replace bad and failing modules; it’s just looking at logical qubits per code module. All of that does indeed matter, although it might be relatively minor relative to code qubits, but I just don’t have enough information at this stage.

Curious why IBM doesn’t feature this metric (these THREE metrics, physical qubits per logical qubit, total physical qubits, and module count) more prominently. Or feature it (them) AT ALL!
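For completeness, here is the same per-logical-qubit arithmetic as a tiny script, under my assumed per-module figures (data plus syndrome plus LPU qubits, 11 usable logical qubits per module); it just reproduces the ratios and totals above:

```python
# Per-logical-qubit ratio under my assumptions: (data + syndrome + LPU) physical
# qubits per module, divided by 11 usable logical qubits (1 of the 12 is an ancilla).
per_module_physical = {"gross": 144 + 144 + 90, "two-gross": 288 + 288 + 158}

for code, phys in per_module_physical.items():
    ratio = phys / 11
    print(f"{code}: {ratio:.2f} physical qubits per logical qubit")
    for system, logical in [("Starling", 200), ("Blue Jay", 2000)]:
        print(f"  {system}: ~{ratio * logical:,.0f} physical qubits, "
              f"~{ratio * logical / phys:.0f} modules")
```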

The most essential question I have about any FTQC or QEC scheme is what the residual error rate, the logical error rate, is that will be left after all of this fancy correction

The most essential question I have about any FTQC or QEC scheme is what the residual error rate, the logical error rate, is that will be left after all of this fancy correction. IOW, how many nines of qubit and gate fidelity are they adding beyond the physical qubit and gate fidelity.

So, how many nines of qubit and gate fidelity is IBM adding beyond the physical qubit and gate fidelity?

The simple answer: Very unclear!

But, despite this urgent necessity, IBM does not have a clear answer to this. Really!

Even rummaging through their “gross” bicycle paper, you can’t easily find it. Really!

Oh, indeed, they do have a fair amount of discussion of logical error rate and even a bunch of numbers, a lot of numbers, but no hint of anything resembling the simple and crisp metrics that we are so used to with classical computing. Single-qubit error rate? Nope! Two-qubit error rate? Nope! Measurement error rate? Nope! None of that!!

The bottom line is that they just don’t know, yet. Really!

The difficulty seems to be the sheer complexity of QEC, these codes.

It really is all way too complicated. Which is a large part of why I remain convinced that full-scale QEC is never going to happen — and live up to all of the wild promises being made for it.

Terminology should be simple, but they introduce terms like shift automorphism, idles, in-module measurement, inter-module measurement, T injection, non-Clifford gate injection, magic states — and their distillation and “cultivation”. Good luck trying to reconcile all of the new terminology with our beloved physical gate terminology! And then be able to explain it to those unfamiliar with the new terminology!

And all of these differences have their own logical error rates.

And then you get multiple sets of error rates based on whether the gross or two-gross code is used, as well as whether the starting point is three or four nines of physical gate fidelity.

Much of this madness is because this is modeling of a hypothetical machine, not measurement of a real machine.

There’s no clarity as to whether they are using single-gate or two-gate fidelity for physical gates as their baseline. Or measurement error rate, for that matter.

I was going to give at least a few of the numbers, but it would require so much fine print that it’s just not worth it.

What a confused, very confused mess!

Hopefully somebody else will untangle this!

I may have a follow-up post on logical error rate when and if I can figure out how to present it in a reasonably clear manner.

The IBM Quantum roadmap blog post is a frustrating, maddening cross between a blog post and a white paper

The IBM Quantum blog post for their 2025 FTQC roadmap is a frustrating, maddening cross between a blog post and a white paper. It adds a curious mish-mash of technical details, but not enough to understand what is really going on functionally.

It muddies, muddles, and confuses the high-level view, while depriving us of a full white paper (not an academic-style paper, which they do provide two of) which has all the relevant, interesting technical details, some of which actually belong in the roadmap anyway.

Things like…

  1. How many physical qubits?
  2. How many physical qubits per logical qubit?
  3. How many modules (chips)?
  4. How many cryostats?
  5. How many bays or system units?
  6. Logical gate execution time.
  7. Logical error rate, for both single and two-qubit logical gates.
  8. Intra- and inter-module physical gate error rate.
  9. Intra- and inter-module logical error rate.
  10. Logical measurement error rate.
  11. The rationale for 200 and 2,000 logical qubits for Starling and Blue Jay. Why not 32, 50, 64, 80, 100, 250, 500, 1000, 5,000, 10,000? (And make those multiples of 11 (not 12, the 12th is an ancilla!), or even 33 to allow for three modules per cryostat.)
  12. Why the big gap from 2029 to 2033 — FOUR years with no new or improved machines? Fill the gap with stepping stones and incremental improvements. Maybe even two per year as the technology begins to stabilize.
  13. Offer at least a speculative preview of what can be expected after 2033. Is 2,000 really the end of the line? What can be expected in 10, 12, 15, and 20 years?
  14. The future of physical qubits and gates after FTQC debuts. No new physical-gate machines after 2028? Really?! Why not?
  15. The transition plan from physical qubits to logical qubits.
  16. Circuits with hundreds of physical qubits that will run fine in 2027 on Nighthawk won’t run on FTQC until… 2033! Really?! Does that make ANY sense?!
  17. And they really do need a full academic paper for the proposed roadmap for FTQC, with full details for the projected systems. The “gross” bicycle code paper doesn’t do that.

That kind of stuff.

In short, don’t clutter up the blog post and leave it vague as well, and don’t force people to dive into academic papers for such basic information, especially when the academic papers don’t even have all of the interesting technical details.

IBM appears to be 100% focused on simulation rather than analytical computation

IBM appears to be 100% focused on simulation rather than analytical computation.

This may be fine for simulation algorithms such as for quantum physics and quantum chemistry, but that’s not a sound basis for more general analytical computation, for, say, business and other commercial processes.

From a practical perspective, this surfaces in the form of IBM’s complete disregard if not outright disdain for the need for and the power of full any-to-any (all-to-all) qubit connectivity, which also surfaces in the lack of respect for the utility of large quantum Fourier transforms which enable quantum phase estimation, which is critical for true analytical computation and the exploitation of quantum parallelism to achieve truly dramatic quantum advantage.

To be sure, simulation does have a lot of utility, but to effectively exclude true analytical computation is just plain wrong. In my view.

Just saying.

There is apparently NO support for high-performance logical any-to-any (all-to-all) qubit connectivity on the FTQC systems, Starling or Blue Jay!

I was surprised to see that despite all the talk about couplers, adapters, and bridges, there is apparently NO support for high-performance logical any-to-any (all-to-all) qubit connectivity on the FTQC systems, Starling or Blue Jay. Really!!

In fact, the post and papers don’t even mention what connectivity model is available at the logical level. I would presume 4-degree nearest neighbor, but that’s just speculation.

The post and an imaginary white paper should be much more explicit as to what capabilities can be seen by a logical quantum algorithm designer.

This means that quantum algorithms will still need to rely on SWAP gates. Really?!

This may be fine for simulation algorithms such as for quantum physics and quantum chemistry, but that’s not a sound basis for more general analytical computation, for, say, business and other commercial processes.

From a practical perspective, this surfaces in the form of IBM’s complete disregard if not outright disdain for the need for and the power of full any-to-any (all-to-all) qubit connectivity, which also surfaces in the lack of respect for the utility of large quantum Fourier transforms which enable quantum phase estimation, which is critical for true analytical computation and the exploitation of quantum parallelism to achieve truly dramatic quantum advantage.

To be sure, simulation does have a lot of utility, but to effectively exclude true analytical computation is just plain wrong, wrongheaded, and misguided. But, this is IBM’s prerogative, the path they have chosen.

It’s not fully clear when IBM is really committing to delivery of Blue Jay and goals of 2000 logical qubits and 1 billion gates, whether definitely in 2033, or maybe 2033 or maybe beyond 2033

It’s not fully clear when IBM is really committing to delivery of Blue Jay and its goals of 2000 logical qubits and 1 billion gates. In particular, whether their official target is strictly 2033, or “2033 or beyond” (“2033+”).

I just realized that I may not have been reading the IBM Quantum roadmap correctly (or maybe I was!!). I have been referring to 2033 as when Blue Jay will be delivered, but technically, the roadmap graphic says “2033+” with the text “Beyond 2033, quantum computers will run circuits comprising a billion gates on up to 2000 logical qubits, unlocking the full power of quantum computing.” That text strongly suggests that Blue Jay will be delivered “Beyond 2033”.

So, what’s the true story, does IBM intend to deliver Blue Jay in 2033 or not? If so, just drop the “+” after “2033.”

Maybe the bottom line is that IBM hasn’t figured out what they want to say that directly applies to “2033” versus to “Beyond 2033”.

The raw text roadmap for “2033+” does say “For the future, we will scale beyond Blue Jay with the development of distributed quantum computing, bringing together the fields of quantum communication and quantum computation.” That makes more sense in a separate column for “Beyond 2033.” Technically that text is misleading since such a distributed quantum computing environment would, by definition, rely on quantum NETWORKING, not quantum COMMUNICATION, which are two distinct capabilities. Who actually reviews this stuff?!!

What does the FTQC blog post say about 2033? Nada! Really, the blog post doesn’t even mention a year for Blue Jay, other than that the rendering for the Poughkeepsie data center labels Blue Jay as “2033+” and the only reference to Blue Jay is in the caption for that rendering!! Good grief, is this any way to run a professional business?!!

IBM may clarify this with public statements, social media, or private communications or events, but they really should clarify in the roadmap itself, and the accompanying blog post.

“clear path”… hardly!

Who exactly will get quantum advantage and when?

Who exactly will get quantum advantage and when? I mean, it may indeed be a feather in someone’s cap to be the first, or one of the first, but who really cares about that other than a relative handful of quantum insiders? Rather, when will quantum advantage be truly widespread, available literally to anyone who wants it, relatively easily and cheaply? And before that, when will quantum advantage be readily observable among more than a relative handful of top organizations?

So, three stages: something/anything/somewhere, more than a couple of top organizations, and everyone/everywhere/easily/cheaply.

IBM’s roadmap fails to give us two things: their estimates of when these three stages will occur, and how their roadmap dovetails with and enables each of these three stages. No… “clear path”!

Quantum advantage is not a one-size-fits-all proposition, it will not be achieved in a singular moment in time.

Also, it won’t be a matter of the first to claim quantum advantage, but the first to have their claim fully validated.

Even then, the question arises as to what degree or level of quantum advantage they have achieved. I want to see a truly substantial advantage, not merely a modest advantage.

Personally, I’d suggest waiting to see three to five validated claims before celebrating that quantum advantage has indeed really been achieved.

It seems odd that only two machines would be needed for the full leap to full FTQC

It seems odd that only two machines would be needed for the full leap to full FTQC. That doesn’t make sense to me for such a new, very complex, and very risky technology, even given that there will be four innovation systems before that to test out components for FTQC. I’d recommend at least four if not eight jumps, not two: maybe a pair of jumps each year, one for development and one as a technology stretch for innovation, and make the 2028 Starling a crossover on both innovation and development for advanced evaluation.

  1. 2028: Starling 40–64 logical qubits for early familiarization and feedback, and 100–160 logical qubits for further validation and to demonstrate modest scaling. Crossover for both innovation and development.
  2. 2029: 200 logical qubit Starling. Full development.
  3. 2030: 500 logical qubit Super Starling to enable a lot of Nighthawk circuits using hundreds of qubits.
  4. 2031: 1,000 logical qubit Maximum Starling to enable virtually all Nighthawk circuits.
  5. 2032: 1,500 logical qubit Maximum Starling II, and 2,000-qubit Blue Jay. Crossover for both innovation and development.
  6. 2033: 4,000 logical qubit Super Blue Jay.
  7. 2034: 7,500 logical qubit Super Blue Jay II.
  8. 2035: 10,000 logical qubit Maximum Blue Jay.

That would make a lot more sense to me. Yeah, it would be more expensive, but it would be more aggressive and would put more technology in the hands of users much more rapidly. Use multiple teams for both innovation and development, both to foster competition and to encourage greater technological leaps.

And this looks out ten full years, well beyond the nominal start of full FTQC.

Whether we will finally be able to start running Shor’s factoring algorithm, albeit for rather small RSA keys, like for 50 or 100 bits with Starling, or maybe even 500 or 1,000 bits with Blue Jay

The question arises of whether we will finally be able to start running Shor’s factoring algorithm, albeit for rather small RSA keys (modulus, or moduli in the plural, is the correct term!), like 50 or 100 bits with Starling, or maybe even 500 or 1,000 bits with Blue Jay. Or is there still some additional capability needed? Or maybe Starling or Blue Jay will definitively prove that Shor’s was always an unrealizable fantasy for non-trivial keys.

Shouldn’t Starling and Blue Jay be a critical moment of truth for the whole concept of Shor’s algorithm being able to factor RSA moduli?

How far can Shor’s factoring algorithm go on Starling and Blue Jay, and what are the limiting factors, or is the number of (logical) qubits the only limiting factor?
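
As a rough back-of-the-envelope, and purely my own sketch rather than anything from IBM’s roadmap, the commonly cited Beauregard-style construction of Shor’s algorithm needs roughly 2n + 3 logical qubits for an n-bit modulus, with gate counts scaling roughly as O(n^3) (constants omitted):

```python
# Back-of-the-envelope, my own rough numbers: logical qubits ~ 2n + 3 (Beauregard),
# gate count scaling roughly O(n^3) with the constant factor omitted.
for n_bits in (50, 100, 500, 1000, 2048):
    logical_qubits = 2 * n_bits + 3
    rough_gates = n_bits ** 3  # order of magnitude only
    print(f"{n_bits:>5}-bit modulus: ~{logical_qubits} logical qubits, "
          f"~{rough_gates:.0e} gates (very rough)")
```

By logical qubit count alone, a 200-logical-qubit Starling tops out somewhere around a 100-bit modulus, and a 2,000-logical-qubit Blue Jay somewhere just under 1,000 bits, which is roughly consistent with the ranges I suggested above, before even considering gate counts, runtime, or the quality of the logical qubits.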

Shouldn’t IBM — and every other quantum computer vendor or researcher — specifically disclose the RSA moduli size that their fancy new machine will (might) be able to crack (factor), or admit that it is ZERO? Maybe even admit that it is a SOCIAL RISK for their device!

Am I the only one very interested in seeing this tested on Starling or whatever the first logical qubit FTQC machine is?

Do people have scalable versions of Shor’s algorithm sitting on the shelf, ready to just drop in and run (correctly and without any modification!)? Please comment or repost with the GitHub repository for such a #ShorsAlgorithmReadyToRun, including the application scaffolding for an API and web service offering factoring of RSA moduli as a service.

An amusing graphical image that captures the essence of my commentary on IBM Quantum’s 2025 roadmap

Suggested caption for this great graphic: Jack Krupansky comments on IBM’s quantum roadmap!

Jack Krupansky comments on IBM’s quantum roadmap

Credit: Courtesy of Brian Siegelwax

Sorry, guys!! My bad! You understand, right? I had to do it; it’s just in my nature!

Marshmallows, anyone?!

Quantum meets Burning Man!

People will need transition guidance for circuit repetitions (shots)

It occurs to me that people will need transition guidance for circuit repetitions (shots). Some people think of shots only as poor man’s error correction (which result gets the most hits), but shots have TWO distinct purposes: 1) statistical error correction, and 2) capturing the probability distribution for inherently probabilistic quantum computations and developing an expectation value.

So, even if the shots needed for statistical error correction can indeed go to zero under FTQC, applications still need to be cognizant of the probability distribution for the expectation value for their probabilistic quantum computation. Given your shot count under NISQ, what fraction of that needs to be retained for FTQC? A little guidance would help.

Hmmm… maybe the API should change even under NISQ to give two separate shot counts, one for error mitigation, and one for expectation value discovery, where both are added on NISQ, while the error mitigation shot count will get ignored under FTQC. That way, people can get prepared for FTQC now, with one less headache for the actual transition to FTQC when it becomes available.
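
Here is a minimal sketch of what that two-shot-count API might look like. The names (ShotBudget, mitigation_shots, sampling_shots) are hypothetical, purely for illustration, and are not part of Qiskit or any IBM API.

```python
# Hypothetical two-shot-count budget, separating the two purposes of shots.
from dataclasses import dataclass

@dataclass
class ShotBudget:
    mitigation_shots: int  # statistical error mitigation; should go to ~0 under FTQC
    sampling_shots: int    # sampling the probability distribution / expectation value

    def total(self, fault_tolerant: bool) -> int:
        # NISQ: both purposes add up; FTQC: only the distribution-sampling shots remain.
        if fault_tolerant:
            return self.sampling_shots
        return self.mitigation_shots + self.sampling_shots

budget = ShotBudget(mitigation_shots=8000, sampling_shots=2000)
print(budget.total(fault_tolerant=False))  # 10000 shots on today's NISQ hardware
print(budget.total(fault_tolerant=True))   # 2000 shots once QEC handles the errors
```

As a rough rule of thumb, the sampling shots needed to estimate an expectation value to precision epsilon scale on the order of 1/epsilon^2, largely independent of qubit count, although the variance of the observable and the number of measurement bases matter in practice.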

Also, people need guidance on how to calculate shots as their algorithm scales, so that as they leap from toy algorithms on very small numbers of qubits, they know how the shot count scales to 50, 75, 100, 125, 150, 175, and 200 qubits, and all the way up to 2000 qubits, and beyond. And, very importantly, is the scaling linear or, hopefully, sublinear, or is it polynomial, exponential, or even super-exponential, which would be a real problem?

This is an area of algorithm design which has gotten painfully little attention. Prepare to be shocked!

What is the maximum length of the inter-module coupler that replaces the old Flamingo-style l-coupler?

I am wondering about the new inter-module coupler that replaces the old Flamingo-style l-coupler:

  1. What is its maximum length? The old l-coupler was one meter, but a Starling or Blue Jay system with half a dozen or a dozen or so bays spans more than a few meters.

2. Is it a microwave waveguide or optical link with microwave-optical transducers?

3. Or are there more than one type of inter-module coupler?

4. Are there several modules per cryostat, or a single module per cryostat?

5. Is there a direct connection between the two most distant cryostats, or are they daisy-chained between adjacent cryostats in each bay so that each physical coupler only needs to go a single meter?

6. Is there an identical or physically distinct coupler for connections between chips within a cryostat and between cryostats in the same bay, and between bays of cryostats?

  7. And what exactly is the path: a short horizontal link of less than a meter, or up to the top of the cryostat (maybe most of a meter itself), horizontally to an adjacent cryostat, and down to the bottom of that other cryostat (also the better part of a meter), so that the total length from module to module — chip to chip — between two physically adjacent cryostats is actually more than a meter?

  8. Are the universal adapters a form of coupler, maybe even a replacement for the old m-coupler of Crossbill? What is their maximum length, and do they directly connect the most distant cryostats, or do they simply connect physically adjacent modules, or at least the modules within a cryostat or the cryostats within a bay, or are they daisy-chained between adjacent cryostats or adjacent bays so that each universal adapter only needs to span a single meter, maybe a lot less, or maybe a lot more?

And maybe IBM could disclose the answer to the question of what happened to the old m-coupler.

IBM needs to disclose more of these architectural details. A detailed and fully-labeled block diagram is needed.

Disconcerting misrepresentation of what a logical qubit is

The IBM press release for the roadmap says “A logical qubit is a unit of an error-corrected quantum computer tasked with storing one qubit’s worth of quantum information.” This is the fatal flaw in quantum error correction (QEC): a qubit is not information; as with classical bits, a qubit is a device, not the information itself.

The quantum state is the information, and even then, it is actually a representation of the information, a physical state, not the information itself, which is a logical abstraction.

Far worse, n entangled logical qubits can represent up to 2^n logical quantum states. So, the notion of “one qubit’s worth of quantum information” is flawed, false. n entangled logical qubits hold up to 2^n logical quantum states’ worth of quantum information: up to 2^n complex probability amplitudes (2 x 2^n real parameters), each of which might be in error.
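
A tiny numerical illustration of that counting argument, using plain NumPy (nothing IBM-specific):

```python
# An n-qubit register is described by 2**n complex amplitudes, not one "unit" per qubit.
import numpy as np

n = 10
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0          # the |00...0> basis state
print(state.size)       # 1024 complex amplitudes
print(2 * state.size)   # 2048 real parameters, each of which could drift into error
```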

The whole foundational premise of QEC is fatally, horribly flawed.

Real errors, the ones that really matter, won’t be mere bit and phase flips of isolated qubits, but subtle variations in the probability amplitudes of entangled logical states. The kind that don’t show up in small, toy algorithms, which are all we have now.

Simulation algorithms won’t care as much about that, but it will be fatal for complex analytical calculations that require precision, not fuzzy approximations.

It reminds me of the fable of the Emperor’s New Clothes. Sigh.

We all get to watch this trainwreck play out. It won’t be pretty, but they’ll say that it was “years in the making”!

Buckle up!

A handful of random points before I get to any remaining major issues

Here are a handful of random points before I get to any remaining major issues.

  1. What fraction of the promises for quantum computing will Blue Jay handle at 2000 qubits and with what quantum advantage?
  2. IBM has no credible story for simulators and analysis tools to maximize the chance that your quantum algorithm will work correctly the first time when you move it to real IBM hardware.
  3. No guidance is given for the relative performance benefits for simulation versus analytic computation. Is IBM committing to benefit both equally or favor one over the other? At present, IBM appears to favor simulation algorithms with fuzzy approximations rather than exact results for analytical computations.
  4. What is the logical gate execution time versus physical gate execution time? How many physical steps per logical gate? Although the paper does have some sketchy information about it, there needs to be some upfront, ballpark, plain language summary discussion of how long a given circuit will take to run on Starling and Blue Jay relative to how long that same circuit takes to run on Nighthawk. Granted, even if a logical circuit takes significantly longer to run, the shots or repetitions used to mitigate errors on NISQ will not be needed. Discuss, in plain language.
  5. Roughly, what will logical CLOPS be versus physical CLOPS? The same, better or significantly worse?
  6. General confusion as to whether users will see any functional or performance benefit from c-couplers, or whether it will all be under the hood and used exclusively for the error correction logic or for logical gate execution.
  7. Need general discussion on whether or not users still need to know about SWAP networks and their performance impact. Do users need to even think about qubit connectivity, or not?
  8. Do magic states work with dynamic circuits — since magic state distillation requires analysis of the full circuit?
  9. Will dynamic circuits still be supported — and work the same as on Nighthawk, as-is, unchanged on FTQC?

A disconcerting misrepresentation of classical bits, transistors, and classical error correction codes

I read this disconcerting misrepresentation of classical bits, transistors, and classical error correction codes:

“If we have three physical transistors and want to encode one binary digit’s worth of information into them, then we could represent 0 as 000, and we could represent 1 as 111. We can define correction as majority voting — so even if one transistor errors, the encoded data isn’t corrupted.”

No, a single transistor can’t be used to encode a single bit.

Multiple transistors are needed to provide a memory for a bit of information. A standard SRAM cell uses six transistors to store even a single bit, and some designs use more. And that’s without any correction.

A classical flip-flop used to store and operate on a bit in a register uses even more transistors, but none for error correction.

DRAM memory does indeed use a single transistor per bit, but not for storing the bit. The actual bit is stored in a capacitor. The transistor is simply used to read and write the bit contained in the capacitor.

That’s all before any error correction…

Unlike qubits, where dozens of physical qubits are needed for each logical qubit, classical error detection and correction codes such as ECC are SUBLINEAR, needing fewer check bits than actual data bits — a LOT fewer. For example, ECC for 32 bits of data needs only seven check bits, and only eight check bits for 64 data bits. These basic facts are a big part of why I am so antagonistic towards QEC error correction architectures. Offer a sublinear QEC code, and MAYBE then we can talk!
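
To make the sublinearity concrete, here is a quick sketch that computes the SEC-DED (single-error-correcting, double-error-detecting) Hamming check-bit count for various data widths; the overhead grows only logarithmically with the data width:

```python
# SEC-DED Hamming overhead: r + 1 check bits, where r is the smallest integer
# with 2**r >= data_bits + r + 1 (the extra bit provides double-error detection).
def secded_check_bits(data_bits: int) -> int:
    r = 0
    while 2**r < data_bits + r + 1:
        r += 1
    return r + 1

for m in (8, 16, 32, 64, 128, 256):
    print(m, "data bits ->", secded_check_bits(m), "check bits")
# 32 -> 7 and 64 -> 8, matching the figures above; 256 data bits need only 10.
```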

So, NO, you would never use three transistors as an error correction scheme for classical bits!

I’m actually surprised that the technical authors of this blog post were so unaware of these facts. I’m sure that IBM has plenty of classical computer engineers on staff who do know all of these facts. An IBMer even invented DRAM! It makes me suspect that they actually didn’t do the writing themselves.

What are c-couplers now really all about, what are they good for, and who can use them?

The question arises as to what c-couplers are really all about, what they are good for, and who can use them.

I know what c-couplers were in the previous architecture from 2022 — “on-chip non-local couplers”, but what roles do they now play in the new 2025 roadmap? There was a large amount of vagueness in IBM’s 2022 architecture paper, so all of those questions and issues remain, somewhat reflected below.

First, some of the upcoming connectivity enhancements don’t appear to require c-couplers:

  1. 4-way connectivity for Nighthawk. Doesn’t appear to use c-couplers.
  2. 6-way connectivity for Loon. Doesn’t appear to use c-couplers.

For c-couplers, specifically:

  1. c-coupler demo for Loon. What is it and how does it differ from the “Long” c-coupler of Kookaburra?
  2. “Long” c-coupler for Kookaburra. Does it provide full connectivity for all qubit pairs, or just for selected pairs needed for the gross code and the LPU?
  3. Are c-couplers strictly intra-module, or is there inter-module support? A full plain language description for c-couplers would make this clear.
  4. Does the user see any functional benefit from c-couplers other than it’s needed for error correction?
  5. Will SWAP networks to simulate greater qubit connectivity be ancient history for FTQC with c-couplers or will they still be required?
  6. Will the Loon c-coupler be retrofitted to the 7.5K Nighthawk?
  7. Oddly, c-couplers are mentioned in the roadmap blog post, but not in the “gross” paper. What’s really going on here?!!

What physical qubit coherence time is needed to make logical qubits function as a long-term quantum memory for Starling and Blue Jay, and beyond?

Finally nearing the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), the question arises as to what physical qubit coherence time is needed to make logical qubits function as a long-term quantum memory for Starling and Blue Jay, and beyond.

  1. In theory, the physical qubit coherence time can be much shorter, since a physical qubit only needs to survive until the next stabilization of the logical qubits rather than through a long and deep quantum circuit.
  2. But you don’t want it to be too short since any decay makes logical qubit stabilization harder.
  3. Or, maybe it should be even much longer, so that the decay rate is significantly reduced and fidelity of physical qubits kept higher for longer, reducing the effort needed to stabilize qubits for a long-term quantum memory.
  4. In any case, we should be able to calculate or speculate how long it needs to be before the coherence level has decayed by some tiny fraction, comparable to the threshold error rate.
  5. What is the minimum coherence time that still enables Starling and Blue Jay to meet their FTQC goals?
  6. What is the maximum coherence time that still delivers further incremental benefits to users before diminishing returns kick in strongly, or there are finally no detectable further returns?
  7. And what is the nominal sweet spot that the designers of Starling and Blue Jay should aim for to achieve optimal return on effort?
  8. And what tolerance or variance from that sweet spot can be accepted while still maximizing the chances of meeting the performance goals of Starling and Blue Jay in the most economical and reliable manner?

In short, what should the physical qubit coherence time goal be to enable Starling and Blue Jay to meet their FTQC goals?! And how does that compare to Heron and Nighthawk?
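
As a rough illustration of the kind of calculation item 4 above asks for, here is a minimal back-of-the-envelope sketch. The cycle time, candidate coherence times, and per-cycle error budget are my own assumed values, not IBM’s numbers; it simply compares the idling error a physical qubit accumulates per stabilization cycle, roughly 1 - exp(-t_cycle / T), against a ballpark budget.

```python
# Back-of-the-envelope with assumed values (not IBM's): idling error per QEC cycle.
import math

t_cycle = 1e-6     # assumed ~1 microsecond per syndrome-extraction cycle
budget = 1e-3      # assumed per-cycle error budget, well below a ~1% threshold
for T in (100e-6, 300e-6, 1e-3, 3e-3):  # candidate coherence times in seconds
    p_idle = 1 - math.exp(-t_cycle / T)
    verdict = "within" if p_idle <= budget else "exceeds"
    print(f"T = {T * 1e6:6.0f} us -> idling error per cycle ~ {p_idle:.1e} ({verdict} budget)")
```

The real calculation would also have to fold in gate and measurement errors, leakage, and the actual threshold of IBM’s codes, but even this crude ratio suggests why coherence times well beyond the bare cycle time are still valuable.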

Do people really know what they will or could do with 2,000 logical qubits in eight years, 2033, when Blue Jay becomes available?

One step closer to the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), the question arises as to whether people really know what they will or could do with 2,000 logical qubits in eight years, in 2033, when Blue Jay becomes available.

I thought of running this as a poll, comparable to the poll I already have running which asks “How much of your work on NISQ machines before FTQC do you expect will be fairly directly applicable to production-scale on FTQC machines?”, but the focus of this post is a bit different: it is focused on Blue Jay and the general quantity of 2,000 logical qubits. For this post, I couldn’t care less whether you were able to reuse any of your NISQ work. And I am more interested in nuance and details, not a simple four-way choice.

Some issues that I am curious about, which people can comment on, if they choose, or repost with more details:

  1. Do you have scalable algorithms all coded and ready to go, ready to just drop in and plug-and-play on Blue Jay as soon as it becomes available? Where’s the GitHub repository?
  2. Do you have one or more clearly-defined use cases that are a decent match for Blue Jay?
  3. Do you need a lot more than 2,000 qubits? How much?
  4. Is 2,000 qubits overkill?
  5. Is 200 qubits on Starling not enough, but 2,000 on Blue Jay is overkill, although you’ll take it even if it does cost an arm and a leg?
  6. If 2,000 is overkill, how many would be enough, 250, 375, 500, 750, 1,000, 1,250, 1,500, 1,750?
  7. If you really needed 2,500 or even 2,250 qubits, how confident are you that you could or couldn’t come up with some clever tricks to get that down to running in 2,000 qubits?
  8. Do you really need a billion gates? How many do you need: 25K, 50K, 100K, 250K, 500K, 750K, 1M, 2.5M, 5M, 10M, 20–25M, 50M, 100M, 250M, 500M?
  9. Are you expecting to have full any-to-any (all-to-all) logical qubit connectivity, or expecting to continue to rely on SWAP networks?
  10. How much of your work on NISQ machines or Starling do you expect will be fairly directly applicable to Blue Jay?

IBM is misleading when they say “we have successfully delivered on each of our milestones”

Closing in on the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), their blog post says “we have successfully delivered on each of our milestones”, but that’s not quite true.

  1. Some milestones they didn’t deliver at all; others they delivered technically, but not in a way that was useful to anyone in the real world; or they delivered the labels on the roadmap but not the substance; or they missed specific technical milestones that they had talked about elsewhere.
  2. They claimed they would hit Quantum Volume (QV) 1024, but didn’t, and worse, abandoned that metric entirely, even though they had publicly claimed they would be doubling it each year.
  3. They claimed they would hit three nines of two-qubit gate fidelity a couple of years ago, but didn’t.
  4. They delivered Osprey, but it was a dud, delivered months late, and then withdrawn barely five months later, with no evidence that anybody did anything useful with it.
  5. They originally had Condor as a commitment on the roadmap, then switched it to innovation, showed a chip and some slides, but did not make it publicly available, unlike the much more successful Heron which they did make publicly available even though it was on only the innovation roadmap that year.
  6. Most recently, Heron was committed as a multiprocessor, initially as three processors connected classically, but there’s no evidence of delivery of Heron as a three-chip machine, no evidence of anyone using it that way. Milestone… not met.
  7. Oddly, they currently show Heron as a 133-qubit processor, which was true initially, but has now been replaced with a 156-qubit processor, which probably should have had a different name. So, now, there are two different Herons. Confusing to say the least — and not explicitly indicated on the roadmap.

The real bottom line is that after IBM makes this bold but misleading claim, they use it as a predicate, as evidence, to assert that “Based on that past success, we feel confident in our continued progress.” So, they may feel confident, but should anybody else?

Something is missing: circuit cutting and knitting

Another step closer to the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), I noticed something missing: circuit cutting and knitting.

IBM had touted it heavily and it was committed for 2025 (Innovation) and 2026 (Development). Now? Poof! Yeah, it’s now completely gone from the new roadmap. Vanished, without a trace! Not even a faint apology or a lame… Never Mind!

Another IBM quantum commitment/milestone… not met.

In truth, although it sure sounded like a great idea on paper, it had too many caveats and a lot of complexity, so it’s no big surprise to me that they dropped it. I had zero expectation that it was actually going to work out well even for the small fraction of the user base which might fit all of the onerous caveats. So, Good Riddance!

At this stage, anything that simplifies and streamlines the roadmap is a good thing.

Shouldn’t IBM be providing the community with a list of changes, additions, and subtractions — rationale for each — on each update of their quantum computing roadmap? You’d think! But, no, they… don’t.

I’m still quite concerned that IBM (and others!) are overtooling with a Tower of Babel of tools, all to compensate for hardware limitations and a crappy programming model. Fix The Damn Qubits! Give Us A Decent Programming Model!

Whether 200 or 2,000 logical qubits is really large scale

Yet another step closer to the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), the question arises of whether 200 or 2,000 logical qubits is really large scale.

Maybe. Maybe not. But based on what criteria in terms of what scale of real-world problems it can tackle? Is 2,000 logical qubits enough to tackle really interesting real-world problems? Maybe, some.

Or, maybe is it overkill relative to solutions that will actually work on a real quantum computer?

Plenty of questions arise. Answers? Not so much.

Actually, the subheadline of the blog post says “IBM lays out a clear, rigorous, comprehensive framework for realizing a large-scale, fault-tolerant quantum computer by 2029”, and since Starling is the machine targeted for 2029 and Starling has only 200 (logical) qubits, that suggests that IBM itself considers 200 qubits to be “large-scale”. Really?!

At least with NISQ, even wimpy NISQ, 200 qubits has traditionally been referred to as intermediate scale (by definition, 50 to hundreds of qubits).

Maybe IBM is simply referring to the count of physical qubits or the physical mechanical size of the quantum computer system. Oh… maybe it’s all just… hype… never mind!!

But seriously, even I used to consider 1,000 qubits to be large scale, but that was all relative to where we were years ago, barely a few dozen qubits, even on the best of days.

Now, maybe even a few thousand qubits could/should be considered… the new intermediate scale.

But the bottom line is that we need to get more real about not raw qubit counts but the degree of application complexity that can be handled relative to actual, practical, real-world problems.

And this doesn’t let IBM off the hook for referring to 200-qubit Starling as large scale.

Or, once again, maybe the authors of the blog post didn’t write this stuff themselves. Didn’t they review it for technical accuracy? Maybe not! I mean, does even IBM allow physicists to write (or review) marketing puff pieces?!!

Enough with all of the discussion of error correction for individual logical qubits; the truth is that error correction should be for the quantum state itself

As I get almost to the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), I say enough with all of the discussion of error correction for individual logical qubits; the truth is that error correction should be for the quantum state itself, not the individual (logical) qubits, which are really just containers storing the quantum state, or in most cases, a fraction of the quantum state.

To wit, for n unentangled qubits, logical or physical, there are 2 x n amplitudes (two per qubit), while for n fully-entangled qubits there are up to 2^n amplitudes to resolve for any errors. A big difference!

What problem is FTQC/QEC really solving?

Besides basic quantum memory, which is more about extending coherence time than about errors, per se.

For all of the narrative about quantum error correction, how often do the papers, articles, and social media posts discuss error correction for the probability amplitudes and phases of 2^n entangled quantum states? Like, none, never!

Or maybe I should ask what FRACTION of the total problem of errors of all sorts are they addressing, and with such incredible complexity?

Indeed, how can the cure not end up being worse than the disease!

My deeper question is how tiny a probability amplitude or phase we can realistically expect any real, especially large-scale, quantum computer to resolve, and what that means with regard to error correction of such probability amplitudes and phases. I’m just not seeing ANY real discussion of these critical issues that will determine, at a fundamental technical level, whether large-scale quantum computers will succeed or fail. Sure, the math and models work fine, on paper, but does the underlying physical phenomenon actually support the math and the model at these extreme levels of precision? I remain unconvinced!

Remember, real (and complex!) numbers are not real — there is no physical phenomenon with detail that can be resolved with such near-infinite precision. It’s all an illusion. Indeed, what is the precision of your qubits?

Revisiting the gate limits in the roadmap (5K Heron, 5K Nighthawk, 7.5K Nighthawk, et al) in terms of practical quantum circuits, particularly what they mean for analytical computation as opposed to simulation

Now in the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), I want to revisit the gate limits in the roadmap (5K Heron, 5K Nighthawk, 7.5K Nighthawk, et al.) in terms of practical quantum circuits, particularly what they mean for analytical computations as opposed to simulations, which are more tolerant of, or even exploit, a bit of noisiness.

IBM’s old Quantum Volume (QV) metric was actually somewhat useful, at least for analytical computations.

IBM likely dropped this metric as larger counts of qubits could not be successfully exploited due to weak connectivity, the so-called heavy hex layout. This required SWAP networks to simulate direct connectivity, dramatically boosting gate count, which exposed the core circuit to a higher error rate. And this indeed put limitations on using these chips for analytical computations, which inherently require significantly greater connectivity, well beyond heavy hex.

I suspect that IBM realized that simulation was a better target for applications since it physically relies on greater locality, so-called local realism, which is much less demanding of qubit connectivity. And quantum physics is probabilistic, so a little bit of noise is actually a feature rather than a bug.

So, although these published gate limits may make sense for simulation applications, they are likely very misleading for analytical computations.

Quantum Volume is likely a much better metric for analytical computation that requires greater qubit connectivity.

Even though IBM does not publish QV targets for upcoming chips, we could roughly extrapolate what they might be. The highest published QV was 512 for the 3.5K 133-qubit Heron. log2(512) is 9, so 9 qubits is a good estimate of how wide a square circuit could be executed with a reasonable chance of yielding semi-decent results. We could extrapolate from 512 for the 3.5K Heron to 1024 for the 5K Heron, maybe 2048 for the 5K Nighthawk with its improved square connectivity, and maybe 4096 for the 7.5K Nighthawk, implying maximum circuit widths of 10, 11, and 12 qubits, respectively. And maybe add another 1 or 2 qubits for Nighthawk, given its superior connectivity.
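
Here is that extrapolation in code form. To be clear, every QV figure below other than Heron’s published 512 is my own speculation, not an IBM number; Quantum Volume implies a square circuit of width (and depth) log2(QV) that runs with reasonable fidelity.

```python
# Speculative QV extrapolation (only the 512 figure is published by IBM).
import math

speculative_qv = {
    "3.5K Heron (published)": 512,
    "5K Heron (guess)": 1024,
    "5K Nighthawk (guess)": 2048,
    "7.5K Nighthawk (guess)": 4096,
}
for chip, qv in speculative_qv.items():
    width = int(math.log2(qv))
    print(f"{chip}: QV {qv} -> ~{width}-qubit square circuit")
```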

But, half of this analysis is mere speculation. IBM leaves so many questions unanswered, with so little disclosure and so little transparency.

It really bugs me to see the headline of the blog post proclaiming that this is a “clear path” when it is anything but clear. It’s as clear as… mud!

Finally completing the endgame of my journey into the IBM Quantum 2025 roadmap for fault-tolerant quantum computing (FTQC/QEC), I’ll close out my posts on the topic by saying that it really bugs me to see the headline of the blog post proclaiming that this is a “clear path” when it is anything but clear. It’s as clear as… mud! Just saying.

Once again, I am shocked that the otherwise-esteemed authors of the blog post would stoop to such hype and mere marketing babble.

Alternatively, maybe they didn’t write the headline at all, but did sign off on it, which is… just as bad. Alternatively, maybe they had no say at all about the headline, which is then not their fault.

Maybe this is yet another example of how mindless IBM can be sometimes, alternating between flashes of technical and business excellence and even brilliance and extended stretches of bureaucratic mindlessness and outright madness.

Okay, that’s it for me on the 2025 IBM Quantum roadmap.

Checkmate! C’est fini! Opus meum confectum est.

[Disclosure: I used an AI system, Microsoft Copilot, to get that last phrase, Latin for “My work is completed!”]

Of course I reserve the right to post additional commentary as thoughts surface due to further thinking of my own and exposure to commentary from others.

Is there anybody out there who ISN’T tired of hearing about the 2025 IBM Quantum roadmap?!

Roadmaps from other quantum computing vendors? Mostly vacuous, with far too little detail to dig into to be worthy of any significant attention, from me.

One last comment… I am considering rolling up all of my 2025 IBM Quantum roadmap posts into a single document and posting it on Medium; no fresh content, but all in one place for an easier read and easier searching, and easier to find if needed for future reference. If there’s any demand for it, that is.

What exactly is the long pole in the tent for advancing from Starling in 2029 to Blue Jay in 2033 that will take FOUR YEARS?!

It just keeps echoing in my head, the FOUR YEAR gap from Starling in 2029 until Blue Jay in 2033. What could take that long? What is the long pole in the tent?

In theory, Starling has all of the functional components needed by Blue Jay, and all that is needed is a little scaling. Almost literally, just take ten Starlings and lash them together with universal adapters and inter-module couplers. Right? So, what’s missing?

Some speculative possibilities:

  1. There are additional functional components that are missing from Starling. What those components might be is clear… as mud.
  2. The performance of Starling won’t be scalable, at least not by a factor of ten. IBM may just need four years’ worth of incremental performance improvements to get there, with incremental improvements to the same functional components as Starling. No new science or research, just more refinements to the engineering.
  3. Maybe Starling will only use the (single) gross code for error correction, and that significant hardware redesign will be needed to support the two-gross code needed to achieve the full performance and capacity objectives for Blue Jay.
  4. The performance of Starling is scalable, as it is, but that would be lower performance than IBM seeks for Blue Jay. IBM has not disclosed such a performance differential in their roadmap or papers.
  5. Starling is really just a prototype of the functional capabilities needed for Blue Jay, and must be (mostly) scrapped and completely designed and reimplemented to support the performance and capacity (scaling) required for Blue Jay. Starling may be enough to climb a modest mountain, but just isn’t enough to deal with the extra challenges of scaling Mt. Everest.

A fundamental question is how much of the research for Starling and Blue Jay has been completed and passes a robust peer review, and just needs experimental validation and mere engineering, or whether significant additional research is needed for Starling or Blue Jay.

So my headline question becomes whether the four years from 2029 to 2033 is mostly just mere engineering, or whether a substantial degree of research is needed as well.

And with no intermediate Innovation systems between Starling and Blue Jay on the roadmap!

This all looks very suspicious, to me!

The biggest question for me is whether there is simply one major functional component that needs four extra years of work — a proverbial long pole in the tent, or whether there is more than one component needing multiple years of work, or indeed whether ALL of the components will need to be reworked.

Only IBM can tell us for sure what the reasons are. Or, maybe they can’t; maybe even they don’t know with great clarity, and maybe won’t know until they actually test Starling and see how well it meets its design criteria.

Overall, Nighthawk is the brightest spot on the roadmap

Nighthawk will still have plenty of issues, but it is a clean next step up on the progression of physical qubit quantum computer systems from IBM.

Overall, for me, personally, Nighthawk is the brightest spot on IBM’s 2025 roadmap.

The mere fact that they are ditching the lame heavy-hex qubit topology for a full four-degree nearest-neighbor grid lattice (à la Google) is significant.

We still don’t know what the qubit or gate fidelity will be, or whether the new qubit topology might negatively impact it, but at least on the surface, Nighthawk looks to be a nice advance from Heron. The one caveat: anybody relying on the 29 additional qubits that the 156-qubit Heron has over the 127-qubit Eagle will be unpleasantly surprised.

Some final thoughts

Stepping back and giving the whole landscape a broader perspective, here are some final, summarizing thoughts about the 2025 update to the IBM Quantum roadmap:

  1. It’s too focused on simulation rather than analytical computation. Worse, it doesn’t even acknowledge that.
  2. It’s too focused on quantum error correction (QEC) and so-called fault-tolerant quantum computing (FTQC) and not enough on better physical qubits.
  3. QEC is horrifically overly-complex and complicated, gross overkill and gross overhead, even if it does work, which I remain skeptical of, to say the least.
  4. Even if one does passionately believe in QEC, it’s still too soon to commit so heavily to a singular, one-size-fits-all code. Granted, the discussion makes it clear that they are still looking at TWO codes. Much more research is needed. The science of QEC is still not solidly settled; it’s still much more than Just Engineering.
  5. No clarity as to what the final, lingering error rate will be for logical qubits. Another indication that much research remains.
  6. Even with all the complexity of QEC, the roadmap appears NOT to include endowing logical qubits with full, any-to-any (all-to-all) qubit connectivity, forcing a continued reliance on SWAP networks, which is just too much overhead for advanced analytical computation, even if it does functionally work somewhat better due to the lower error rate. As previously noted, IBM is still focused much more on simulation than on analytical computation, so this is not a real surprise, per se. The former has greater locality and is less demanding of non-local connectivity, while the latter has much greater nonlocality and demands much greater non-local connectivity.
  7. Fix The Damn Qubits! The hardware is broken and crappy. So, fix it, rather than trying so fantastically hard to mitigate its deficiencies at the firmware, software, and user level. Such a goal is not in evidence in this roadmap.
  8. Nighthawk is the one bright spot. Glad to see some improvement in the physical qubits, including a modest improvement in qubit connectivity, but with the emphasis on modest.
  9. But, even with Nighthawk, this is too little, too late on the qubit fidelity and connectivity front.
  10. Even Nighthawk seems to have too-limited a future. Further enhancements and evolution of qubit fidelity and connectivity are needed.
  11. IBM needs to have two separate roadmaps, one for the further evolution of NISQ (and what I call post-NISQ as qubit fidelity and connectivity advance to a low-noise regime, no longer as noisy as current NISQ) and a separate roadmap for FTQC.
  12. And maybe IBM should just bite the bullet and have two separate roadmaps for simulation and analytical computation. Sure, they can share some of the hardware, but any sharing is likely to be a compromise between the two divergent collections of needs and opportunities for optimizing the hardware.
  13. Still too much emphasis on excessive tooling, which I call overtooling, again, primarily to attempt to compensate for deficiencies in the hardware, and programming model. Fix.The.Damn.Hardware! Give.Us.Full.Connectivity! Give.Us.A.Decent.Programming.Model!
  14. No acknowledgement of the deficiencies of the current programming model, let alone an attempt to rectify them, other than lame attempts at mitigation with complex tooling rather than simply… Fixing.The.Damn.Qubits! (And driving towards full any-to-any (all-to-all) full connectivity.)
  15. We need the Trifecta: Better qubits. Better connectivity. Better programming model. That’s not in evidence in this roadmap.
  16. Ends too soon. Should cover a full ten years.
  17. Too few systems to represent a credible path through the scaling of QEC. As if it mattered, since it’s too dubious a venture anyway. It may indeed take three or four revisions of large-scale FTQC before a reasonably diverse range of use cases are fully enabled.
  18. Overall, Nighthawk is the brightest spot on the roadmap.
