Preliminary Thoughts on the IBM 127-qubit Eagle Quantum Computer

Jack Krupansky
49 min read · Dec 15, 2021


I have mixed feelings about IBM’s announcement of its 127-qubit Eagle quantum computer. Yes, it’s a significant engineering achievement, but it doesn’t actually offer any significant benefit to typical quantum algorithm designers or quantum application developers since that engineering is under the hood where they won’t see it, even for performance. The dramatic increase in qubit count isn’t generally functionally useful for most typical users at present, particularly due to the lack of any significant improvement in qubit fidelity or connectivity. This informal paper will explore that disappointing dichotomy.

Granted, there may be some niche use cases where Eagle can be of significant advantage — but I don’t know of any at present. Perhaps something where noisy qubits are not only tolerable but a feature. Most users, though, desperately need higher qubit fidelity rather than more noisy qubits.

What future revisions of Eagle will hold is of course unknown, but we can be hopeful.

My advice is to stick with Falcon if you’re not using more than 20 to 24 qubits at present, or better yet, use simulation until Eagle offers significantly better qubit fidelity. Being able to simulate up to 32 to 40 qubits with greater qubit fidelity (and connectivity) is more compelling than Eagle at this stage.

This is not an in-depth technical review or even intended to be a complete summary, but simply a number of thoughts and impressions that popped up for me as I reviewed the announcement and related material. I’m coming at this from the perspective of a technologist rather than a customer or user, so I examine the technology in general rather than with particular applications in mind. I focus on capabilities, limitations, and issues for quantum computing.

Despite my disappointments and reservations with the initial release of Eagle, it really is an amazing engineering achievement, and I look forward to future revisions of Eagle, much as revisions of Falcon have shown significant improvements.

Topics discussed by this paper:

  1. These are only very preliminary impressions based on very limited information, subject to change as more information becomes available
  2. Brief summary
  3. References — The Eagle has landed
  4. Quantum processor vs. quantum computer
  5. Hopes and disappointments
  6. Yes, it’s a significant engineering achievement
  7. Why didn’t reduced crosstalk boost qubit fidelity significantly?
  8. Apparently no significant upgrade to the basic core qubit technology
  9. Uneven progress by IBM’s own standard for progress
  10. Quantum Volume (QV) of 32 is rather disappointing — what can you do with 5 qubits?
  11. Misleading headline: “IBM Rolls Out A Game-Changing 127-Qubit Quantum Computer That Redefines Scale, Quality, And Speed”
  12. Is it true that Eagle can’t be simulated?
  13. Qubit coherence time and circuit depth are secondary priorities for now
  14. Availability?
  15. No hands-on access
  16. Need for a Principles of Operation and detailed technical specifications
  17. What happened to the other five qubits?
  18. Maybe an upgrade to Eagle?
  19. Caveat: All of my direct observations are based on Eagle revision r1 — upgrades could change things
  20. Mediocre measurement fidelity
  21. No dramatic improvement in qubit fidelity
  22. Meanwhile Falcon has been advancing nicely
  23. Preview of Osprey
  24. How exactly do you go about computing Quantum Volume (QV) with more than 50 qubits?
  25. What can you do with 127 (or 65) qubits that you can’t do with 23 qubits?
  26. The world’s most powerful quantum processor?
  27. No, Eagle is not able to offer any dramatic quantum advantage
  28. In short, Eagle offers no net benefit to most real users at present
  29. Twin priorities for the medium term are progress towards quantum Fourier transform and quantum phase estimation as well as progress towards quantum error correction and logical qubits
  30. Progress towards near-perfect qubits will help on both fronts, but Eagle hasn’t done so, yet
  31. Variational methods are a technical dead-end and unlikely to ever achieve any significant quantum advantage
  32. Run multiple circuits at the same time?
  33. Hopefully Osprey makes more significant progress on both qubit fidelity and fine granularity of phase
  34. Limited qubit connectivity is IBM’s greatest exposure
  35. Are superconducting transmon qubits a technical dead-end for dramatic quantum advantage due to severely limited connectivity? It sure seems that way!
  36. Never say never — I’m sure somebody can come up with a clever way to exploit a majority of Eagle’s qubits
  37. There may be some niche use cases where Eagle can be of significant advantage
  38. Where are all of the 40-qubit quantum algorithms?
  39. Can Eagle support 24 to 29-qubit algorithms?
  40. Can Eagle support 20 to 23-qubit algorithms?
  41. Can Eagle support 15 to 19-qubit algorithms?
  42. Clearly Eagle and IBM are still deep in the pre-commercialization stage of quantum computing, not yet ready to even begin commercialization
  43. The bottom line is that Eagle is still a research project, not close to a commercial product
  44. My advice is to stick with Falcon if you’re not using more than 20 to 24 qubits at present, or better yet, use simulation until Eagle offers significantly better qubit fidelity
  45. Will Eagle set a new world record for hype?
  46. Is Eagle a dud? It’s not THAT bad!
  47. Is Eagle a flop? Well, basically, yes
  48. No, Eagle is not positioned to enable a technical breakout for most users
  49. Did IBM jump the gun? Should they have waited another 3 to 6 or even 9 months? Maybe, maybe not
  50. What’s next for Eagle? Waiting for the r2 revision
  51. Will Eagle r4 hit 3.5 nines of qubit fidelity and support 32-qubit algorithms?
  52. Is Eagle close to offering us practical quantum computing? No, not really
  53. To end on a positive note, we should celebrate IBM’s engineering achievement with Eagle
  54. Summary and conclusions

These are only very preliminary impressions based on very limited information, subject to change as more information becomes available

IBM announced the Eagle only fairly recently and there has not been enough time to develop a significant knowledge base for this new quantum processor.

As such, there is a significant likelihood that some of the very preliminary comments contained in this informal paper will quickly become obsolete, superseded, or maybe simply irrelevant as more information about Eagle becomes available in the coming weeks and months.

Brief summary

The positives:

  1. Significant jump in qubit count to 127. Almost double the qubits of the previous top-end 65-qubit Hummingbird processor.
  2. Broke the 100-qubit barrier.
  3. Significant engineering improvements. At the chip level. Introduction of multi-level fabrication — increases density while reducing crosstalk. As the IBM press release puts it, “breakthrough packaging technology.”
  4. Progress on the path to more physical qubits to support quantum error correction (QEC) and logical qubits.

The negatives:

  1. No significant benefits to most typical near-term quantum algorithm designers or quantum application developers. All of the engineering is under the hood where most typical users won’t see it. Low qubit fidelity (no significant improvement from previous processors) precludes using more than 20 or so qubits in a single circuit, which can already be done with a 27-qubit Falcon, so the dramatic increase in qubit count isn’t generally functionally useful for most typical users, at present.
  2. No hint of any significant change to the basic core qubit technology. Despite the dramatic overall engineering redesign, there is no hint that the core qubit technology has changed. Presumably IBM would have touted that if it had been improved.
  3. No significant increase in qubit fidelity. Some 27-qubit Falcon processors are better.
  4. No hint of improvement in fine granularity of phase and probability amplitude. Needed for quantum Fourier transform (QFT) and quantum phase estimation (QPE), as well as for more complex algorithms utilizing quantum amplitude estimation (QAE). Needed for quantum computational chemistry, so no significant advance on this front.
  5. No hint of any significant improvement in measurement fidelity. Sorely needed.
  6. No improvement in qubit connectivity. Same topology. Low qubit fidelity limits use of SWAP networks to simulate connectivity.
  7. No significant increase in qubit coherence time. Many 27-qubit Falcon processors are better, some by a lot.
  8. No significant improvement in gate execution time. The minimum does seem to show significant improvement, but the average is not quite as good as ibm_hanoi (27-qubit Falcon), although somewhat better than ibmq_brooklyn (65-qubit Hummingbird).
  9. No significant increase in circuit depth. Follows qubit coherence time and gate execution time.
  10. No improvement in Quantum Volume (QV). Measured at only 32 as of December 8, 2021. Very disappointing. Worse than Falcon (64 and 128). Matches 65-qubit Hummingbird. I had hoped for 256.
  11. No significant progress in two of the three metrics for progress given by IBM. Scale increased, but no significant increase in quality (QV) or speed (CLOPS).
  12. No support for Qiskit Runtime. At least not initially, but I presume that will come, eventually.
  13. Unlikely to attain any substantial degree of quantum advantage. Due to limited qubit fidelity and limited connectivity.
  14. No documented attempt to implement quantum error correction (QEC) or logical qubits.
  15. Clearly Eagle and IBM are still deep in the pre-commercialization stage of quantum computing, not yet ready to even begin commercialization. Many questions and issues and much research remains. Not even close to commercialization.
  16. No roadmap for enhancements to Eagle. Other than Osprey and Condor being successors. But I want to know about r2, r3, r4, and r5.

Net benefits:

  1. Engineering achievement for IBM. Required to support higher qubit counts.
  2. Progress towards physical qubit count needed for quantum error correction (QEC) and logical qubits.

Unfortunately, that’s it.

Superficially, the announcement sounds impressive, but only delivers those two net benefits.

For those seeking to design quantum algorithms, Eagle offers nothing of any great significance over the 65-qubit Hummingbird and the 27-qubit Falcon.

Most algorithm designers should focus on simulation rather than running on real quantum hardware for the indefinite future.
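As a minimal sketch of what that advice looks like in practice, here is a modest entangling circuit run on Qiskit’s Aer simulator (Qiskit as of late 2021; the circuit itself is just an illustrative GHZ-style chain):

```python
# Minimal sketch: develop and run circuits on the Aer simulator rather
# than on real hardware. Sizes up to roughly 32 to 40 qubits are feasible.
from qiskit import QuantumCircuit, Aer, execute

n = 24                          # comfortably within simulator range
qc = QuantumCircuit(n)
qc.h(0)
for i in range(n - 1):
    qc.cx(i, i + 1)             # GHZ-style entangling chain
qc.measure_all()

backend = Aer.get_backend('aer_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                   # noiseless: only all-zeros and all-ones
```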

References — The Eagle has landed

The Eagle press release from IBM: https://newsroom.ibm.com/2021-11-16-IBM-Unveils-Breakthrough-127-Qubit-Quantum-Processor

The Eagle blog post from IBM:

Initial press coverage of Eagle by Reuters:

Initial press coverage of Eagle by ZDNet:

View Eagle availability and technical metrics on IBM Quantum Services in the cloud:

Quantum processor vs. quantum computer

Technically, Eagle is a quantum processor rather than a quantum computer per se.

The quantum processor is where all of the computation is performed. The actual chip. All of the rest of the hardware is the quantum computer system, or simply quantum computer, or as IBM refers to it, the quantum system.

Most of the quantum system is common, regardless of the actual quantum processor chip. So, the 127-qubit Eagle, the 65-qubit Hummingbird, and the 27-qubit Falcon all share the same overall quantum system, called the IBM Quantum System One. All of that hardware other than the processor chip is the same, regardless of which processor chip is used. (There is also the wiring and the electronics to drive it, but that is all the same as well: one set per qubit, or one per sequence of qubits for newer systems with serial readout.)

The upcoming 433-qubit Osprey and 1,121-qubit Condor will also share the same quantum system hardware, but it will be called IBM Quantum System Two, since it is significantly more sophisticated than what is needed for the smaller quantum systems.

All of that said, I personally will continue to refer to these as quantum computers — the 127-qubit Eagle quantum computer. Maybe that’s because I’m a software guy and it’s the functions under the hood which matter most, regardless of how it is all sliced, diced, and packaged.

Hopes and disappointments

We’ve all been waiting for the 127-qubit Eagle since IBM put out their quantum hardware roadmap in September 2020. It was light on technical detail, so expectations were unclear. It was easy to get our hopes up.

I had only two real expectations or hopes, especially due to all of the fancy engineering that the roadmap was touting:

  1. Qubit fidelity would be significantly improved.
  2. Qubit coherence time, gate execution time, and circuit depth would be significantly improved.

But, neither happened. Qubit fidelity of Eagle is only modestly better than the 65-qubit Hummingbird and some of the Falcons, but actually worse than some of the Falcons. Basically, they’re all in the general vicinity of two nines give or take a modest fraction of a nine. Ditto for coherence time and circuit depth relative to the Eagle’s predecessors.

But to be clear, IBM never promised an improvement in qubit fidelity, coherence time, gate execution time, or circuit depth.

In fact, part of my critique of their hardware roadmap was that it was very light on technical details and lacked specific milestones for all of the critical technical metrics.

I did have some hopes beyond those two real expectations:

  1. Improved connectivity. Not sure how since the overall qubit topology was not expected to be any different, and it wasn’t.
  2. Finer granularity of qubit phase. And probability amplitude.

Those hopes were dashed as well, although granularity of qubit phase and probability amplitude is unknown at present.

Yes, it’s a significant engineering achievement

Engineering details from the IBM Eagle blog post:

  1. Eagle broke the 100-qubit barrier with 127 qubits.
  2. Almost double the qubit count of the previous top-end 65-qubit Hummingbird processor.
  3. Introduction of multi-level chip fabrication. Increases qubit density while reducing crosstalk.

The four levels or planes of the Eagle chip:

  1. Qubit plane. The qubits themselves.
  2. Resonator plane. The resonators needed to connect and measure the qubits. The diagram says “Resonators for qubit readout wired through connectors. Measured shifts in the frequency of the resonator depend on the state of the qubit.”, which speaks to readout (measurement) but not to two-qubit gate execution and entanglement.
  3. Wiring plane. Connections to external control electronics.
  4. Interposer plane. “Leverages CMOS packaging techniques, including thru-substrate vias, to exploit the third dimension to electrically connect the qubits to the other planes and deliver the signals while protecting their coherence.” Not clear to me what that really means relative to what the wiring plane does — the diagram has a big empty white square!

Why didn’t reduced crosstalk boost qubit fidelity significantly?

A large part of the rationale of the whole multi-level redesign with a separate wiring plane seemed to be to reduce crosstalk — “Buried wiring layer connects to the other planes through superconducting thru-substrate vias, providing the flexibility to efficiently route signals to the qubit plane with low crosstalk.” I find it surprising that this reduced crosstalk didn’t boost qubit fidelity dramatically, or at least significantly.

I honestly don’t know whether IBM expected that reduced signal crosstalk would boost qubit fidelity. All I do know is that it doesn’t show up as a significant, let alone dramatic, improvement in the average CNOT error rate, which I had expected it would.

So this remains an open question.

Apparently no significant upgrade to the basic core qubit technology

So far, as far as I can tell from the limited material available, despite the dramatic overall engineering redesign, there is no hint that the core qubit technology has changed. Presumably IBM would have touted it if it had been improved.

This seems consistent with the absence of any significant improvement in qubit fidelity.

The engineering redesign seems to have focused on supporting a lot more qubits rather than enhancing each qubit.

Uneven progress by IBM’s own standard for progress

As the IBM Eagle press release notes:

  • IBM measures progress in quantum computing hardware through three performance attributes: Scale, Quality and Speed. Scale is measured in the number of qubits on a quantum processor and determines how large of a quantum circuit can be run. Quality is measured by Quantum Volume and describes how accurately quantum circuits run on a real quantum device. Speed is measured by CLOPS (Circuit Layer Operations Per Second), a metric IBM introduced in November 2021, and captures the feasibility of running real calculations composed of a large number of quantum circuits.
  • https://newsroom.ibm.com/2021-11-16-IBM-Unveils-Breakthrough-127-Qubit-Quantum-Processor

But other than the significant increase in qubit count, Eagle hasn’t shown any improvement in quality or speed.

In summary:

  1. Scale: Big leap.
  2. Quality: No significant improvement. Quantum Volume (QV) measured at only 32 as of December 8, 2021. Worse than Falcon (64 and 128). Matches 65-qubit Hummingbird.
  3. Speed: Unknown. CLOPS is still not reported as of the time this is written, December 10, 2021. Possibly worse since Qiskit Runtime is not yet supported.

Presumably the speed (CLOPS) will be roughly comparable to the other processors since it is more a function of the overall quantum system (IBM Quantum System One and Qiskit Runtime) than the processor chip itself.

Quantum Volume (QV) of 32 is rather disappointing — what can you do with 5 qubits?

With the 27-qubit Falcon hitting Quantum Volume (QV) of 64 and 128, I expected a lot more from Eagle. But the 127-qubit Eagle scored a QV of only 32, which is indeed rather disappointing. Actually, I shouldn’t have been so surprised since the 65-qubit Hummingbird only achieved a QV of 32 as well. Still, with the passage of a year, I really did expect more.

log2(32) is 5, meaning that algorithms can effectively assume only five high-quality qubits. Wow, Eagle has 127 qubits but you’re only supposed to use five of them?!?!
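For reference, the conversion is simply QV = 2^n, where n is the size of the largest square (n qubits by n layers) circuit that the machine runs acceptably:

```python
# Quantum Volume to effective qubit count: QV = 2**n for the largest
# n-qubit, depth-n model circuit that passes the heavy-output test.
import math

for qv in (32, 64, 128, 256):
    print(f"QV {qv} -> {int(math.log2(qv))} effective qubits")
```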

Sure, some clever algorithm designers will be able to make use of 127 noisy qubits, but IBM itself has set expectations that qubits are supposed to be high-quality.

Misleading headline: “IBM Rolls Out A Game-Changing 127-Qubit Quantum Computer That Redefines Scale, Quality, And Speed”

That’s the actual Forbes headline:

As I noted two sections ago, yes, IBM succeeded on scale, but they failed on both quality and speed:

  1. Quantum Volume (QV) of only 32. Compared to 64 and 128 for the 27-qubit Falcon.
  2. Speed not even reported. As of this writing, December 10, 2021, the IBM Quantum Services dashboard for the ibm_washington system does not report a metric value for CLOPS and says that Qiskit Runtime is not supported.

I presume that Qiskit Runtime will eventually be supported, but CLOPS seems more a function of the overall system rather than the quantum processor itself.

Game-Changing? Well, that remains to be seen, but not so far. It doesn’t change any games that I am aware of. And not with mediocre qubit fidelity and a QV of only 32, and very limited qubit connectivity. So, yes, for now, that part of the headline is misleading as well.

Is it true that Eagle can’t be simulated?

This is a red herring — a statement that is true but irrelevant or misleading.

The press release is technically correct:

  • ‘Eagle’ is the first IBM quantum processor whose scale makes it impossible for a classical computer to reliably simulate.
  • In fact, the number of classical bits necessary to represent a state on the 127-qubit processor exceeds the total number of atoms in the more than 7.5 billion people alive today.

That’s all technically true, but… it is also technically true that we don’t simulate the raw machine itself; we simulate quantum circuits as run on the machine, and it would make no sense to bother simulating a quantum circuit which could not run reliably on the machine in the first place due to low qubit fidelity, gate errors, or limited coherence time. In fact, there’s little need to simulate a quantum circuit significantly larger than what can successfully be run when calculating Quantum Volume (QV) — after all, quality matters.

Given the CNOT error rate currently reported for Eagle — roughly a little less than two nines, there’s no good reason to run a circuit larger than what can be run on a 27-qubit Falcon. In fact, because of their lower CNOT error rate, some Falcons can run circuits that Eagle cannot run correctly, reliably.

In fact, it would be a stretch to run 32 or 40-qubit quantum circuits on Eagle, and those are within the reach of current classical quantum simulators.
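To put those simulation limits in perspective, here is a back-of-envelope calculation of the memory needed for brute-force state-vector simulation, assuming 16 bytes per complex amplitude (real simulators add overhead, and alternative methods can do better for some circuits):

```python
# Memory for full state-vector simulation: 2**n amplitudes at 16 bytes
# each (complex128). This is why ~40 qubits is the practical ceiling.
for n in (27, 32, 40, 127):
    gib = 2**n * 16 / 2**30
    print(f"{n} qubits: {gib:.3g} GiB")
```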

Granted, there are likely some specialized niche cases which can be run successfully on Eagle but are beyond the current 40-qubit limit of current simulators, but they are unlikely to be representative of practical real-world problems for which these quantum computers are designed.

Qubit coherence time and circuit depth are secondary priorities for now

I would personally suggest that although qubit coherence time and circuit depth are important priorities for the longer term, they are not priorities right now and won’t be until:

  1. Qubit fidelity increases dramatically. Long circuits are rather useless if they are noisy.
  2. Connectivity increases dramatically. Longer and more complex circuits imply a significant degree of connectivity, which is not possible at present.

Availability?

Note: These were my initial observations as of November 27, 2021, but some changes have occurred, as I document in the section Caveat: All of my direct observations are based on Eagle revision r1 — upgrades could change things. In particular, I have recently seen the system online a number of times, although lately it has often been Online — Queue paused.

The announcement gave me the impression that Eagle was actually available for use now. In fact, I do see ibm_washington listed as one of the available quantum systems in the IBM Quantum Services dashboard. But as of November 27, 2021…

  1. System has been tagged as Exploratory.
  2. It has been offline every time I checked.
  3. The dashboard shows that it hasn’t been calibrated for a month now.
  4. No Quantum Volume (QV) is displayed on the dashboard. Initially, as of November 27, 2021, but added as of December 8, 2021 — but only QV of 32, which is worse than Falcon and no better than 65-qubit Hummingbird.
  5. No speed measurement (CLOPS) is displayed on the dashboard.

I’ve seen no other indication of availability or explanation as to why Eagle is not currently available or when it might become available.

Update as of 5 PM December 16, 2021: The ibm_washington 127-qubit Eagle quantum system is not even listed as one of the available quantum systems in the IBM Quantum Services dashboard. Maybe this is just a temporary change. We’ll see soon enough. I’ll check back tomorrow.

Update as of 10:38 AM December 20, 2021: The ibm_washington 127-qubit Eagle quantum system is back and online.

No hands-on access

All of my commentary here is based on information available online coupled with my own analysis of that information, and my own views. None of my commentary is based on any hands-on access to Eagle — nor am I interested in having any hands-on access. I am interested in gaining access to technical specifications, documentation, and benchmarking results, as well as customer and user reports and academic papers based on actual hands-on usage.

I suppose you could say that access to the ibm_washington dashboard constitutes at least a superficial form of hands-on access. That’s about as close to hands-on access as I am interested in going, although I would appreciate seeing a lot more technical metrics displayed.

But, given that this machine has just been announced, I don’t expect much more in the very near future.

That said, I would look forward to any technical or academic papers in the months ahead which evaluate or analyze Eagle and computations performed using Eagle, especially with regard to my preliminary conclusion that Eagle doesn’t offer algorithm designers much beyond what was already available with 65-qubit Hummingbird and 27-qubit Falcon.

Need for a Principles of Operation and detailed technical specifications

Although I have found quite a few interesting tidbits of information from the announcement, blog, and quantum services dashboard, that’s really only the tip of the iceberg. Detailed technical specifications are needed.

First and foremost, we need a Principles of Operation document which tells algorithm designers and application developers everything they need to know about Eagle to design, develop, and run quantum algorithms on Eagle.

I had previously outlined a framework for such a document, based in part on how IBM itself documented how to program their mainframe computers:

That framework also calls for a separate document, an Implementation specification, which would document technical details that an algorithm designer nominally wouldn’t or at least shouldn’t care about, including system architecture, implementation details, performance, etc. Nominally an algorithm designer or application developer shouldn’t need that level of detail, but that kind of insight can frequently be helpful to understand what’s really going on under the hood. It also helps people understand the significance of any limitations.

In short, both documents are needed for Eagle or any other quantum computer:

  1. Principles of Operation.
  2. Implementation specification.

What happened to the other five qubits?

127 seems like such an odd number of qubits. 128 would seem to have been a much more appropriate and rational number — at least superficially. Of course, you could ask the same thing about Hummingbird and Falcon — why 65 and 27 rather than the obvious 64 and 32?

The first thought that comes to mind is that maybe a few of the qubits are “bad” — but that wouldn’t explain why the Hummingbird has one more than an obvious count — 65 vs. 64.

The second thought that comes to mind is the intended topology — it’s not just a simple square or rectangular grid of qubits, but organized based on a so-called heavy-hexagonal qubit layout or as IBM also calls it a heavy-hex lattice.

Actually, there are two distinct topologies in play here:

  1. The physical qubit layout. Where the physical qubits are placed on the chip.
  2. The logical qubit layout. How the physical qubits are connected.

If you click on the ibm_washington system on the IBM Quantum Services dashboard you can see that the logical topology seems to resemble staggered bricks in a wall, each brick having 12 qubits but sharing qubits between the sides of adjacent bricks. Each brick has four corner qubits, three qubits in the middle of both the top and bottom sides, and one qubit in the middle of both the left and right sides. The corner qubits are shared with the qubits in the middle of the top and bottom sides of the row of bricks above and below each brick. But then there are a few extra qubits on the periphery of the wall, although there doesn’t appear to be any logic to the extras — Falcon has six extras while both Hummingbird and Eagle have two extras, one each at the lower left and upper right corners.

If you click down to the qubit layer (plane) of the chip diagram in the IBM Eagle blog post, you can see that Eagle has eleven rows of twelve qubits each (or vice versa depending on your perspective), on a clean rectangular grid. That’s the physical topology. Eleven times twelve is 132, which is 5 greater than the official qubit count.

Look closer and you can see lines connecting the qubits, but not in a clean, rectangular, nearest-neighbor manner. Visually there doesn’t seem to be any geometric method to this madness of lines, but if you carefully count the connected qubits, you come up with twelve qubits around the perimeter of an odd-shaped area. That’s the physical layout of each logical brick.

Look even closer and you can find the five unconnected qubits — one just two rows to the left of the bottom corner, and four just to the left of the top corner. 132 minus 5 gives you the official 127 qubit count.

If you have a good eye and patience you can find two qubits which have only a single connection to another qubit rather than the two or three connections that most qubits have. One is right at the bottom corner. The other is one in from the third from the top corner on the right side. These are the two extra, but connected qubits, just as Hummingbird has two extras.

So, the five extras are simply the qubits left over from the raw eleven by twelve grid after the wall of eighteen 12-qubit bricks plus their two extra qubits — totalling 127 qubits — have been mapped from their logical connections to the physical qubit layout.

Why eleven by twelve? My guess is that a clean twelve by twelve square would be 144 qubits, but not arranged in a way to permit adding another brick or row of bricks to the wall, so IBM’s chip designers simply left that entire row out of the chip fabrication (144 minus 12 is 132). It may then have been easier to leave the remaining five unused qubits in place (132 minus 5 is 127) rather than trying to remove them from within the eleventh and first rows.
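For what it’s worth, the qubit-count arithmetic read off the chip diagram is easy to check:

```python
# Qubit-count arithmetic from the Eagle chip diagram.
rows, cols = 11, 12
print(rows * cols)         # 132 sites on the physical grid
print(rows * cols - 5)     # 127 once the 5 unconnected sites are excluded
print(12 * 12 - 12)        # 132: a hypothesized 12x12 grid minus one row
```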

Maybe an upgrade to Eagle?

Maybe a revision r2 or r3 or r4 of Eagle in the coming months might overcome at least a few of the shortcomings I note in this paper, such as:

  1. Negligible improvement in qubit fidelity. Hopefully get much closer to a third nine.
  2. Negligible improvement in coherence time, gate execution time, and circuit depth.
  3. Some preliminary results attempting to implement quantum error correction (QEC). Even if only one or two logical qubits.

Unfortunately, I wouldn’t expect any progress on two of the most profound shortfalls of Eagle:

  1. No improvement in connectivity.
  2. No improvement in fine granularity of phase. Required for quantum Fourier transform (QFT) and quantum phase estimation (QPE).

Caveat: All of my direct observations are based on Eagle revision r1 — upgrades could change things

Many of my impressions of Eagle are based on documents produced by IBM (see the References section), but some are based on examining the dashboard for the ibm_washington quantum system as shown by IBM Quantum Services, which indicates that that system is utilizing version 0.0.1 of revision r1 of the Eagle quantum processor. Any of the numbers on the dashboard could change if IBM upgrades Eagle. In fact, I’m hoping things will change — for the better.

For reference here are the top level metrics from the dashboard as of the time of this writing on November 27, 2021:

  1. ibm_washington
  2. QV 32 — not shown
  3. CLOPS — not shown
  4. Status: Offline
  5. Total pending jobs: 0 jobs
  6. Processor type: Eagle r1
  7. Version: 0.0.1
  8. Basis gates: CX, ID, RZ, SX, X
  9. Avg. CNOT Error: 2.021e-2
  10. Avg. Readout Error: 8.822e-2
  11. Avg. T1: 74.28 us
  12. Avg. T2: 101.43 us
  13. Supports Qiskit Runtime: No
  14. Calibration data Last calibrated: a month ago
  15. Qubit: Frequency (GHz) Avg 5.065 min 4.785 max 5.297
  16. Qubit: T1 (us) Avg 74.28 min 16.54 max 123.11
  17. Qubit: T2 (us) Avg 101.43 min 8.58 max 228.56
  18. Qubit: Readout assignment error Avg 8.822e-2 min 7.000e-3 max 4.856e-1
  19. Connection: CNOT error Avg 2.021e-2 min 8.394e-3 max 3.580e-2
  20. Connection: Gate time (ns) Avg 322.198 min 88.889 max 1457.778

Update as of December 4, 2021:

  1. ibm_washington
  2. QV 32 — not shown
  3. CLOPS — not shown
  4. Status: Offline
  5. Total pending jobs: 0 jobs
  6. Processor type: Eagle r1
  7. Version: 0.1.0 — from 0.0.1
  8. Basis gates: CX, ID, RZ, SX, X
  9. Avg. CNOT Error: 2.157e-2 from 2.021e-2
  10. Avg. Readout Error: 2.582e-2 from 8.822e-2
  11. Avg. T1: 95.92 us from 74.28 us
  12. Avg. T2: 103.31 us from 101.43 us
  13. Supports Qiskit Runtime: No
  14. Calibration data Last calibrated: 16 hours ago
  15. Qubit: Frequency (GHz) Avg 5.064 min 4.774 max 5.291 from Avg 5.065 min 4.785 max 5.297
  16. Qubit: T1 (us) Avg 95.92 min 3.86 max 232.85 from Avg 74.28 min 16.54 max 123.11
  17. Qubit: T2 (us) Avg 103.31 min 5.16 max 222.36 from Avg 101.43 min 8.58 max 228.56
  18. Qubit: Readout assignment error Avg 2.582e-2 min 2.500e-3 max 2.760e-1 from Avg 8.822e-2 min 7.000e-3 max 4.856e-1
  19. Connection: CNOT error Avg 2.157e-2 min 5.178e-3 max 1.746e-1 from Avg 2.021e-2 min 8.394e-3 max 3.580e-2
  20. Connection: Gate time (ns) Avg 545.351 min 80 max 1187.556 from Avg 322.198 min 88.889 max 1457.778

Update as of December 8, 2021:

  1. ibm_washington
  2. QV 32 — for the first time that I noticed as of 3:54 PM ET 12/8/2021
  3. CLOPS — not shown
  4. Status: Online — for the first time that I noticed as of 3:54 PM ET 12/8/2021
  5. Total pending jobs: 0 jobs
  6. Processor type: Eagle r1
  7. Version: 0.1.0
  8. Basis gates: CX, ID, RZ, SX, X
  9. Avg. CNOT Error: 6.717e-1
  10. Avg. Readout Error: 2.347e-2
  11. Avg. T1: 95.9 us
  12. Avg. T2: 103.31 us
  13. Supports Qiskit Runtime: No
  14. Calibration data Last calibrated: 25 minutes ago
  15. Qubit: Frequency (GHz) Avg 5.064 min 4.774 max 5.291
  16. Qubit: T1 (us) Avg 95.9 min 1.27 max 232.85
  17. Qubit: T2 (us) Avg 103.31 min 5.16 max 222.36
  18. Qubit: Readout assignment error Avg 2.347e-2 min 3.000e-3 max 3.215e-1
  19. Connection: CNOT error Avg 6.717e-1 min 5.650e-3 max 1.000e+0
  20. Connection: Gate time (ns) Avg 507.864 min 80 max 1187.556

Final update before posting this paper, as of 5 PM December 14, 2021:

  1. ibm_washington
  2. QV 32 — no change from its initial value
  3. CLOPS — not shown
  4. Status: Online — Queue paused
  5. Total pending jobs: 0 jobs
  6. Processor type: Eagle r1
  7. Version: 1.1.0 from 0.1.0
  8. Basis gates: CX, ID, RZ, SX, X
  9. Avg. CNOT Error: 3.828e-2 from 6.717e-1
  10. Avg. Readout Error: 2.397e-2 from 2.347e-2
  11. Avg. T1: 94.57 us from 95.9 us
  12. Avg. T2: 102.64 us from 103.31 us
  13. Supports Qiskit Runtime: No
  14. Calibration data Last calibrated: 13 hours ago
  15. Qubit: Frequency (GHz) Avg 5.064 min 4.767 max 5.291
  16. Qubit: T1 (us) Avg 94.57 min 32.31 max 173.21
  17. Qubit: T2 (us) Avg 102.64 min 3.49 max 234.62
  18. Qubit: Readout assignment error Avg 2.397e-2 min 4.700e-3 max 2.470e-1
  19. Connection: CNOT error Avg 3.828e-2 min 5.656e-3 max 1.000e+0
  20. Connection: Gate time (ns) Avg 528.508 min 213.333 max 1187.556

Update as of 5 PM December 16, 2021: The ibm_washington 127-qubit Eagle quantum system is not even listed as one of the available quantum systems in the IBM Quantum Services dashboard. Maybe this is just a temporary change. We’ll see soon enough. I’ll check back tomorrow.

Update as of 10:38 AM December 20, 2021: The ibm_washington 127-qubit Eagle quantum system is back and online.

  1. ibm_washington
  2. QV 32 — no change from its initial value
  3. CLOPS 1.1K — shown for the first time
  4. Status: Online
  5. Total pending jobs: 0 jobs
  6. Processor type: Eagle r1
  7. Version: 1.1.0
  8. Basis gates: CX, ID, RZ, SX, X
  9. Avg. CNOT Error: 3.989e-2 from 3.828e-2
  10. Avg. Readout Error: 2.628e-2 from 2.397e-2
  11. Avg. T1: 97.74 us from 94.57 us
  12. Avg. T2: 99.2 us from 102.64 us
  13. Supports Qiskit Runtime: No — odd since CLOPS is now measured
  14. Calibration data Last calibrated: 18 minutes ago
  15. Qubit: Frequency (GHz) Avg 5.064 min 4.767 max 5.291 — unchanged
  16. Qubit: T1 (us) Avg 97.74 min 6.11 max 170.38 from Avg 94.57 min 32.31 max 173.21
  17. Qubit: T2 (us) Avg 99.2 min 1.98 max 243.12 from Avg 102.64 min 3.49 max 234.62
  18. Qubit: Readout assignment error Avg 2.628e-2 min 3.400e-3 max 3.909e-1 from Avg 2.397e-2 min 4.700e-3 max 2.470e-1
  19. Connection: CNOT error Avg 3.989e-2 min 5.498e-3 max 1.000e+0 from Avg 3.828e-2 min 5.656e-3 max 1.000e+0
  20. Connection: Gate time (ns) Avg 547.556 min 213.333 max 1187.556 from Avg 528.508 min 213.333 max 1187.556

I was curious about the CNOT error rate sometimes being 1.00 — in error 100% of the time. Well, if you select “Graph view” and then “CNOT Error”, you can see six (6) spikes up to 1.00 (100%):

  1. CNOT between qubits 8 and 9.
  2. CNOT between qubits 9 and 8.
  3. CNOT between qubits 113 and 114.
  4. CNOT between qubits 114 and 113.
  5. CNOT between qubits 114 and 115.
  6. CNOT between qubits 115 and 114.

I only just noticed that, so I can’t say whether these qubits always fail on CNOT or if other qubits sometimes fail.

Update as of 5:38 PM December 20, 2021: I noticed that for the latest calibration, “an hour ago”, the CNOT error rate spiked up to 5.820e-2 from 3.989e-2 as noted above in the morning. I checked the graph view and there were more qubit pairs having 1.0 error rates, with twelve (12) spikes where CNOT was failing 100% of the time — the same six pairs of qubits as earlier plus another six pairs of qubits:

  1. CNOT between qubits 2 and 3.
  2. CNOT between qubits 3 and 2.
  3. CNOT between qubits 8 and 9. Failed earlier.
  4. CNOT between qubits 9 and 8. Failed earlier.
  5. CNOT between qubits 20 and 21.
  6. CNOT between qubits 21 and 20.
  7. CNOT between qubits 21 and 22.
  8. CNOT between qubits 22 and 21.
  9. CNOT between qubits 113 and 114. Failed earlier.
  10. CNOT between qubits 114 and 113. Failed earlier.
  11. CNOT between qubits 114 and 115. Failed earlier.
  12. CNOT between qubits 115 and 114. Failed earlier.

Update as of 4:27 PM December 22, 2021:

  1. The system has been online every time I checked today.
  2. Calibrated within the past hour every time I checked.
  3. CNOT error rate has fallen to 1.738e-2 (0.01738, 1.738%) — the lowest I’ve seen so far. It’s been that low whenever I checked so far today, multiple times.
  4. There are no 1.0 (100%) CNOT error rate spikes. None that I have noticed all day.
  5. The highest CNOT error rate I have seen today was 0.1342 (13.42%), for CNOT between qubits 123 and 124.
  6. There were a fair number of qubit pairs (more than a dozen) with an error rate around 5–7%.
  7. Most qubit pairs had a CNOT error rate under 2%.
  8. Some qubit pairs (a minority) had CNOT error rates under 1%.
  9. Average readout (measurement) error rate was 2.826e-2 (0.02826, 2.826%) roughly in line with previous days.
  10. Oddly, CLOPS was down to 850 — all day.
  11. Has shown 1 job pending every time I’ve refreshed the display.
  12. No other notable changes.

Mediocre measurement fidelity

This isn’t a criticism unique to Eagle — it is shared by most quantum computers, at least most systems based on superconducting transmon qubits — but you may have noticed the mediocre measurement fidelity in the preceding section:

  • Avg. Readout Error: 2.347e-2

That’s 0.02347 or 97.653% reliability, which is somewhat less than two nines, 1.77 nines to be exact.

That means that if you measure a hundred qubits, two or three of them will be wrong, not because of the underlying qubit fidelity, but due to the nature of measurement of qubits.
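A quick back-of-envelope calculation makes the consequence concrete:

```python
# Impact of a 2.347e-2 average readout error on a 100-qubit measurement.
p = 2.347e-2
n = 100
print(n * p)               # ~2.3 expected wrong readouts per 100 qubits
print((1 - p) ** n)        # ~0.093 chance that all 100 readouts are correct
```

In other words, at this readout fidelity there is only about a 9% chance that a 100-qubit measurement comes back entirely correct.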

This is a little better than Hummingbird, and better than some of the Falcons, but some of the Falcons are significantly better, although always short of two nines.

No dramatic improvement in qubit fidelity

Two sections ago you can see the progression in qubit fidelity. Technically, it’s gate fidelity — or two-qubit gate fidelity (CNOT error) to be more precise — but that’s a decent proxy for overall qubit fidelity:

  1. Connection: CNOT error Avg 2.021e-2 min 8.394e-3 max 3.580e-2.
  2. Connection: CNOT error Avg 2.157e-2 min 5.178e-3 max 1.746e-1.
  3. Connection: CNOT error Avg 6.717e-1 min 5.650e-3 max 1.000e+0.
  4. Connection: CNOT error Avg 3.828e-2 min 5.656e-3 max 1.000e+0.

The third one seems to be more of an outlier anomaly, especially since it shows a maximum error rate of 100%. But the most recent reading is not so great either.

But the previous two averages, 2.021e-2 and 2.157e-2, are:

  1. Only modestly better than some of the 27-qubit Falcon averages.
  2. Worse than the rest of the Falcon averages.
  3. Not even two nines — only 1.8 nines.

In short, not only is qubit fidelity disappointing overall, but it’s no real improvement over Falcon — which is actually improving over itself as shown in the next section.

Meanwhile Falcon has been advancing nicely

Here are a pair of tweets from IBM quantum VP Jay Gambetta touting great recent strides by 27-qubit Falcon:

By large quantum system, I think he meant large quantum circuit.

And

So, already Falcon has advanced beyond where Eagle is starting, in terms of qubit fidelity — which is really the #1 concern at this stage.

The only reservation I would express is that the tweeted chart is for the Best error rate, not the average error rate.

Eyeballing the chart as best I can, it seems as if Falcon_r10 achieved a CNOT error rate of approximately 0.000825, which is 99.9175% reliability or three nines. Or more precisely 3.175 nines — a bit better than three nines.

For comparison, the best CNOT error rate for Eagle (ibm_washington) is 0.008394, which is 99.1606% reliability or two nines. Or more precisely 2.16 nines — well below the fidelity of Falcon_r10.

Preview of Osprey

We know from the IBM quantum hardware roadmap and more recent comments that Osprey is the next major quantum processor planned by IBM.

We know precisely three facts about Osprey:

  1. It will have 433 qubits.
  2. It requires the IBM Quantum System Two hardware infrastructure.
  3. It is expected sometime in 2022. Likely at the IBM Quantum Summit 2022, which I presume will once again occur in November as the 2021 Summit did.

What we don’t and won’t know until availability are these critical technical facts:

  1. Any improvements in qubit fidelity.
  2. Any improvements in coherence time, gate execution time, or circuit depth.
  3. Any improvements in fine granularity of phase or probability amplitude.
  4. Any improvements in measurement fidelity.
  5. Any improvements in connectivity. Unlikely since it would be a radical architectural change, not an evolutionary step.
  6. Any improvements in Quantum Volume (QV).

I would expect that the new IBM Quantum System Two hardware infrastructure should boost qubit fidelity significantly, but I had the same expectation with Eagle’s new chip design and that didn’t result in a significant boost in qubit fidelity.

In any case, we’ll just have to wait and see.

How exactly do you go about computing Quantum Volume (QV) with more than 50 qubits?

As the original IBM paper notes, Quantum Volume is valid only up to approximately 50 qubits:

Why limited to 50 qubits?

Calculation of Quantum Volume (QV) requires classical simulation of a quantum circuit which has been run on a real quantum computer to verify that it computes the correct results. That’s fine up to about 50 qubits since that is believed to be the upper limit for classical simulation of quantum circuits.

Actually, it’s somewhat less than 50 qubits. Google can simulate 40 qubits. IBM can simulate 32 qubits. That is of course subject to change as classical computing technology continues to evolve.

But those limits are not an issue at present since the highest measured Quantum Volume is 1024 by Honeywell with 10 qubits. IonQ has estimated QV of 4,000,000 for 22 qubits, but that was not confirmed with actual classical simulation. The best QV IBM has achieved is 128 for 7 qubits on Falcon.
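To make the protocol concrete, here is a minimal sketch of the heavy-output test at the heart of Quantum Volume, using Qiskit’s QuantumVolume model circuits and the Aer simulator. This is illustrative only; the real protocol averages over many random circuits and applies a statistical confidence threshold:

```python
# One round of the Quantum Volume heavy-output test (illustrative).
import numpy as np
from qiskit import Aer, execute
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

n = 5                                    # QV 32 corresponds to 5 qubits
qc = QuantumVolume(n, depth=n, seed=42)  # one random square model circuit

# Ideal simulation defines the "heavy" outputs (above-median probability)
probs = Statevector(qc).probabilities_dict()
median = np.median(list(probs.values()))
heavy = {s for s, p in probs.items() if p > median}

# Run with measurements; on real hardware, noise pushes the heavy-output
# probability down from ~0.85 toward 0.5
backend = Aer.get_backend('aer_simulator')
counts = execute(qc.measure_all(inplace=False), backend,
                 shots=2000).result().get_counts()
heavy_prob = sum(c for s, c in counts.items() if s in heavy) / 2000
print(f"heavy-output probability: {heavy_prob:.3f} (QV pass needs > 2/3)")
```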

But my question here is how to decide where to start and stop for selecting a subset of qubits to test.

Picking which 7 qubits to use out of the 127 qubits available on Eagle seems to be an impossibly complex computational problem.

Even for the 27-qubit Falcon which has achieved Quantum Volume of 128, selecting 7 qubits out of 27 is a fairly complex computational problem.

I don’t know the answer, at present. Although I can imagine at least a few possibilities.

There are plenty of heuristic approaches to take, such as requiring the selected qubits to be contiguous. Even for 127-qubit Eagle, there are only 121 possible starting positions for a contiguous sequence of 7 qubits.
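The combinatorics are easy to check with Python’s math.comb (Python 3.8+):

```python
# The subset-selection space for Quantum Volume qubit selection.
import math

print(math.comb(27, 7))    # 888,030 ways to pick 7 of Falcon's 27 qubits
print(math.comb(127, 7))   # ~8.9e10 ways to pick 7 of Eagle's 127 qubits
print(127 - 7 + 1)         # 121 contiguous 7-qubit windows on Eagle
```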

But requiring the qubits to be contiguous could be an excessive restriction which fails to fully acknowledge the true capabilities of the quantum processor, such as the logical connectivity of a heavy-hex lattice as opposed to the raw physical qubit layout.

I would presume that the set of qubits to test should be nearby based on the heavy-hex lattice topology. I’m sure there is some heuristic that can identify preferred configurations. But, I haven’t heard any discussion of this. And obviously that would differ between quantum processor architectures.

Clearly the problem is solvable and has been solved since IBM has calculated (and simulated) Quantum Volume of 32 (5 qubits) for the 65-qubit Hummingbird.

But… even if solved, is it actually an optimal solution?

But for now, since IBM previously solved it for Hummingbird, I presume that their solution will work for Eagle as well, especially since the qubit fidelity is not much better.

There is another possibility, namely that Quantum Volume may be inherently very limited by the nature of the heavy-hex lattice topology, so that it may not be feasible to get far beyond a Quantum Volume of 128 until qubit fidelity increases substantially, such as 3.5 or even four nines.

I would note that the Falcon ibm_peekskill system, which is an exploratory system formerly known as the test system Falcon_r8, doesn’t have a Quantum Volume listed yet on the IBM Quantum Services dashboard. It has a moderately higher qubit fidelity, so it will be interesting to see if that’s enough to permit it to get a higher Quantum Volume. The test system Falcon_r10 has an even higher qubit fidelity, reportedly around three nines, so it will be interesting to see where it comes in on Quantum Volume.

What can you do with 127 (or 65) qubits that you can’t do with 23 qubits?

At present, few published algorithms are able to utilize much over 20 qubits on current quantum computers.

So, if only 16 to 23 qubits are being used at present, it would not seem that a 65-qubit or even a 127-qubit quantum computer would offer any significant additional utility, at present.

It’s difficult to say whether the critical limiting factor is qubit fidelity or qubit connectivity, or both. Either way, we probably can’t expect to see 28, 32, 36, or even 40-qubit algorithms until those critical limiting factors can be transcended, although we should be able to simulate such algorithms.

It could be quite some time — years — before algorithms are commonly able to effectively utilize a majority of the qubits on a 65-qubit or 127-qubit quantum computer — let alone larger systems such as the 433-qubit Osprey expected next year.

So, for now, this is an open question. I look forward to reading research papers on this topic.

The world’s most powerful quantum processor?

We’re inundated with hype, so it’s tempting and easy to want to ignore anything that sounds like hype, such as lurid headlines suggesting that Eagle is “the world’s most powerful quantum processor.”

Is it true? Well… maybe or maybe not, depending on how you interpret… everything.

From the perspective of raw qubit count, yes, it would appear to be true.

But from the perspective of how large and complex a quantum algorithm can be for a practical real-world problem, maybe not since we still aren’t able to fully utilize a 27-qubit Falcon, let alone a 65-qubit Hummingbird.

And since the Falcon currently has a higher qubit fidelity, that probably means that the Falcon is currently the world’s most powerful quantum processor.

But since Eagle is so new, it wouldn’t surprise me if Eagle were to go through a succession of enhancements comparable to or exceeding those that Falcon has gone through, which could well position Eagle to be even more powerful than either Falcon or Hummingbird.

Even then, though, Eagle, Falcon, and Hummingbird still suffer from very limited qubit connectivity. This leaves open the potential for trapped-ion and neutral-atom quantum computers to zoom past superconducting transmon qubits based on their full any-to-any connectivity. That would make them the world’s most powerful quantum processors.

No, Eagle is not able to offer any dramatic quantum advantage

Eagle certainly has more than enough qubits to achieve dramatic quantum advantage, but two critical factors effectively prevent Eagle from attaining any substantial, let alone dramatic, degree of quantum advantage:

  1. Limited qubit fidelity.
  2. Limited qubit connectivity.

Limited circuit depth may also be a limiting factor, but the other two factors probably dominate.

But incremental improvements to Eagle could change this picture.

For more on substantial or fractional quantum advantage, see my paper:

For more on dramatic quantum advantage, see my paper:

In short, Eagle offers no net benefit to most real users at present

Until and unless qubit fidelity improves dramatically, 127-qubit Eagle (and 65-qubit Hummingbird) offer no significant net benefit to most real users — at least in the near term.

I can’t speak to any potential benefits if future revisions of Eagle offer significant improvements — which I do sincerely hope does occur.

Twin priorities for the medium term are progress towards quantum Fourier transform and quantum phase estimation as well as progress towards quantum error correction and logical qubits

Despite the advances from Eagle, the twin priorities for the medium term need to be progress towards support for quantum Fourier transform (QFT) and quantum phase estimation (QPE) as well as progress towards support for quantum error correction (QEC) and logical qubits.

Technical progress is needed in both qubit fidelity and fine granularity of phase, but we aren’t seeing much progress on either front from Eagle, so far.

Meanwhile Falcon does appear to be making progress towards near-perfect qubits, with recent reports of hitting three nines of qubit fidelity.

Progress towards much higher qubit count is definitely needed to achieve quantum error correction, but that’s not the most critical impediment currently being faced by quantum computing. Still, it is good for IBM to continue to push qubit counts higher, to eventually enable support for quantum error correction and logical qubits.

Progress towards near-perfect qubits will help on both fronts, but Eagle hasn’t done so, yet

Progress towards near-perfect qubits will help on both fronts, QFT/QPE and QEC, but Eagle hasn’t done so, yet.

Quantum Fourier transform (QFT) and quantum phase estimation (QPE) can make progress using only near-perfect qubits — four to five nines of qubit fidelity, coupled with finer granularity of phase — so quantum error correction (QEC) and logical qubits are not absolutely required, yet.

Sure, the day will come when quantum error correction (QEC) and logical qubits are required — to achieve widespread adoption of quantum computing among the non-elite, but that won’t be for at least several years, at the earliest.

Near-perfect qubits would be a big near-term and even medium-term win. Eagle hasn’t progressed on this front, so far.

Variational methods are a technical dead-end and unlikely to ever achieve any significant quantum advantage

Although variational methods are quite popular, and do work reasonably well on near-term quantum hardware, they are unlikely to ever achieve any significant quantum advantage. Without a significant quantum advantage, they are a technical dead-end — quantum advantage is the only real benefit of quantum computing.

Variational methods only succeed by breaking a problem down into much smaller pieces, but that also reduces any advantage by reducing the extent of any quantum parallelism.

No significant quantum parallelism, no significant quantum advantage. It’s that simple.

My point here is that I would like to see advances in quantum algorithms and quantum applications, but use of variational methods will undermine if not absolutely eliminate the advantages of such quantum algorithms and quantum applications.

Quantum algorithm design should focus primarily on quantum Fourier transform (QFT) and quantum phase estimation (QPE), not variational methods. Sure, this means running on simulators rather than real quantum computers, but this is the approach which will eventually achieve dramatic quantum advantage, not variational methods.
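As a minimal sketch of that simulator-first approach, here is a small QFT exercised on the Aer simulator (Qiskit as of late 2021); applying the QFT and then its inverse should return the input state, a quick sanity check that only works noiselessly:

```python
# Exercise a small QFT on the simulator: QFT followed by inverse QFT
# should return the original basis state.
from qiskit import QuantumCircuit, Aer, execute
from qiskit.circuit.library import QFT

n = 5
qc = QuantumCircuit(n)
qc.x(0)                                   # prepare a nontrivial input
qc.append(QFT(n), range(n))
qc.append(QFT(n, inverse=True), range(n))
qc.measure_all()

backend = Aer.get_backend('aer_simulator')
counts = execute(qc, backend, shots=1000).result().get_counts()
print(counts)                             # expect only '00001'
```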

That said, current variational methods should run fine on Eagle — provided that they currently run fine on Falcon or Hummingbird, but Eagle won’t offer any net benefit over Falcon and Hummingbird for most real users using variational methods. And won’t offer any significant quantum advantage.

Run multiple circuits at the same time?

Since Eagle has so many qubits and realistic circuits are not likely to use very many of those qubits, say 16 to 24 qubits max, this raises the prospect that multiple copies of circuits or even multiple but different circuits could be run at the same time. Up to six 20-qubit circuits could be run in one invocation — subject to the total gates not exceeding the coherence time of Eagle. Or up to ten 12-qubit circuits.

I’m not advocating this approach or suggesting that it would indeed work as suggested, but simply noting the possibility.
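Still, purely for illustration, here is one way such packing might be expressed in Qiskit: two independent sub-circuits composed onto disjoint qubits of one wide circuit and submitted as a single job. Whether crosstalk and scheduling would make this viable on Eagle is an open question:

```python
# Illustrative only: pack two independent 2-qubit sub-circuits onto
# disjoint qubits of one 127-qubit circuit.
from qiskit import QuantumCircuit

def bell() -> QuantumCircuit:
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc

wide = QuantumCircuit(127)                           # spans all of Eagle
wide.compose(bell(), qubits=[0, 1], inplace=True)    # sub-circuit A
wide.compose(bell(), qubits=[60, 61], inplace=True)  # sub-circuit B
wide.measure_all()                                   # one job, two results
```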

If this approach does work, it could be an interesting benefit over Falcon and Hummingbird.

But, even if it does work for some niche use cases, it’s unlikely to benefit average real users.

And the limited coherence time could limit application to an even smaller subset of potential use cases.

Hopefully Osprey makes more significant progress on both qubit fidelity and fine granularity of phase

I sure hope Osprey makes more significant progress on both qubit fidelity and fine granularity of phase, but I’m not holding my breath.

It would seem a slam dunk that Osprey would make some progress on qubit fidelity, but I had the same expectation for Eagle, which was not fulfilled. Still, I continue to have an open mind.

IBM did essentially commit to four nines of qubit fidelity “by 2024” (1,121-qubit Condor timeframe) at their recent quantum summit:

I’d hope that IBM could try to achieve at least half a nine of improvement in qubit fidelity each year.

Limited qubit connectivity is IBM’s greatest exposure

IBM’s current conceptualization and realization of quantum computers is fine for experimentation at a relatively small scale, but appears to be an absolute dead-end in terms of sophisticated, complex quantum algorithms which require a significant degree of qubit connectivity.

So far, IBM hasn’t offered a single hint that they have any plans to dramatically boost qubit connectivity.

That’s not to say that IBM couldn’t switch to a highly-connected architecture any year now, but it’s alarming that the idea is not even mentioned on their quantum hardware roadmap.

Are superconducting transmon qubits a technical dead-end for dramatic quantum advantage due to severely limited connectivity? It sure seems that way!

Even if significant improvements were made to Eagle, does the severely limited connectivity inherent in superconducting transmon qubit architectures used by IBM (and others) effectively preclude the possibility of achieving any substantial quantum advantage let alone any dramatic quantum advantage? I hate to say never, but it sure seems that way!

SWAP networks do provide a technique for simulating qubit connectivity to at least some degree, but can physical qubit fidelity ever be sufficient to assure that two-qubit gates will have sufficient fidelity to achieve any significant degree of quantum advantage? I just don’t see how at this stage or any stage in the foreseeable future.
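A rough back-of-envelope model shows why. One SWAP decomposes into three CNOTs, so a two-qubit gate between distant qubits that needs d SWAPs of routing costs roughly 3d + 1 CNOT executions. The error rate below is an assumed Eagle-class average, purely for illustration:

```python
# Compounding cost of simulating connectivity with SWAP networks.
p = 2.0e-2                       # assumed average CNOT error rate
for d in (0, 2, 5, 10):          # SWAPs needed to bring qubits adjacent
    gates = 3 * d + 1            # simplistic model: d SWAPs plus the CNOT
    print(f"{d} SWAPs -> success probability ~ {(1 - p) ** gates:.3f}")
```

Even ten SWAPs of routing drops the effective success probability of a single two-qubit gate to roughly 53% at this fidelity.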

Technically, I can imagine alternative architectures, but IBM, et al, are not even hinting that such architectural changes are even over the distant horizon. They’re certainly not on the published roadmap.

In short, severely-limited connectivity remains a great challenge for IBM, et al.

Never say never — I’m sure somebody can come up with a clever way to exploit a majority of Eagle’s qubits

Although it is likely true that typical users won’t be able to exploit more than 16 to 24 of Eagle’s 127 qubits, it is also quite likely that somebody somewhere will come up with some creative algorithm which is able to exploit a much more sizable fraction of those qubits to solve some practical real-world problem.

I’m discounting computer science experiments, of course — such as Google’s infamous quantum supremacy experiment, which was not in any way representative of solving any practical real-world problems.

Personally, I’ll settle for a 40-qubit algorithm or even 32 qubits. Something that can also be simulated to confirm the results. But I would also want the algorithm to be automatically scalable so that it can trivially exploit more capable hardware when it becomes available, whether it’s a future revision of Eagle, Osprey, Condor, or whatever.

There may be some niche use cases where Eagle can be of significant advantage

There may be some esoteric or even useful niche use cases where Eagle as it exists today can actually deliver some interesting advantage over existing quantum processors, but I am at a loss to think of any at the moment — they won’t be from any of the usual applications that people generally tout for quantum computing.

This caveat is in keeping with the never say never philosophy for innovation and technology in general.

Where are all of the 40-qubit quantum algorithms?

I have a fascination with 40-qubit quantum algorithms. There’s nothing magical about that number other than the fact that it’s the apparent current upper limit for classical simulation of quantum algorithms (for Google; for IBM it’s 32 qubits). The curious fact is that we don’t see much in the way of 40-qubit algorithms. Or 32 qubits, for that matter. Even though we can simulate such algorithms.
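
For context on that simulation ceiling, a quick back-of-envelope calculation: a full statevector holds 2**n complex amplitudes at 16 bytes each, so memory doubles with every added qubit:

    # Why roughly 32 to 40 qubits is the ceiling for full statevector simulation.
    for n in (28, 32, 36, 40):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB of statevector")
    # 28 qubits: 4 GiB; 32 qubits: 64 GiB; 40 qubits: 16,384 GiB (16 TiB).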

My model is that algorithms should be designed to be automatically scalable based on input size and other parameters. Even if an algorithm is targeted at much greater size, say 50, 75, 100, 150, or even hundreds of qubits, it is highly desirable to be able to demonstrate that the algorithm is scalable in the 12 to 40-qubit range, which can be simulated successfully, and sometimes can even be run successfully on a real quantum computer with a qubit count in that range.
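
Here is a minimal sketch of what I mean by automatically scalable, assuming Qiskit and using a GHZ-style circuit purely as a stand-in: a single generator function, parameterized by qubit count, that can be validated by simulation at small sizes and rerun unchanged at larger sizes as hardware permits:

    from qiskit import QuantumCircuit

    def ghz(n: int) -> QuantumCircuit:
        # The same code produces the circuit for any qubit count n.
        qc = QuantumCircuit(n, n)
        qc.h(0)
        for i in range(n - 1):
            qc.cx(i, i + 1)
        qc.measure(range(n), range(n))
        return qc

    # Validate at simulable sizes now; scale the identical code to 40+ qubits later.
    for n in (4, 12, 24, 40):
        print(n, ghz(n).depth())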

So now the question is whether Eagle might in fact be able to support 40-qubit algorithms. Or even 32 qubits. Or even 28 or 24 qubits. It remains to be seen. I suspect not given the preliminary indications of a relatively low qubit fidelity and rather limited qubit connectivity. But we will see soon enough.

That’s my challenge — Bring on the 40-qubit quantum algorithms!

For more detail on my quest for 40-qubit quantum algorithms, see my paper:

And for more detail on my model for scalable quantum algorithms, again focused on being able to test quantum algorithms for sizes that can be classically simulated, see my paper:

Can Eagle support 24 to 29-qubit algorithms?

Will Eagle be able to support quantum algorithms which use 24 to 29 qubits? At this stage I am skeptical, but still open-minded.

It’s a real challenge — even Google used only up to 23 qubits, but if a 127-qubit quantum computer can’t support 28-qubit quantum algorithms, what’s the point of having all of those qubits?

I’m looking forward to some paper preprints on arXiv — either showing off the algorithms or explaining why they can’t be done at this time.

Can Eagle support 20 to 23-qubit algorithms?

Maybe support for 24 to 29-qubit quantum algorithms is a bit too extreme, but support for 20 to 23-qubit algorithms seems more doable. I’m a little less skeptical on this milestone.

Can Eagle support 15 to 19-qubit algorithms?

And if even 20 to 23-qubit algorithms are still too much to support, surely at least 15 to 19-qubit algorithms can be supported. This should be a slam dunk for IBM at this stage.

Clearly Eagle and IBM are still deep in the pre-commercialization stage of quantum computing, not yet ready to even begin commercialization

Many questions and thorny technical issues remain, and much research remains to be done, before IBM — or anybody else! — is ready to exit from the pre-commercialization stage of quantum computing. We’re all deep in the pre-commercialization stage. IBM is not even close to having settled all of the significant technical questions and issues which need to be settled before true commercialization can even begin. Even with Osprey and Condor, IBM will still have significant further research to complete before true commercialization can even begin.

For more on the overall process of pre-commercialization and commercialization, see my paper:

For more on the research aspect of pre-commercialization, see my paper:

And for more on pre-commercialization overall, see my paper:

The bottom line is that Eagle is still a research project, not close to a commercial product

Just to reemphasize the point of the previous section, Eagle is clearly only a research project and not even close to something resembling a commercial product capable of supporting production-scale practical real-world applications.

That’s not a bad thing, and Eagle is indeed decent research progress of sorts, but I don’t want to see people talk about Eagle as if it were a commercial product or even close to being a commercial product. Even Osprey and Condor won’t be close to commercial products.

My advice is to stick with Falcon if you’re not using more than 20 to 24 qubits at present, or better yet, use simulation until Eagle offers significantly better qubit fidelity

At present, Falcon’s qubit fidelity is better than Eagle’s, so I’d advise people who aren’t using more than 20 to 24 qubits to stick with Falcon until Eagle offers significantly better qubit fidelity.

That said, my overall advice at this stage of qubit fidelity is to stick with simulation, where you can use up to 32 to 40 qubits.

Being able to simulate up to 32 to 40 qubits with greater qubit fidelity is more compelling than Eagle at this stage.
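
In practice that simulation-first workflow is straightforward. A minimal sketch, assuming Qiskit with the Aer simulator:

    from qiskit import Aer, QuantumCircuit, transpile

    qc = QuantumCircuit(24, 24)
    qc.h(0)
    for i in range(23):
        qc.cx(i, i + 1)
    qc.measure(range(24), range(24))

    # Noise-free simulation: effectively perfect qubit fidelity and full
    # any-to-any connectivity, at sizes up to roughly 32 qubits on IBM's tools.
    sim = Aer.get_backend("aer_simulator")
    result = sim.run(transpile(qc, sim), shots=1024).result()
    print(result.get_counts())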

Will Eagle set a new world record for hype?

I sure hope not, but we already seem to be on that path. The hype is out of control. Uncontrolled hype is not our friend.

It’s not unusual or unexpected to see a lot of excitement and enthusiasm for a new product, but that doesn’t justify or warrant hype.

As I have noted in this paper, Eagle offers some — a mere few — benefits, but for the most part it doesn’t offer most algorithm designers any significant benefit over what they can get from the 65-qubit Hummingbird or the 27-qubit Falcon, at this moment.

Granted, initial information and data on Eagle are preliminary and limited, and are likely to evolve, and maybe even improve somewhat, as the initial kinks get worked out. Future revisions of Eagle might transcend some of its initial shortfalls. But we can’t allow ourselves to get too far ahead of ourselves. We really do need to focus on the reality of the here and now.

Yes, it is okay to speculate about the future, maybe even wildly — even I do, but that’s very different from making wild claims about the present. And this paper is primarily about Eagle at present.

We can only surmise what benefits might come with future revisions of Eagle or with the 433-qubit Osprey or the 1,121-qubit Condor, but as we already know, it’s not about raw qubit count — except for the future of quantum error correction, which does rely on a much higher raw physical qubit count.
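
As a rough back-of-envelope illustration of why quantum error correction is so hungry for physical qubits, assuming (my assumption, not IBM’s stated plan) a distance-d surface code using about d**2 data qubits plus d**2 - 1 measurement qubits per logical qubit:

    # Rough back-of-envelope, assuming a distance-d surface code needs
    # d**2 data qubits plus d**2 - 1 measurement qubits per logical qubit.
    def physical_per_logical(d: int) -> int:
        return 2 * d * d - 1

    for d in (3, 5, 11, 17):
        print(f"distance {d}: ~{physical_per_logical(d)} physical qubits per logical qubit")
    # Even a 1,121-qubit Condor would host only a handful of modest logical qubits.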

For now, qubit fidelity and qubit connectivity place significant constraints on the benefits of Eagle.

But, as we can already see, they present no constraint on hype, unfortunately.

Is Eagle a dud? It’s not THAT bad!

I wouldn’t go quite so far as to say that Eagle is an outright dud, but it really is somewhat disappointing, especially given all of its promise.

I’m scratching my head wondering how IBM could put such an amazing amount of engineering effort into this project but somehow fail to achieve a significant improvement in qubit fidelity. It almost makes no sense.

Did they rush the project too fast and leave out some critical work?

Did they actually expect to achieve much better qubit fidelity but somehow something went wrong?

It just feels as though something went wrong. They couldn’t possibly have missed something as important as qubit fidelity, could they?!

Is Eagle a flop? Well, basically, yes

In short, the best thing that Eagle has going for it is that there’s a very low bar for revision r2 to be a dramatic improvement over the initial revision r1.

The bottom line is that users should stick with either the 27-qubit Falcon or the 65-qubit Hummingbird, and wait for Eagle revision r2, or r3, or r4, or r5. Or maybe even the 433-qubit Osprey.

To be clear, IBM can legitimately crow about the under the hood engineering improvements in Eagle, but they simply don’t translate into net dramatic technical improvements for the average user.

No, Eagle is not positioned to enable a technical breakout for most users

Eagle simply isn’t positioned to enable any sort of technical breakout for average users.

The dramatic increase in qubit count alone just won’t do it.

What’s missing are dramatic improvements in:

  1. Qubit fidelity.
  2. Gate fidelity.
  3. Qubit connectivity.
  4. Measurement fidelity.

Dramatic improvements in all of those areas would lead to a true technical breakout.

Did IBM jump the gun? Should they have waited another 3 to 6 or even 9 months? Maybe, maybe not

Engineering products is a real challenge. There is always a dynamic tension between the twin temptations of:

  1. Delay the product while additional enhancements are added.
  2. Delay additional enhancements in favor of an earlier release of the product.

There’s an old saying:

  • In every project there comes a time to shoot the engineers and ship the product.

Sure, IBM could have spent another few months, six months, maybe even nine months or even an entire year, to get Eagle closer to perfection, closer to meeting (my!) expectations, but that can be a never-ending process.

My conclusion is that it is what it is, and that IBM would be damned if they did and damned if they didn’t, whether they chose delay or early release.

Sure, I’m not very happy with the current state of affairs for Eagle, but three or six months from now I’d still be unhappy since there would still be shortcomings.

Given my druthers, I’d be happier if they gave us a roadmap for revisions to Eagle.

But… even then, I’d be unhappy since the lack of robust qubit connectivity is the fatal flaw with IBM’s current quantum computing architecture, and a mere three to twelve months of effort wouldn’t fix that.

So, in short, I’m glad IBM released Eagle when they did. They got the initial pain over with and now we can focus on potential future enhancements. Another three to six to nine months wouldn’t have made much of a difference, and the delay might have harmed their credibility more than the added polish would have helped it. Again, it’s damned if you do, damned if you don’t.

What’s next for Eagle? Waiting for the r2 revision

Given the many shortcomings of Eagle r1 documented in this paper, it will be interesting to see what improvements IBM manages to put into revision r2.

Whether they will be relatively minor or cosmetic, or will actually offer an incentive for algorithm designers to use Eagle rather than Falcon, remains to be seen.

Stay tuned.

Will Eagle r4 hit 3.5 nines of qubit fidelity and support 32-qubit algorithms?

I don’t want to be presumptuous, but it is plausible and I’m hopeful that with just a few revisions IBM will get Eagle to the stage where it can hit two major, momentous milestones:

  1. 3.5 nines of qubit fidelity.
  2. Support for 32-qubit algorithms. 3.5 nines of qubit fidelity is likely the primary obstacle.
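
For reference, here is a quick sketch of how I read fractional nines of fidelity, using the convention that k nines corresponds to an error rate of 10**-k:

    # "k nines" of fidelity, read as an error rate of 10**-k.
    def fidelity(nines: float) -> float:
        return 1.0 - 10.0 ** (-nines)

    for k in (1.8, 2.0, 3.0, 3.5, 4.0):
        print(f"{k} nines -> {fidelity(k):.4%} fidelity")
    # 1.8 nines is about 98.42%; 3.5 nines is about 99.97%; 4 nines is 99.99%.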

This pair of milestones would finally give Eagle an advantage over the 27-qubit Falcon.

It seems plausible that IBM could hit this revision in 2022.

Alternatively, maybe it is Osprey r1 which hits these milestones.

Is Eagle close to offering us practical quantum computing? No, not really

Eagle does offer an increment of progress, but is still well short of offering a true practical quantum computer, capable of easily and routinely delivering dramatic quantum advantage for production-scale practical real-world quantum applications.

We’re probably still at least a couple of years away from a true practical quantum computer.

We definitely need dramatically better qubit fidelity as well as dramatically better qubit connectivity.

But we can certainly be grateful for the incremental progress of Eagle.

To end on a positive note, we should celebrate IBM’s engineering achievement with Eagle

Eagle really is quite impressive from an engineering perspective.

Reiterating the positives:

  1. Significant jump in qubit count to 127. Almost double the qubits of the previous top-end 65-qubit Hummingbird processor.
  2. Broke the 100-qubit barrier. Getting all of those qubits to work at all is an amazing achievement.
  3. Significant engineering improvements. At the chip level. Introduction of multi-level fabrication — increases density while reducing crosstalk. As the IBM press release puts it, “breakthrough packaging technology.”
  4. Progress on the path to more physical qubits to support quantum error correction (QEC) and logical qubits.

And we can undoubtedly look forward to incremental improvements as Eagle moves through the inevitable revisions in the months ahead.

And Eagle lays the groundwork for the 433-qubit Osprey in a year.

Summary and conclusions

  1. In short, Eagle offers no significant net benefit to most typical near-term quantum algorithm designers or quantum application developers. That could change if Eagle is upgraded, but this is where revision r1 of Eagle stands right now. Despite the dramatic increase in raw qubit count, the lack of any significant improvement in qubit fidelity or qubit connectivity renders those additional qubits effectively useless for most users.
  2. Eagle is an impressive engineering accomplishment. Couldn’t have achieved 127 qubits without the dramatic processor redesign.
  3. But all of the engineering is under the hood where most typical users won’t see it. The dramatic increase in qubit count isn’t generally functionally useful to most typical users, at present.
  4. It’s a decent stepping stone towards quantum error correction (QEC) and logical qubits. QEC needs a lot more qubits. Eagle is a decent start down that path.
  5. But lackluster qubit fidelity and mediocre qubit connectivity prevent Eagle from having any significant and dramatic benefit to real users over 27-qubit Falcon. Most users won’t be able to effectively use more qubits on Eagle than they can on Falcon.
  6. No hint of any significant change to the basic core qubit technology. Despite the dramatic overall engineering redesign, there is no hint that the core qubit technology has changed. Presumably IBM would have touted it if it had been improved.
  7. No dramatic improvement in qubit fidelity. Only modestly better than some of the 27-qubit Falcons. Worse than the rest of the Falcons. Not even two nines — only 1.8 nines.
  8. No dramatic improvement in coherence time, gate execution time, or circuit depth.
  9. Sorry, but Eagle won’t deliver any substantial quantum advantage. Mostly due to limited qubit fidelity and limited qubit connectivity. There are certainly enough qubits, but that’s not good enough.
  10. Quantum Volume (QV) of 32 is rather disappointing. Same as the 65-qubit Hummingbird. Less than the QV of 64 and 128 for the 27-qubit Falcon. I had hoped for at least 256. Maybe r2 or r3 might yield some improvement?
  11. Curious that there is no support for Qiskit Runtime. At least not initially, but I presume that will come, eventually. Especially surprising since IBM has made a big deal about performance (speed) and CLOPS. No CLOPS rating either, presumably because it depends on Qiskit Runtime (I think).
  12. Incremental enhancements, as happened with Falcon, could change this picture, possibly dramatically.
  13. But even then, trapped-ion and neutral-atom qubits could overtake superconducting transmon qubits simply as a result of full any-to-any connectivity. Memo to IBM: Qubit connectivity is a really REALLY big deal.
  14. It will be interesting to see whether the 433-qubit Osprey will be a dramatic improvement over Eagle or only a modest to moderate improvement.
  15. Twin priorities for the medium term are progress towards quantum Fourier transform (QFT) and quantum phase estimation (QPE) as well as progress towards quantum error correction (QEC) and logical qubits. Technical progress is needed in both qubit fidelity and fine granularity of phase. But we aren’t seeing much progress on either front from Eagle yet.
  16. Progress towards near-perfect qubits will help on both fronts, but Eagle hasn’t made that progress.
  17. I sure hope Osprey makes more significant progress on both qubit fidelity and fine granularity of phase. But, I’m not holding my breath.
  18. Don’t discount the possibility that some clever algorithm designer may come up with a very creative algorithm which actually is able to exploit a majority of Eagle’s qubits to solve some practical real-world problem. Even more than 24 or 32 qubits. I’d settle for a 40-qubit algorithm which can also be simulated — provided that it is automatically scalable so that it can trivially exploit more capable hardware when it becomes available.
  19. There may be some niche use cases where Eagle can be of significant advantage. But I don’t know of any, at present. Something that is very tolerant of or even exploits noisy qubits.
  20. My advice is to stick with Falcon if you’re not using more than 20 to 24 qubits at present, or better yet, use simulation until Eagle offers significantly better qubit fidelity. Being able to simulate up to 32 to 40 qubits with greater qubit fidelity is more compelling than Eagle at this stage.
  21. No, Eagle is not positioned to enable a technical breakout for most users. The dramatic increase in qubit count alone just won’t do it. What’s missing are dramatic improvements in: qubit fidelity, gate fidelity, qubit connectivity, and measurement fidelity. Dramatic improvements in all of those areas would lead to a true technical breakout.
  22. Clearly Eagle and IBM are still deep in the pre-commercialization stage of quantum computing, not yet ready to even begin commercialization. Many questions and issues and much research remains. Not even close to commercialization.
  23. Is Eagle close to offering us practical quantum computing? No, not really. It’s an increment of progress, but we have very far to go.
  24. To end on a positive note, we should celebrate IBM’s engineering achievement with Eagle. It really is quite impressive from an engineering perspective.
  25. So, stay tuned. It ain’t over yet.
