Why Is IBM’s Notion of Quantum Volume Only Valid up to About 50 Qubits?

  • Measurement of quantum volume is limited to the capacity of available quantum simulators running on classical computers.
  • At present, 50 qubits is viewed as the practical limit for classical simulation of a quantum computer.
  • The 50-qubit limit appears to apply to the number of qubits actually used by the generated test circuits, not the total number of qubits on the system.
  • For example, IBM recently announced a quantum volume of 32 for a 65-qubit system. Only five (or so) qubits out of 65 were used in that test. And six (or so) qubits were used to calculate the quantum volume of 64 for their 27-qubit Falcon processor.
  1. The IBM protocol in a nutshell
  2. Simulation of randomly-generated circuits
  3. Square circuits
  4. Measuring connectivity
  5. Any-to-any connectivity
  6. Swap networks
  7. Calculation of quantum volume
  8. QV vs. VQ
  9. Limits of quantum simulators
  10. 50 is the base 2 logarithm of one quadrillion
  11. One quadrillion quantum states pushes the limits of classical simulation
  12. The IBM paper
  13. Sorry, but this paper won’t dive into all details of the IBM protocol or all aspects of quantum volume
  14. Quantum volume in the Qiskit Textbook
  15. What if your quantum computer has more than 50 qubits?
  16. Unclear whether the 50-qubit limit is for the entire system or only for the circuit being tested
  17. How might logical qubits impact the 50-qubit limit?
  18. Possible need for an artificial limit on quantum volume
  19. What does the IBM paper say about lower error rates?
  20. So what is the real limit?
  21. What is the practical limit?
  22. How much storage is needed for quantum state?
  23. What are the prospects for a quantum performance metric that doesn’t require classical simulation?
  24. Potential for application-specific benchmarks
  25. Benchmarks based on verifiable algorithms
  26. How many qubits are likely to be used for production-scale applications?
  27. Quantum volume is a dead end
  28. Why doesn’t IBM refer to quantum volume in their quantum roadmap?
  29. Why doesn’t IBM publish the quantum volume of their 53-qubit machine?
  30. But IBM does say that their 65-qubit system has a quantum volume of 32
  31. How does one configure a simulator for a system with more than 50 qubits?
  32. Did IonQ achieve a quantum volume of 4 million for 32 qubits?
  33. Other related topics
  34. The future

The IBM protocol in a nutshell

This informal paper won’t go very deep into the gory technical details of the IBM protocol for measuring quantum volume, beyond this brief summary.

Simulation of randomly-generated circuits

There is nothing special about the results derived from executing randomly-generated circuits; what matters is that the results should be the same when executed on a real quantum computer as when executed on a quantum simulator.
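Here's a minimal sketch (assuming Qiskit with the Aer simulator installed) of generating and classically simulating one randomly-generated quantum volume circuit; QuantumVolume is a stock circuit class in qiskit.circuit.library:

```python
# Minimal sketch: generate one random quantum volume model circuit and
# simulate it classically. Assumes qiskit and qiskit-aer are installed.
from qiskit import transpile
from qiskit.circuit.library import QuantumVolume
from qiskit_aer import AerSimulator

m = 5  # width = depth = m, a "square" circuit
qv_circuit = QuantumVolume(m, depth=m, seed=42)
qv_circuit.measure_all()

# The same circuit would be run on real hardware; the protocol compares
# the hardware's output distribution against the ideal simulated one.
simulator = AerSimulator()
job = simulator.run(transpile(qv_circuit, simulator), shots=1024)
print(job.result().get_counts())
```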

Square circuits

The essence of quantum volume is that it measures the largest square quantum circuit (equal width and depth) that can be executed on a real quantum computer while producing reasonably correct results. This informal paper won’t go into the deep details — see the IBM paper or the Qiskit Textbook description.

Measuring connectivity

The gates of the randomly-generated circuit entangle qubits, which also tests the degree of connectivity the machine supports. Some architectures, such as trapped-ion machines, support any-to-any connectivity, while others, such as superconducting transmon qubits, support only nearest-neighbor connectivity and require swap networks to move the quantum states of two qubits to adjacent qubits.

Any-to-any connectivity

A quantum computer architecture such as a trapped-ion machine has the flexibility to entangle any two qubits with a single gate even if they are not physically adjacent.

Swap networks

If the architecture of a quantum computer, such as superconducting transmon qubits, only permits nearest-neighbor qubits to be directly entangled, a swap network is needed to move the quantum state of one or both qubits, via a sequence of SWAP gates, to a pair of qubits which are physically adjacent.
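To see the overhead concretely, here is a hedged sketch (again assuming Qiskit) that transpiles a two-qubit gate between non-adjacent qubits against a hypothetical linear coupling map, forcing the transpiler to insert SWAPs:

```python
# Sketch: a CNOT between qubits 0 and 3 on a hypothetical linear device
# (0-1-2-3) forces the transpiler to insert a SWAP network.
from qiskit import QuantumCircuit, transpile

circuit = QuantumCircuit(4)
circuit.h(0)
circuit.cx(0, 3)  # qubits 0 and 3 are not physically adjacent

linear_coupling = [[0, 1], [1, 2], [2, 3]]  # nearest-neighbor only
routed = transpile(circuit, coupling_map=linear_coupling,
                   basis_gates=["cx", "u"])
print(routed.count_ops())  # the extra cx gates reflect the inserted SWAPs
```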

Calculation of quantum volume

The specific math for calculating quantum volume is:

  • log2(VQ) = argmax_m min(m, d(m))

where m is the number of qubits used by the test circuit and d(m) is the largest depth at which square circuits of width m still produce reasonably correct results. Some example values (see the code sketch just after this list):
  1. m = 3, VQ = 2³ = 8.
  2. m = 4, VQ = 2⁴ = 16.
  3. m = 5, VQ = 2⁵ = 32.
  4. m = 6, VQ = 2⁶ = 64.
  5. m = 7, VQ = 2⁷ = 128.
  6. m = 20, VQ = 2²⁰ = 1,048,576.
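A toy sketch of that formula in code, with invented d(m) values purely for illustration:

```python
# log2(QV) = argmax over m of min(m, d(m)), where d(m) is the deepest
# width-m square circuit that still passes. These d(m) values are made up.
achievable_depth = {2: 9, 3: 7, 4: 5, 5: 4, 6: 2}  # m -> d(m)

log2_qv = max(min(m, d) for m, d in achievable_depth.items())
print(f"log2(QV) = {log2_qv}, QV = {2 ** log2_qv}")  # log2(QV) = 4, QV = 16
```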

QV vs. VQ

QV is the marketing shorthand for quantum volume, while VQ (written V_Q in the IBM paper) is the formal notation used in the math above.

Limits of quantum simulators

The problem is that quantum simulators are incapable of simulating circuits using more than around 50 qubits, since the number of quantum states grows exponentially as the number of qubits grows (2ⁿ states for n entangled qubits).

50 is the base 2 logarithm of one quadrillion

From the preceding discussion, it is not the 50 itself which causes a limitation, but the number of quantum states required to execute a circuit with 50 entangled qubits. Specifically,

  • 50 qubits = 2⁵⁰ = 1,125,899,906,842,624 (one quadrillion) quantum states.
  • Or, log2(1,125,899,906,842,624 quantum states) = 50 qubits.
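The arithmetic is easy to check:

```python
# Trivial check of the numbers above.
import math
print(f"{2 ** 50:,}")                    # 1,125,899,906,842,624
print(math.log2(1_125_899_906_842_624))  # 50.0
```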

One quadrillion quantum states pushes the limits of classical simulation

One quadrillion — one thousand trillion or one million billion — is a lot of quantum states for a simulator to keep track of.

The IBM paper

The IBM paper which proposed the quantum volume metric, “Validating quantum computers using randomized model circuits” (Cross, et al.), says in its abstract:

  • We introduce a single-number metric, quantum volume, that can be measured using a concrete protocol on near-term quantum computers of modest size (n≲50), and measure it on several state-of-the-art transmon devices, finding values as high as 16. The quantum volume is linked to system error rates, and is empirically reduced by uncontrolled interactions within the system. It quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit rewriting toolchains are expected to have higher quantum volumes. The quantum volume is a pragmatic way to measure and compare progress toward improved system-wide gate error rates for near-term quantum computation and error-correction experiments.
  • near-term quantum computers of modest size (n≲50)

Sorry, but this paper won’t dive into all details of the IBM protocol or all aspects of quantum volume

Quite a few high-level details and even some low-level details from the IBM protocol for quantum volume are included in this informal paper, but there is no intention to include all details of the IBM protocol, just enough to discuss and support the headline topic of what the 50-qubit limit is all about.

Quantum volume in the Qiskit Textbook

For additional reference, and an example, see the quantum volume section of the Qiskit Textbook.

What if your quantum computer has more than 50 qubits?

First, it’s not exactly clear what the precise limit is for simulation. It could be a few qubits more than 50 or even a few qubits less depending on pragmatic considerations for the simulation software and hardware.

Unclear whether the 50-qubit limit is for the entire system or only for the circuit being tested

Reading the IBM paper by itself, one might get the impression that the 50-qubit limit applies to the entire system, meaning that quantum volume would not apply to quantum computers with more than about 50 qubits in total. But as the IBM results cited earlier suggest, the limit appears to apply only to the qubits actually used by the test circuits.

How might logical qubits impact the 50-qubit limit?

We’re still a long way from the advent and common use of error-corrected and error-free logical qubits, but they would definitely increase the size of quantum volume circuits which can be executed on real hardware, while leaving the classical simulation needed to verify the results as the bottleneck.

Possible need for an artificial limit on quantum volume

For future quantum computers which have dramatically lower error rates, and if the machine has more than 50 qubits — such as 65, 72, 127, or 433 qubits or greater — it would probably be appropriate to place some upper limit on quantum volume so that impossible simulations are not attempted. Some candidate limits:

  1. 2⁵⁰. A reasonable default — if sufficient resources are available.
  2. 2⁵⁵. Probably a general practical limit for current classical technology.
  3. 2⁶⁰. Probably an absolute limit for current classical technology.
  4. 2⁴⁰. May be a more reasonable default since 2⁵⁰ or higher would consume extreme resources.
  5. 2³⁸. If that’s what Google’s cloud-based simulator supports and if it does so efficiently.
  6. 2³⁵. May be a reasonable default unless the quantum hardware is exceptionally good.
  7. 2³². May be a good default for average, commodity hardware.

What does the IBM paper say about lower error rates?

The IBM paper does have this footnote:

  • For error rates as low as 10⁻⁴, we anticipate that model circuits U that can be successfully implemented will involve few enough qubits and/or low enough depth to compute H_U classically. For lower error rates than this, the quantum volume can be superseded by new volume metrics or modified so classical simulations are not necessary.
  • quantum volume can be superseded by new volume metrics
  • or modified so classical simulations are not necessary

So what is the real limit?

There doesn’t appear to be any specific real or theoretical limit to how many qubits can be measured for IBM’s notion of quantum volume. Rather, there is only the practical limit of how large a quantum simulation the individual running the test can perform.

What is the practical limit?

The practical limit for measuring quantum volume is whatever simulator software and classical hardware are available to the person seeking to generate a measurement of quantum volume. For reference, here is how the number of quantum states scales with circuit width:

  1. 16 qubits = 2¹⁶ = 65,536 quantum states.
  2. 20 qubits = 2²⁰ = 1,048,576 (one million) quantum states.
  3. 24 qubits = 2²⁴ = 16,777,216 (16 million) quantum states.
  4. 28 qubits = 2²⁸ = 268,435,456 (268 million) quantum states.
  5. 32 qubits = 2³² = 4,294,967,296 (4 billion) quantum states.
  6. 36 qubits = 2³⁶ = 68,719,476,736 (68 billion) quantum states.
  7. 38 qubits = 2³⁸ = 274,877,906,944 (274 billion) quantum states.
  8. 40 qubits = 2⁴⁰ = 1,099,511,627,776 (one trillion) quantum states.
  9. 41 qubits = 2⁴¹ = 2,199,023,255,552 (two trillion) quantum states.
  10. 42 qubits = 2⁴² = 4,398,046,511,104 (four trillion) quantum states.
  11. 45 qubits = 2⁴⁵ = 35,184,372,088,832 (35 trillion) quantum states.
  12. 50 qubits = 2⁵⁰ = 1,125,899,906,842,624 (one quadrillion) quantum states.
  13. 53 qubits = 2⁵³ = 9,007,199,254,740,992 (nine quadrillion) quantum states.
  14. 54 qubits = 2⁵⁴ = 18,014,398,509,481,984 (18 quadrillion) quantum states.
  15. 55 qubits = 2⁵⁵ = 36,028,797,018,963,968 (36 quadrillion) quantum states.
  16. 60 qubits = 2⁶⁰ = 1,152,921,504,606,846,976 (one quintillion) quantum states.
  17. 64 qubits = 2⁶⁴ = 18,446,744,073,709,551,616 (18 quintillion) quantum states.
  18. 65 qubits = 2⁶⁵ = 36,893,488,147,419,103,232 (37 quintillion) quantum states.
  19. 72 qubits = 2⁷² = 4,722,366,482,869,645,213,696 (4 sextillion) quantum states.

How much storage is needed for quantum state?

We’ve discussed large numbers of quantum states, but it’s unclear how much storage would be required, since the precise number of bits or bytes needed to represent even a single quantum state is not pinned down.
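That said, for a plain state-vector simulator a common baseline is one complex amplitude per basis state. Here is a hedged sketch assuming 16 bytes per amplitude (two 64-bit floats), which is an assumption, since simulators vary in precision and representation:

```python
# Hedged sketch: storage for a plain state-vector simulation, assuming
# 16 bytes per complex amplitude (two 64-bit floats). Real simulators
# vary: single precision halves this; tensor-network methods avoid it.
BYTES_PER_AMPLITUDE = 16  # assumption, not a universal constant

for n in (32, 40, 50, 55, 60):
    gib = (2 ** n) * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 50 qubits works out to ~16 million GiB (16 PiB) under this assumption.
```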

What are the prospects for a quantum performance metric that doesn’t require classical simulation?

A quantum performance metric that doesn’t rely on classical simulation is simply uncharted territory. We can speculate or fantasize about a comparable performance metric that is virtually limitless and doesn’t bump into such classical limits, but no solid answers are available at present.

Potential for application-specific benchmarks

Another possibility is that as we finally begin to get true production-scale algorithms and applications, application-specific benchmarks could be developed, so that even if complex theoretical quantum circuits cannot be readily modeled classically, much more practical circuits could — maybe.

Benchmarks based on verifiable algorithms

There are two classes of theoretically-hard algorithms: those whose results can be quickly validated or verified, and those whose results can’t be. If your quantum circuit factored a very large semiprime number, it’s easy to check (verify) that the factors multiply to produce the initial semiprime number. But for a complex optimization such as protein folding or solving a large traveling salesman problem, you can only validate whether the result seems acceptable, not whether it is in fact optimal. You can’t readily calculate the optimal result classically; if you could, you wouldn’t need a quantum computer to calculate it in the first place.
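A trivial sketch of the first class: verifying a factoring result is a single multiplication, regardless of how hard finding the factors was (the numbers here are purely illustrative):

```python
# Sketch: factoring results are cheap to verify classically,
# even when finding them classically is intractable.
def verify_factoring(semiprime: int, p: int, q: int) -> bool:
    """One multiplication verifies the hard-to-find answer."""
    return p > 1 and q > 1 and p * q == semiprime

print(verify_factoring(15, 3, 5))  # True
print(verify_factoring(15, 2, 7))  # False
```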

How many qubits are likely to be used for production-scale applications?

At present, there are no production-scale algorithms or applications, but it is worth contemplating how many qubits might be needed for production-scale applications. In particular, far more than 50 qubits are likely to be needed, blowing right past the key limitation of the quantum volume benchmarking metric.

Quantum volume is a dead end

Sure, quantum volume is a plausible (if weak) metric for smaller quantum computers and smaller quantum circuits, but clearly it won’t scale: it won’t work well, or at all, for larger quantum computers and larger quantum algorithms.

Why doesn’t IBM refer to quantum volume in their quantum roadmap?

When IBM initially posted the roadmap for their future quantum computers, I thought it very odd that there was no mention of quantum volume. Only a couple of weeks later, as I started to dig deeper into the technical details of quantum volume, did I finally realize why, as exemplified by this informal paper: all of their future machines have a lot more than 50 qubits, and 50 qubits is roughly the limit for practical measurement of quantum volume.

Why doesn’t IBM publish the quantum volume of their 53-qubit machine?

In fact, IBM’s current largest quantum computer (before the just-announced 65-qubit machine), at 53 qubits, is already beyond the 50-qubit limit for measurement of quantum volume.

But IBM does say that their 65-qubit system has a quantum volume of 32

Even though IBM hasn’t published a quantum volume for their 53-qubit system, they did publish (okay, they tweeted) that their new 65-qubit successor to the 53-qubit system has a quantum volume of 32.

How does one configure a simulator for a system with more than 50 qubits?

I haven’t seen any mention or publication of techniques for configuring simulations of quantum computers with over 50 qubits (or even 40 or 32 qubits). Clearly there are practical resource constraints. I can only speculate, but I suspect that you could configure the simulator for just the subset of qubits used by your algorithm.
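A hedged sketch of that suspicion (assuming Qiskit): the quantum volume test circuit only touches m qubits, so the classical reference simulation only needs m qubits, regardless of the device’s total qubit count:

```python
# Sketch: the reference simulation scales with the m qubits the test
# circuit actually uses, not with the device's full qubit count.
from qiskit.circuit.library import QuantumVolume
from qiskit.quantum_info import Statevector

m = 6  # qubits exercised by the test, e.g. 6 out of a 65-qubit device
ideal_state = Statevector(QuantumVolume(m, depth=m, seed=7))
print(ideal_state.num_qubits)  # 6 -- simulation cost is 2**6, not 2**65
```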

Did IonQ achieve a quantum volume of 4 million for 32 qubits?

Although their press release seems to assert that they achieved a quantum volume of “greater than 4,000,000” for a 32-qubit system (which, since log2(4,194,304) = 22, would correspond to square circuits 22 qubits wide and 22 layers deep), something seems amiss in IonQ’s announcement.

Other related topics

There are numerous other topics related to quantum volume which are worthy of discussion but beyond the scope of this paper. Some have been touched on briefly above; others include:

  1. General plain-language explanation of quantum volume.
  2. Quantum volume metric doesn’t appear to have any direct utility to designers of quantum algorithms or developers of quantum applications. No guidance is offered for how to interpret or parse a quantum volume metric in a way that yields useful information for algorithm designers or application developers.
  3. The utility of quantum volume seems to be limited to marketing — to allow a vendor to show that their machine is more “powerful” than a competitor’s machine.
  4. Why is quantum volume a power of 2? Why not just use the exponent alone — 6 instead of 64 (2⁶)? The original IBM paper offers no rationale.
  5. A logarithmic metric would be much more appropriate for quantum computers where the advantage is expected to be exponential, so that relatively small numbers can be used rather than very large numbers. An ideal quantum computer with 65 perfect qubits would have a quantum volume of 36,893,488,147,419,103,232 (37 quintillion), which is not a number which a typical quantum algorithm designer or application developer could use in a practical manner, as opposed to 65 qubits and a sense of maximum circuit depth, coherence, and any limits to connectivity.
  6. What can you infer about the number of qubits from the quantum volume metric? log2(QV) is a lower bound on the number of qubits — the machine may have more qubits, and currently is likely to have significantly more qubits.
  7. Is quantum volume too strict or too lenient? Does an average algorithm need that much connectivity, accuracy, or coherence? Some algorithms might need more qubits but shallow depth circuits, while others may need fewer qubits but much deeper circuits. Should there be levels of quantum volume, so algorithm designers and application developers can match their particular needs to the most appropriate metric?
  8. Impact of SWAP networks when the machine does not have direct any-to-any connectivity. How much swapping of quantum states was needed to achieve a particular value of quantum volume? What qubit and circuit sizes failed due to excessive SWAP errors?
  9. Using quantum Fourier transform (QFT) for benchmarking.
  10. Using quantum phase estimation (QPE) for benchmarking.
  11. Using electronic complexity of an atom for benchmarking.
  12. Using complexity of a molecule for benchmarking.
  13. Using various specific optimization scenarios for benchmarking.
  14. Using portfolio size for benchmarking for financial applications.
  15. Families or groups of application-specific benchmarking.
  16. Benchmarking metrics that can be expressed both in terms of raw machine resources and application domain-specific terms. Such as the “n” used in Big-O notation for algorithmic complexity.
  17. Multi-factor benchmarking, rather than a single, all-encompassing benchmark metric.
  18. How to benchmark machines with hundreds, thousands, or millions of qubits.
  19. Whether quantum volume might apply to any of the machines on IBM’s quantum roadmap — 65, 127, 433, 1,121, or one million qubits.
  20. Benchmarks for algorithms which don’t require square circuits — more qubits but shallow circuits, or deeper circuits with fewer qubits.
  21. Circuits which use a lot of nearest-neighbor connectivity, but don’t require full any-to-any connectivity.
  22. Future quantum architectures which are a dramatic departure from current architectures.
  23. Modular and distributed quantum machine architectures. Impact of quantum interconnects or qubit shuttling.
  24. Benchmarking in terms of higher-level programming models using algorithmic building blocks other than raw qubits and quantum logic gates.
  25. Contemplate future quantum simulator capacity and performance for much larger simulations.
  26. It hardly seems worthwhile to characterize the performance of a quantum computer by anything short of its relative advantage compared to a classical computer. It would be more compelling to have a metric whose value at 1.0 would mean the same performance as a classical computer.

The future

Whatever technological advances the coming years and decades bring, we definitely need more helpful benchmarking metrics: metrics that work both for more capable hardware and for the needs of less-sophisticated application developers.
