What Is the Quantum Advantage of Your Quantum Algorithm?

  1. Theoretical quantum advantage. A mathematical comparison of the algorithmic complexity of a quantum algorithm to that of a comparable classical algorithm. Typically, this will be a quantum algorithm of polynomial complexity or less versus a classical algorithm of exponential complexity, so that the quantum algorithm delivers an exponential speedup over the classical algorithm.
  2. Actual quantum advantage. The actual numerical advantage of a quantum algorithm over a comparable classical algorithm for specific input data, especially of a small to modest size, even if in theory a quantum algorithm would deliver an exponential speedup for larger input data.
  3. Dramatic quantum advantage. Not simply a notable advantage of a quantum algorithm over a classical algorithm, but such a dramatic advantage that there is no question that the quantum-based solution is the only way to go. A 50% improvement or even a 2–4 times improvement might normally be advantageous when comparing two classical solutions, but a quantum solution might require a 10X, 100X, or 1,000X advantage, or even a factor of a million or a billion or more, over a comparable classical solution to justify the intellectual difficulty and cost of going quantum.
  • In short: what numerical performance advantage is the quantum algorithm actually delivering for particular input data?
  1. What is quantum advantage?
  2. How dramatic an advantage?
  3. Algorithmic complexity or computational complexity and Big-O notation
  4. Mathematical formalism?
  5. Simple rule of thumb
  6. Quantum parallelism
  7. Hadamard transform
  8. What if there is no comparable classical algorithm?
  9. Three separate performance factors
  10. Exponential speedup
  11. Repetitions or shot count
  12. Qualitative quantum advantage
  13. Computational complexity of your quantum algorithm
  14. Real computational complexity
  15. Your quantum advantage is the widest Hadamard transform in your algorithm
  16. Multiple Hadamard transforms
  17. Actual quantum advantage vs. theoretical quantum advantage
  18. Running time vs. speedup
  19. How scalable is your quantum algorithm?
  20. How many classical computers would it take to complete the task in the same time as the quantum algorithm?
  21. What algorithm documentation should disclose
  22. Conclusion
  23. What’s next?
  24. List of My Papers on Quantum Computing

What is quantum advantage?

If you are unfamiliar with the concept of quantum advantage (or quantum supremacy), take a glance at this companion paper:

  • quantum advantage. A quantum computer can perform a particular computation significantly faster than even the best classical computer. And in some cases, a quantum computer can perform computations which no classical computer can perform at all — also referred to as quantum supremacy.

How dramatic an advantage?

Technically, even a relatively modest advantage of a quantum algorithm over a classical algorithm, such as 50% or a factor of 2–5, could be enough to characterize the quantum algorithm as having a quantum advantage.

When will we see dramatic quantum advantage?

Unfortunately, we may have to wait for quantum computing hardware to progress a few more iterations before it will be common for quantum algorithm designers to achieve a dramatic quantum advantage on a relatively frequent basis.

Algorithmic complexity or computational complexity and Big-O notation

Algorithmic complexity, also known as computational complexity, is a mathematical expression of approximately (roughly) how much computing resources will be needed to execute an algorithm for a particular input or size of input.
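As a concrete sketch of that idea (the two growth functions here are illustrative assumptions, not from any particular algorithm), a complexity expression turns an input size into a rough step count:

```python
def quadratic_steps(n: int) -> int:
    """Rough O(n^2) resource estimate, e.g. comparing every pair of n items."""
    return n * n

def exponential_steps(n: int) -> int:
    """Rough O(2^n) resource estimate, e.g. examining every subset of n items."""
    return 2 ** n

# The gap between the two widens explosively as the input grows.
for n in (10, 20, 30):
    print(n, quadratic_steps(n), exponential_steps(n))
    # 10 -> 100 vs 1024; 20 -> 400 vs 1048576; 30 -> 900 vs 1073741824
```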

Mathematical formalism?

The goal of this informal paper is not mathematical formalism and mathematical accuracy, but a simple method of characterizing the performance advantage of a quantum algorithm for actual use cases, particularly for relatively small input data, since we do not yet have large quantum computers capable of handling larger input data.

Simple rule of thumb

Generally, it is sufficient to have a simple rule of thumb rather than a precise mathematical formula or detailed methodology. I’ll leave that work to others with the appropriate skills and interests.

Quantum parallelism

The essence of quantum advantage is provided by quantum parallelism, which is the ability to perform a single computation over all possible values of a particular variable. In theory, this should almost always provide an incredibly dramatic quantum advantage over virtually any classical computing approach. In theory — but theory doesn’t always translate into practice as perfectly as we would desire.

Hadamard transform

The standard technique for employing quantum parallelism on a quantum computer is to dedicate a collection of qubits to be used as a register and then to execute a Hadamard transform on the qubits of the register to place the qubits in a superposition of all possible values of those qubits.
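As a rough classical simulation of what that sets up (a pure-Python state vector, assuming the register starts in the all-zeros state; no quantum library is assumed), an n-qubit Hadamard transform yields an equal superposition of all 2ⁿ basis states:

```python
import math

def hadamard_superposition(n: int) -> list[float]:
    """State vector after applying H to each of n qubits in |00...0>:
    all 2**n basis states get the same amplitude, 1/sqrt(2**n)."""
    amp = 1.0 / math.sqrt(2 ** n)
    return [amp] * (2 ** n)

state = hadamard_superposition(3)
print(len(state))  # 8 basis states for a 3-qubit register
# The squared amplitudes (probabilities) always sum to 1.
print(round(sum(a * a for a in state), 6))
```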

What if there is no comparable classical algorithm?

In many cases there may be an existing classical solution to a problem to which a quantum solution can be compared, but not always. So, what should be done when there is no comparable classical solution? One approach is to estimate the cost of a hypothetical classical equivalent. For a quantum algorithm that exploits k qubits of parallelism, the classical equivalent could be:

  1. 2^k processors, each executing the classical equivalent of the core quantum algorithm once.
  2. One classical processor executing that classical code 2^k times.
  3. m classical processors, each executing the classical code (2^k)/m times.
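Under the simplifying assumption that the classical core runs in unit time, the three options above just trade processors against wall-clock iterations; a minimal sketch:

```python
import math

def wallclock_iterations(k: int, processors: int) -> int:
    """Iterations each of `processors` classical processors must run to
    cover all 2**k cases. Option 3 in general; options 1 and 2 are the
    two extremes (2**k processors, or a single processor)."""
    return math.ceil((2 ** k) / processors)

k = 10  # 2**10 = 1024 cases to cover
print(wallclock_iterations(k, 2 ** k))  # option 1: 1024 processors, 1 pass each -> 1
print(wallclock_iterations(k, 1))       # option 2: 1 processor -> 1024 passes
print(wallclock_iterations(k, 8))       # option 3: 8 processors -> 128 passes each
```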

Three separate performance factors

Whether considering a quantum solution or a classical solution, there are generally three distinct performance factors:

  1. Cost of the inner loop — the core of the algorithm.
  2. Number of iterations of the inner loop.
  3. Number of iterations of the outer loop.
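In the simplest reading, the three factors just multiply (a sketch; real algorithms may overlap or amortize these factors):

```python
def total_cost(inner_cost: float, inner_iters: int, outer_iters: int) -> float:
    """Rough overall cost: cost of the core step, times how many times the
    inner loop runs it, times how many times the outer loop repeats that."""
    return inner_cost * inner_iters * outer_iters

# e.g. a core step costing 1 unit, 2**8 inner iterations, 100 outer repetitions
print(total_cost(1.0, 2 ** 8, 100))  # 25600.0
```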

Exponential speedup

At least in theory, quantum computing promises an exponential speedup compared to classical computing.

  1. Not all quantum algorithms actually deliver an exponential speedup, even in theory. Every algorithm has its own unique performance issues. The speedup of some algorithms may in fact be only polynomial, or merely quadratic, rather than truly exponential.
  2. Limitations of current quantum computer hardware may make it impractical to achieve exponential speedup for input data larger than a very limited size. Limits include number of qubits, coherence time, error rate, and connectivity between qubits.
  3. The complexity of the computation may exceed the capabilities of existing quantum computing hardware.
  4. The number of qubits needed for a particular input may be so small that even an exponential speedup is only a fairly small absolute speedup compared to a classical computer and easily replicated using parallel computation, distributed computation, or even simple, brute force iteration.
  5. Variational methods or other forms of hybrid quantum/classical algorithms may use such a small number of qubits — to deal with limited existing hardware — that even an exponential speedup is still a relatively small advantage over classical computing techniques.
  6. Clever analysis and programming techniques can achieve dramatic improvements over simpler classical techniques. Not all of the raw power of an exponential speedup may really be needed.
  7. Special cases of input may have dramatically simpler classical solutions so that exponential speedup is not needed or not significant enough to demand a quantum solution.
  8. Brute force classical solutions, such as a supercomputer or large-scale distributed computing cluster — hundreds or even thousands of classical processors — as well as clever classical programming techniques may yield solutions in an acceptable amount of time.
  9. Brute force neo-classical solutions, such as GPUs, FPGAs, and other specialized hardware, coupled with distributed computing, might be sufficient to render any quantum advantage not significant enough to be worth the effort — or even feasible with today’s limited quantum computing hardware.

Repetitions or shot count

Quantum computers are probabilistic by nature rather than deterministic. They are also currently noisy and produce errors. As a result, it is necessary to execute a quantum algorithm repeatedly so that a statistical average can be calculated for the results of the algorithm.
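As a toy sketch of why repetition matters (the two-outcome distribution, the 80% success probability, and the seed are assumptions for illustration only, not drawn from any particular algorithm): any individual shot is random, but the tally over many shots reveals the statistically dominant result.

```python
import random
from collections import Counter

def run_shots(shots: int, p_correct: float = 0.8, seed: int = 42) -> Counter:
    """Simulate a noisy probabilistic circuit whose 'correct' outcome 1
    appears with probability p_correct; tally outcomes over many shots."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return Counter(1 if rng.random() < p_correct else 0 for _ in range(shots))

counts = run_shots(1000)
# A single shot could easily be wrong; the histogram over 1000 shots
# makes the dominant outcome obvious.
print(counts.most_common(1)[0][0])
```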

Qualitative quantum advantage

Specific numerical values of quantum advantage can be confusing or even misleading, and definitely difficult to put into context. So, here are some broader, looser, simpler categories of quantum advantage:

  1. Trivial. Significantly less than a single order of magnitude. Improvements of 10%, 20%, 50%, 100%, 200%, or even 300%, or factors of 2 to 5. This may be significant for particular applications in particular contexts for particular business requirements, but these are gains that can more typically be achieved with clever classical techniques or parallel classical processors. There’s little benefit to be gained by utilizing a quantum computer for such small improvements.
  2. Modest. A single order of magnitude — 10X. Again, not a particularly big win for quantum computing.
  3. Moderate. One to two orders of magnitude — 20X to 100X.
  4. Significant. Three to five orders of magnitude — 1,000X to 100,000X.
  5. Dramatic. Six to eight orders of magnitude — 1 million X to 100 million X.
  6. Profound. Nine orders of magnitude and higher — 1 billion X. Comparable classical algorithms require many machines for months or years, and are at the extreme limits of what is practical, or even beyond what is practical and economically feasible.
  7. Profound exponential. Exponential speedup for larger input values — dozens, hundreds, or even thousands of orders of magnitude. Comparable classical algorithms would require dozens, hundreds, or even thousands of years to complete, even using a large number of computing processors.
  8. Quantum supremacy. No conceivable classical approach can solve the problem even with many thousands of processors running for many thousands of years.
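These bands can be sketched as a lookup on orders of magnitude. Note the band edges below are my own contiguous reading of the list above; the original leaves gaps between some bands (e.g. between 100X and 1,000X).

```python
import math

def advantage_category(speedup: float) -> str:
    """Map a raw speedup factor to a qualitative band by its
    order of magnitude (log10). Band edges are assumptions."""
    orders = math.log10(speedup)
    if orders < 1:
        return "Trivial"      # factors of 2 to 5, under one order of magnitude
    if orders < 1.3:
        return "Modest"       # roughly a single order of magnitude (~10X)
    if orders < 3:
        return "Moderate"     # ~20X up to the vicinity of 1,000X
    if orders < 6:
        return "Significant"  # 1,000X to 100,000X
    if orders < 9:
        return "Dramatic"     # 1 million X to 100 million X
    return "Profound"         # 1 billion X and beyond

print(advantage_category(4))        # Trivial
print(advantage_category(50))       # Moderate
print(advantage_category(10 ** 6))  # Dramatic
```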

Computational complexity of your quantum algorithm

Whether the developer of your algorithm does the work for you or you have to do the work yourself, you need to know the computational complexity of your algorithm. For example, it might be:

  • O(2^n)
  • O(n²)
  • O(2¹⁰⁰) ≈ 10³⁰, which is a huge number and would take a very, very long time to complete.
  • O(100²) = 10,000, which is a reasonably small number and would run to completion fairly quickly.

Real computational complexity

Computational complexity is usually expressed in an abstract form, such as O(n²), to cover all possible sizes of input, but for a particular usage, n will be a constant.

  • O(3²) = 9, a very small number (polynomial)
  • O(2³) = 8, still a very small number even though exponential
  • O(7²) = 49 (polynomial)
  • O(2⁷) = 128, still not too bad even though exponential
  • O(11²) = 121 (polynomial)
  • O(2¹¹) = 2048, exponential cost is starting to become more apparent
  • O(20²) = 400 (polynomial)
  • O(2²⁰) = 1 million, exponential cost is readily apparent
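The pattern above can be regenerated with a few lines, making the crossover from "still not too bad" to "readily apparent" easy to see:

```python
def costs(n: int) -> tuple[int, int]:
    """(polynomial n^2, exponential 2^n) for a fixed input size n."""
    return n ** 2, 2 ** n

for n in (3, 7, 11, 20):
    poly, expo = costs(n)
    print(f"n={n}: n^2 = {poly}, 2^n = {expo}")
    # n=3 -> 9 vs 8; n=7 -> 49 vs 128; n=11 -> 121 vs 2048; n=20 -> 400 vs 1048576
```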

Your quantum advantage is the widest Hadamard transform in your algorithm

It could be more complicated than this, but in its simplest form the quantum advantage of your algorithm is the number of distinct quantum states that can be represented by the widest Hadamard transform used in your algorithm.

  • For 3 qubits, 2³ = 8. Your quantum advantage is 8.
  • For 7 qubits, 2⁷ = 128. Your quantum advantage is 128.
  • For 11 qubits, 2¹¹ = 2K. Your quantum advantage is 2,048.
  • For 20 qubits, 2²⁰ = 1M. Your quantum advantage is one million (1,048,576).
  • 2 qubits = 2² = 4 X advantage
  • 3 qubits = 2³ = 8 X advantage
  • 4 qubits = 2⁴ = 16 X advantage
  • 5 qubits = 2⁵ = 32 X advantage
  • 6 qubits = 2⁶ = 64 X advantage
  • 7 qubits = 2⁷ = 128 X advantage
  • 8 qubits = 2⁸ = 256 X advantage
  • 10 qubits = 2¹⁰ = 1K X (kilo — thousand — 10³) advantage
  • 11 qubits = 2¹¹ = 2K X advantage
  • 12 qubits = 2¹² = 4K X advantage
  • 16 qubits = 2¹⁶ = 64K X advantage
  • 20 qubits = 2²⁰ = 1M X (mega — million — 10⁶) advantage
  • 30 qubits = 2³⁰ = 1G X (giga — billion — 10⁹) advantage
  • 40 qubits = 2⁴⁰ = 1T X (tera — trillion — 10¹²) advantage
  • 50 qubits = 2⁵⁰ = 1P X (peta — quadrillion — 10¹⁵) advantage
  • 60 qubits = 2⁶⁰ = 1E X (exa — quintillion — 10¹⁸) advantage
  • 70 qubits = 2⁷⁰ = 1Z X (zetta — sextillion — 10²¹) advantage
  • 80 qubits = 2⁸⁰ = 1Y X (yotta — septillion — 10²⁴) advantage
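The entire table is just powers of two, so a tiny helper can regenerate any entry:

```python
def hadamard_advantage(qubits: int) -> int:
    """Quantum advantage of the widest Hadamard transform in the algorithm:
    the number of distinct quantum states it places in superposition, 2**qubits."""
    return 2 ** qubits

for q in (3, 7, 11, 20, 30):
    print(f"{q} qubits -> {hadamard_advantage(q):,} X advantage")
    # 3 -> 8; 7 -> 128; 11 -> 2,048; 20 -> 1,048,576; 30 -> 1,073,741,824
```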

Multiple Hadamard transforms

What if your algorithm uses multiple Hadamard transforms?

  1. Sequential Hadamard transforms. On the same register of qubits. Simply sum the individual terms: 2^k1 + 2^k2 + … + 2^km. For example, 2³ + 2² + 2⁴ = 8 + 4 + 16 = 28. Note that this is not 2 raised to the sum of the widths: not 2^(3 + 2 + 4) = 2⁹ = 512.
  2. Parallel Hadamard transforms. On distinct registers of qubits. Use the width of the widest register, the maximum. For example, for transforms of 2³, 2², 2⁵, 2², and 2⁴, the widest is 2⁵ = 32, so the quantum advantage is 32.
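The two rules for multiple Hadamard transforms, with their worked examples, as a short sketch:

```python
def sequential_advantage(widths: list[int]) -> int:
    """Sequential Hadamard transforms on the same register: sum the
    individual 2**k terms. NOT 2 raised to the sum of the widths."""
    return sum(2 ** k for k in widths)

def parallel_advantage(widths: list[int]) -> int:
    """Parallel Hadamard transforms on distinct registers: driven by
    the widest transform alone, the maximum."""
    return 2 ** max(widths)

print(sequential_advantage([3, 2, 4]))      # 8 + 4 + 16 = 28 (not 2**9 = 512)
print(parallel_advantage([3, 2, 5, 2, 4]))  # widest is 2**5 = 32
```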

Actual quantum advantage vs. theoretical quantum advantage

Sure, technically, an abstract algorithm may be designed to work for k qubits, where k can be an arbitrarily large number such as 50, 72, 128, 160, 256, or even larger. That would offer a phenomenally large quantum advantage — the theoretical quantum advantage — but only if the actual input data requires that many qubits. If fewer qubits are needed to represent the actual input data, the actual quantum advantage will be potentially much smaller.

Running time vs. speedup

Technically, computational complexity characterizes the running time of an algorithm — larger input means more running time. But when comparing the performance of a quantum algorithm to that of a classical algorithm, we’re usually more interested in the relative advantage or speedup of the quantum algorithm relative to the classical algorithm.

How scalable is your quantum algorithm?

Classical algorithms can be scaled fairly readily, within limits, by adding more memory or more processors, for as much memory and as many processors as you want. Modern distributed computing supports using dozens, hundreds, thousands, and even tens of thousands of processors and many gigabytes and even terabytes of memory. Quantum algorithms, by contrast, face hard hardware limits:

  1. Too few qubits. Current hardware is too limited.
  2. Too little fidelity, too-short maximum circuit depth. Algorithms must be really short.
  3. Too little connectivity. Even when there may be enough qubits, limited connectivity prevents them from being effectively utilized.
Given those constraints, here are some questions to ask about the scalability of your quantum algorithm:
  1. How far can your algorithm scale for increasing input size using the same hardware as you use today? At what point do you require new hardware?
  2. If you do need new hardware, is it available today, off the shelf, at a reasonable cost, or might you have to wait months and even years for hardware that can handle your algorithm for your desired input size?
  3. Will your algorithm scale automatically to adjust for input and hardware size? Do you have classical code which generates the quantum circuit dynamically and already has all the needed scaling logic?
  4. Do you need to fiddle a little with your algorithm and manually adjust it to changes in hardware or input size?
  5. Does the hardware scale in a perfectly symmetric manner, so that your algorithm can take advantage of the additional hardware without the need for significant adaptation?
  6. Will you need to do some minor redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  7. Will you need to do some major redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  8. Will you need to do a radical redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  9. Will your algorithm require a complete redesign, from scratch? Do you already know what the redesign will look like, or will you have to go back to the drawing board and come up with a whole new approach?
  10. Maybe you have no idea whether or how your algorithm will scale.
  11. Are you maybe simply using a published (or purchased) algorithm and you are completely dependent on the original author or a vendor or a consultant to do any scaling?
  12. Are you maybe assuming that your algorithm will scale, but don’t actually know for sure or know exactly how it might scale?
  13. Is there a size to which you are confident that your algorithm will scale, but beyond which you lack confidence as to whether it will scale? Both for input data size and for hardware size.
  14. How well does the repetition count (shots required) scale as input scales? Is there a clear formula or rule of thumb for calculating it? It may be fine and acceptable for relatively small input data, but it could grow faster than desired or acceptable for larger input data. A few hundred or a few thousand repetitions may be perfectly acceptable, but many thousands, millions, billions, or more… not so much. The strict determinism of classical solutions can sometimes be a distinct advantage when the repetition count for a probabilistic quantum solution gets too high.

How many classical computers would it take to complete the task in the same time as the quantum algorithm?

A most basic question to ask when comparing quantum and classical algorithms is:

  • How many classical computers would it take to complete the task in the same time as the quantum algorithm?
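One crude way to estimate an answer, under the strong simplifying assumption that one classical execution of the core and one quantum shot take comparable time: a classical solution must cover all 2^k cases, while the quantum algorithm runs for `shots` repetitions, so matching its wall-clock time takes roughly 2^k / shots processors.

```python
import math

def classical_processors_needed(k: int, shots: int = 1) -> int:
    """Rough count of classical processors needed to match, in wall-clock
    time, a quantum algorithm covering 2**k cases that is repeated `shots`
    times. Assumes equal per-step cost, a deliberate simplification."""
    return math.ceil((2 ** k) / shots)

print(classical_processors_needed(20, shots=1000))  # 1,048,576 / 1,000 -> 1049
print(classical_processors_needed(10, shots=1000))  # only 1024 cases -> 2
```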

What algorithm documentation should disclose

Whether described in a formal, published paper or merely as comments in a GitHub repository, the following performance information should be disclosed:

  1. The computational complexity (Big-O) for the quantum algorithm.
  2. The computational complexity (Big-O) for the comparable classical algorithm, whether an actual existing classical algorithm or an estimation of a hypothetical classical algorithm. Either cite the specific classical algorithm, or summarize the hypothetical classical algorithm in sufficient detail to deduce its computational complexity.
  3. The theoretical quantum advantage. Some notion of the inverse of the ratio or delta of the computational complexity of the quantum algorithm to the computational complexity of the classical algorithm. This could be a mathematical formalism, which might be difficult for non-mathematicians to make sense of, or a plain-language simplification, such as the common case of an exponential speedup.
  4. A reasonably insightful list of representative examples of the actual quantum advantage for common cases of small or typical input data.
  5. A formula or rule of thumb for calculating the number of repetitions of the quantum algorithm needed for a particular input size. Or at least some rough estimate of the ballpark of the repetition count.
For example:

  • For 3 qubits, 2³ = 8 times the performance of a comparable classical algorithm.
  • For 7 qubits, 2⁷ = 128 times the performance of a comparable classical algorithm.


Conclusion

Quantum computing promises mind-boggling performance gains over classical computing solutions, but whether those promises are fulfilled is another matter. Not all quantum algorithms will deliver exponential speedup — for example, Grover’s search algorithm, which offers only a quadratic speedup. Be careful. Check. Avoid assumptions not based on verifiable and measurable facts.

What’s next?

We’re still in the early, Wild West days of quantum computing, so it’s too soon to insist that the degree of formalism discussed in this paper will quickly find its way into common practice. But one can always hope!


