What Is the Quantum Advantage of Your Quantum Algorithm?

Jack Krupansky
23 min read · Feb 25, 2020

Before deciding to utilize a quantum computer for a particular application, one should first assess whether a chosen quantum algorithm will indeed offer a dramatic performance advantage — a so-called quantum advantage — over a comparable algorithm running on a classical computer. Alas, a typical published quantum algorithm tends not to offer a clear statement of exactly what the quantum advantage of the algorithm might be. In short, the concept of quantum advantage is rather problematic, at least as currently practiced. This informal paper will explore some of the issues surrounding quantum advantage of quantum algorithms, and propose disclosures that should be made when algorithms are published or marketed.

Three concepts, or refinements, of quantum advantage must be introduced:

  1. Theoretical quantum advantage. A mathematical comparison of the algorithmic complexity of a quantum algorithm with the algorithmic complexity of a comparable classical algorithm. Typically, this will be a quantum algorithm with polynomial complexity or less compared to a classical algorithm with exponential complexity, so the quantum algorithm delivers an exponential speedup over the classical algorithm.
  2. Actual quantum advantage. The actual numerical advantage of a quantum algorithm over a comparable classical algorithm for specific input data, especially of a small to modest size, even if in theory a quantum algorithm would deliver an exponential speedup for larger input data.
  3. Dramatic quantum advantage. Not simply a notable advantage of a quantum algorithm over a classical algorithm, but an advantage so dramatic that there is no question that the quantum-based solution is the only way to go. A 50% improvement or even a 2–4 times improvement might normally be advantageous when comparing two classical solutions, but a quantum solution might need to deliver 10X, 100X, 1,000X, or even a factor of a million or a billion or more over a comparable classical solution to justify the intellectual difficulty and cost of going quantum.

The theoretical quantum advantage of an algorithm may look great for arbitrary and large input data on hypothetical theoretical future quantum computers, but the actual quantum advantage of that same algorithm on current and near-term quantum computers may be much less impressive for relatively modest input data.

In particular, even if the quantum algorithm indeed has a theoretical exponential advantage, the input size may be so small that the exponential term, O(2^n) in Big-O notation, evaluates to a small enough number that classical parallel computing, distributed computing, or even brute-force iteration remains quite feasible.

In addition, there may be practical limitations of current and near-term quantum computers which preclude practical applications from scaling to achieve their true theoretical quantum advantage, or at least make it rather difficult, at best. This paper will summarize some of the issues related to scaling.

The ultimate question to be answered for every user and every application is:

  • What actual numerical performance advantage is the quantum algorithm actually delivering for particular input data?

Topics to be discussed in this paper include:

  1. What is quantum advantage?
  2. How dramatic an advantage?
  3. Algorithmic complexity or computational complexity and Big-O notation
  4. Mathematical formalism?
  5. Simple rule of thumb
  6. Quantum parallelism
  7. Hadamard transform
  8. What if there is no comparable classical algorithm?
  9. Three separate performance factors
  10. Exponential speedup
  11. Repetitions or shot count
  12. Qualitative quantum advantage
  13. Computational complexity of your quantum algorithm
  14. Real computational complexity
  15. Your quantum advantage is the widest Hadamard transform in your algorithm
  16. Multiple Hadamard transforms
  17. Actual quantum advantage vs. theoretical quantum advantage
  18. Running time vs. speedup
  19. How scalable is your quantum algorithm?
  20. How many classical computers would it take to complete the task in the same time as the quantum algorithm?
  21. What algorithm documentation should disclose
  22. Conclusion
  23. What’s next?
  24. List of My Papers on Quantum Computing

What is quantum advantage?

If you are unfamiliar with the concept of quantum advantage (or quantum supremacy), take a glance at this companion paper:

Here’s my definition from that paper:

  • quantum advantage. A quantum computer can perform a particular computation significantly faster than even the best classical computer. And in some cases, a quantum computer can perform computations which no classical computer can perform at all — also referred to as quantum supremacy.

Generally, quantum advantage is more of an abstract, binary term: a quantum algorithm either does or does not beat a classical algorithm by a fairly wide margin. In the context of this paper, however, we will use it to express a more quantifiable, numeric value for the specific advantage of a quantum algorithm over a comparable classical algorithm.

How dramatic an advantage?

Technically, even a relatively modest advantage of a quantum algorithm over a classical algorithm, such as 50% or a factor of 2–5, could be enough to characterize the quantum algorithm as having a quantum advantage.

But, generally, the whole point of transitioning to quantum computing is not to achieve relatively modest performance gains, but to achieve dramatic gains, typically an exponential speedup.

Given the significant intellectual challenges of quantum computing algorithms, plus the cost of a quantum computer, an improvement of merely 50% or a factor of 2–5 may simply not be sufficient. An advantage of 10X, 100X, 1,000X, or even a factor in the millions, billions, or much more, may be required to justify the effort and resources required for a quantum-based solution.

Generally, a dramatic quantum advantage is needed to justify a quantum-based solution.

In fact, the real goal is not simply a speedup or modest reduction in cost, but enabling the solution of problems which simply cannot be readily or economically achieved using classical computing.

When will we see dramatic quantum advantage?

Unfortunately, we may have to wait for quantum computing hardware to progress a few more iterations before it will be common for quantum algorithm designers to achieve a dramatic quantum advantage on a relatively frequent basis.

It might occur sooner for some specialized niches, or with specialized hardware.

Google achieved a specialized form of quantum supremacy, but it doesn't translate into quantum advantage for any other algorithms, at least at this time.

I'd say maybe in another two years or so, perhaps when 48- to 128-qubit quantum computers with much higher fidelity and much greater connectivity become relatively common. And it will take dramatic advances in algorithm design, not simply greater hardware capacity.

Algorithmic complexity or computational complexity and Big-O notation

Algorithmic complexity, also known as computational complexity, is a mathematical expression of roughly how much computing resource will be needed to execute an algorithm for a particular input or input size.

Algorithmic complexity is commonly expressed in Big-O notation.

Both will be described briefly in a bit, but for a full introduction and more details, read this paper:

Mathematical formalism?

The goal of this informal paper is not mathematical formalism and accuracy, but a simple method of characterizing the performance advantage of a quantum algorithm for actual use cases, particularly for relatively small input data, since we do not currently have large quantum computers capable of handling larger input data.

Simple rule of thumb

Generally, it is sufficient to have a simple rule of thumb rather than a precise mathematical formula or detailed methodology. I’ll leave that work to others with the appropriate skills and interests.

But I would suggest that published, peer-reviewed formal papers should include such formalisms and accuracy, along with the simple rule-of-thumb results for more casual readers and most practitioners as well.

Quantum parallelism

The essence of quantum advantage is provided by quantum parallelism, which is the ability to perform a single computation over all possible values of a particular variable. In theory, this should almost always provide an incredibly dramatic quantum advantage over virtually any classical computing approach. In theory — but theory doesn’t always translate into practice as perfectly as we would desire.

The essence of quantum parallelism is the Hadamard transform, which can place a collection of qubits into a superposition of all possible values (quantum states) of those qubits. In particular, k qubits can be placed into 2^k quantum states.

Once in such a superposition, a computation using that collection of qubits will operate on all 2^k quantum states in parallel, simultaneously. That’s 2^k computations in the same time that a classical computer could perform the same computation once.

That’s the easy part. Accessing the results of those 2^k computations is tricky and requires significant cleverness, which is beyond the scope of this informal paper.

Hadamard transform

The standard technique for employing quantum parallelism on a quantum computer is to dedicate a collection of qubits to be used as a register and then to execute a Hadamard transform on the qubits of the register to place the qubits in a superposition of all possible values of those qubits.

For k qubits, there are 2^k possible values.

So executing a Hadamard gate on each of the k qubits of a register will place the register into a superposition of 2^k distinct quantum states.
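
To make the arithmetic concrete, here is a minimal NumPy sketch (not tied to any particular quantum SDK, and purely illustrative) that builds the k-qubit Hadamard transform, applies it to the all-zeros state, and confirms that all 2^k basis states end up with equal amplitude:

    import numpy as np

    # Single-qubit Hadamard matrix.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    def hadamard_transform(k):
        """Return the k-qubit Hadamard transform as a 2^k x 2^k matrix."""
        Hk = np.array([[1.0]])
        for _ in range(k):
            Hk = np.kron(Hk, H)
        return Hk

    k = 3
    state = np.zeros(2**k)
    state[0] = 1.0  # the |000> basis state
    superposition = hadamard_transform(k) @ state

    # All 2^k = 8 amplitudes equal 1/sqrt(8), roughly 0.354.
    print(superposition)
    print(np.allclose(superposition, 1 / np.sqrt(2**k)))  # True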

What if there is no comparable classical algorithm?

In many cases there may be an existing classical solution to a problem to which a quantum solution can be compared, but not always. So, what should be done when there is no comparable classical solution?

First, it behooves the quantum algorithm designer to at least offer a strawman proposal for a classical solution so that a realistic comparison can be made. Otherwise, it might well turn out that the quantum solution is not actually more efficient than a plausible classical solution.

But in the absence of a credible comparable classical solution, the default should be to suggest a classical solution which has as many classical processors as the number of quantum states in the widest Hadamard transform in the quantum algorithm. So, if the widest Hadamard transform has k qubits, which represents 2^k quantum states, there are three possible classical approaches for a default classical algorithm:

  1. 2^k processors, each executing the classical equivalent of the core quantum algorithm once.
  2. One classical processor executing that classical code 2^k times.
  3. m classical processors, each executing the classical code (2^k)/m times.

That would be a credible default, but not necessarily the optimal classical solution. There may be some very clever heuristics by which those 2^k iterations could be dramatically reduced. It might be possible to preprocess the input data or otherwise constrain the iterations to a more manageable amount.

But for the purposes here, the absence of a carefully designed classical alternative leaves us with a default, hypothetical classical alternative that uses either 2^k processors, a single processor performing 2^k iterations, or m processors each performing 2^k/m iterations, as in the sketch below.
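
As a concrete illustration, here is a minimal sketch of that default brute-force comparison. The function name and the specific values of k and m are my own hypothetical choices, not from any published benchmark:

    import math

    def default_classical_workload(k, m=1):
        """For a widest Hadamard transform over k qubits (2^k quantum states),
        return how many inner-loop iterations each of m classical processors
        must perform under the default brute-force comparison."""
        return math.ceil(2**k / m)

    # The three default configurations for k = 20 (2^20 = 1,048,576 states):
    print(default_classical_workload(20, m=2**20))  # 1 iteration each on 2^20 processors
    print(default_classical_workload(20, m=1))      # 1,048,576 iterations on one processor
    print(default_classical_workload(20, m=64))     # 16,384 iterations each on 64 processors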

Three separate performance factors

Whether considering a quantum solution or a classical solution, there are generally three distinct performance factors:

  1. Cost of the inner loop — the core of the algorithm.
  2. Number of iterations of the inner loop.
  3. Number of iterations of the outer loop.

For a typical quantum algorithm the inner loop is driven by a Hadamard transform which causes the core of the algorithm to be executed once for every possible value or quantum state created by the Hadamard transform.

For example, if the Hadamard transform is performed on a k-qubit quantum register, that generates 2^k quantum states and the number of iterations of the core of the quantum algorithm will be 2^k.

The number of iterations of the outer loop would generally be the repetition count (shot count) needed to generate a statistically significant average of the results of the inner loop. This is needed because the result of a quantum algorithm is probabilistic rather than deterministic; the repetitions statistically average the quantum results to yield an approximation of a deterministic result. The outer loop would always be executed on a classical processor.

Quantum variational algorithms would have an additional level of looping outside of the outer loop which performs the variational method of the algorithm. Depending on the specific algorithm, the two outer loops might be merged or inverted, so that, for example, the variational method is performed on each result of the inner loop before it is statistically averaged, and then the statistical averaging is performed on the results of the variational method.

A classical implementation would not have the outer loop for statistical averaging since the inner loop would always generate a strictly deterministic result.

But a classical implementation of a variational algorithm would have the same variational outer loop as the quantum implementation.
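
To see how the three factors combine, here is a rough, hypothetical cost-model sketch. All of the timing numbers (a 1-microsecond classical inner loop, a 100-microsecond quantum circuit, 1,000 shots) are placeholder assumptions chosen only to show the structure of the comparison, not measurements of any real hardware:

    def classical_time(inner_cost_sec, k, processors=1):
        """Classical: the inner loop runs 2^k times, spread across processors.
        No statistical-averaging outer loop is needed (results are deterministic)."""
        return inner_cost_sec * (2**k) / processors

    def quantum_time(circuit_time_sec, shots):
        """Quantum: one circuit execution covers all 2^k states in parallel,
        but must be repeated 'shots' times for statistical averaging."""
        return circuit_time_sec * shots

    k = 20
    print(classical_time(1e-6, k))                 # ~1.05 seconds on one processor
    print(classical_time(1e-6, k, processors=64))  # ~0.016 seconds on 64 processors
    print(quantum_time(100e-6, shots=1000))        # 0.1 seconds

A variational algorithm would multiply both totals by the number of variational iterations, since that loop appears in both the quantum and classical implementations.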

Exponential speedup

At least in theory, quantum computing promises an exponential speedup compared to classical computing.

But factors that interfere with achieving this benefit include:

  1. Not all quantum algorithms actually deliver an exponential speedup, even in theory. Every algorithm has its own unique performance issues. The speedup of some algorithms may in fact be only polynomial or quadratic rather than truly exponential.
  2. Limitations of current quantum computer hardware may make it impractical to achieve exponential speedup for input data larger than a very limited size. Limits include number of qubits, coherence time, error rate, and connectivity between qubits.
  3. The complexity of the computation may exceed the capabilities of existing quantum computing hardware.
  4. The number of qubits needed for a particular input may be so small that even an exponential speedup is only a fairly small absolute speedup compared to a classical computer and easily replicated using parallel computation, distributed computation, or even simple, brute force iteration.
  5. Variational methods or other forms of hybrid quantum/classical algorithms may use such a small number of qubits — to deal with limited existing hardware — that even an exponential speedup is still a relatively small advantage over classical computing techniques.
  6. Clever analysis and programming techniques can achieve dramatic improvements over simpler classical techniques. Not all of the raw power of an exponential speedup may really be needed.
  7. Special cases of input may have dramatically simpler classical solutions so that exponential speedup is not needed or not significant enough to demand a quantum solution.
  8. Brute force classical solutions, such as a supercomputer or large-scale distributed computing cluster — hundreds or even thousands of classical processors — as well as clever classical programming techniques may yield solutions in an acceptable amount of time.
  9. Brute force neo-classical solutions, such as GPUs, FPGAs, and other specialized hardware, coupled with distributed computing, might be sufficient to render any quantum advantage not significant enough to be worth the effort, or not even achievable at all with today's limited quantum computing hardware.

Repetitions or shot count

Quantum computers are probabilistic by nature rather than deterministic. They are also currently noisy and produce errors. As a result, it is necessary to execute a quantum algorithm repeatedly so that a statistical average can be calculated for the results of the algorithm.

The count of repetitions is also sometimes known as a shot count or shots.

A typical number of repetitions might be 100, 1,000, or even one million, depending on the nature of both the algorithm and the input size.

The comparison of a quantum algorithm to a comparable classical algorithm should include some rough or precise scaling factor applied to the performance of the quantum algorithm to accommodate the repetition count.

For small input data, the cost of the repetitions might in fact negate the quantum algorithm's advantage, leaving it slower than the classical algorithm.

For larger input data the raw performance of quantum parallelism would generally overwhelm even a fast classical algorithm.

It's also possible that a more complex quantum algorithm might have such low fidelity, such a high error rate, and such a highly probabilistic nature that a very large number of repetitions is needed. In that case, even though the core quantum algorithm is extremely fast, scaling by a huge repetition count could cause the quantum algorithm to underperform a decent classical solution.

Careful analysis of the repetition count (shots) needed to get statistically acceptable results is essential.
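
As a very rough illustration of how that analysis might look, here is a minimal sketch based on a simple binomial sampling model. This model, the 95% confidence target, and the specific probabilities are my own assumptions for illustration; real algorithms may need very different analyses:

    def estimated_shots(p, epsilon, z=1.96):
        """Rough shot count to estimate an outcome probability p to within
        +/- epsilon at roughly 95% confidence (z = 1.96), assuming simple
        binomial sampling with no error mitigation."""
        return round(z * z * p * (1 - p) / (epsilon * epsilon))

    print(estimated_shots(p=0.5, epsilon=0.05))  # ~384 shots
    print(estimated_shots(p=0.5, epsilon=0.01))  # ~9,604 shots
    print(estimated_shots(p=0.1, epsilon=0.01))  # ~3,457 shots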

Qualitative quantum advantage

Specific numerical values of quantum advantage can be confusing or even misleading, and definitely difficult to put into context. So, here are some broader, looser, simpler categories of quantum advantage:

  1. Trivial. Significantly less than a single order of magnitude. Improvements of 10%, 20%, 50%, 100%, 200%, or even 300%, or factors of 2 to 5. This may be significant for particular applications in particular contexts for particular business requirements, but these are gains that can more typically be achieved with clever classical techniques or parallel classical processors. There’s little benefit to be gained by utilizing a quantum computer for such small improvements.
  2. Modest. A single order of magnitude — 10X. Again, not a particularly big win for quantum computing.
  3. Moderate. One to two orders of magnitude — 20X to 100X.
  4. Significant. Three to five orders of magnitude — 1,000X to 100,000X.
  5. Dramatic. Six to eight orders of magnitude — 1 million X to 100 million X.
  6. Profound. Nine orders of magnitude and higher — 1 billion X. Comparable classical algorithms require many machines for months or years, and are at the extreme limits of what is practical, or even beyond what is practical and economically feasible.
  7. Profound exponential. Exponential speedup for larger input values — dozens, hundreds, and even thousands of orders of magnitude. Comparable classical algorithms would require dozens, hundreds, or even thousands of years to complete, even using a large number of computing processors.
  8. Quantum supremacy. No conceivable classical approach can solve the problem even with many thousands of processors running for many thousands of years.
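
For convenience, the categories above can be reduced to a simple lookup. The exact numeric boundaries below are my own approximations of the ranges in the list, which leave some gaps (for example, between 100X and 1,000X):

    def advantage_category(speedup):
        """Map a numeric speedup factor to a qualitative category.
        Boundaries are approximate."""
        if speedup < 10:
            return "Trivial"
        if speedup < 20:
            return "Modest"        # ~10X
        if speedup <= 100:
            return "Moderate"      # ~20X to 100X
        if speedup <= 100_000:
            return "Significant"   # ~1,000X to 100,000X
        if speedup <= 100_000_000:
            return "Dramatic"      # ~1 million X to 100 million X
        return "Profound"          # ~1 billion X and up

    print(advantage_category(4))      # Trivial
    print(advantage_category(2**20))  # Dramatic (about one million X)
    print(advantage_category(2**50))  # Profound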

Computational complexity of your quantum algorithm

Whether the developer of your algorithm does the work for you or you have to do the work yourself, you need to know the computational complexity of your algorithm.

This is sometimes referred to as Big-O, where “O” means order as in “on the order of”, meaning a rough characterization of how the running time of the algorithm will increase as the size of the input increases.

The promise of quantum computing is that the computational complexity will be polynomial while a comparable classical algorithm may be exponential, hence the quantum algorithm may offer an exponential speedup compared to the classical algorithm.

In its simplest form, an exponential complexity has the form:

  • O(2^n)

In its simplest form, a polynomial complexity has the form:

  • O(n²)

So, for example, for an input size (n) of 100:

  • O(2¹⁰⁰) ≈ 1.3 × 10³⁰, which is a huge number and would take a very, very long time to complete.
  • O(100²) = 10,000, which is a reasonably small number and would run to completion fairly quickly.
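
Python's arbitrary-precision integers make it easy to see just how lopsided this comparison is for n = 100:

    n = 100

    exponential = 2**n  # classical exponential complexity, evaluated at n = 100
    polynomial = n**2   # quantum polynomial complexity, evaluated at n = 100

    print(exponential)  # 1267650600228229401496703205376 (about 1.3 x 10^30)
    print(polynomial)   # 10000
    print(exponential // polynomial)  # a ratio of roughly 1.3 x 10^26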

For a broader and deeper introduction to computational complexity and Big-O notation, read this paper:

Real computational complexity

Computational complexity is usually expressed in an abstract form, such as O(n²), to cover all possible sizes of input, but for a particular usage, n will be a constant.

For example, using 3 qubits, the real computational complexity is no more than:

  • O(3²) = 9, a very small number (polynomial)
  • O(2³) = 8, still a very small number even though exponential

Or when using 7 qubits:

  • O(7²) = 49 (polynomial)
  • O(2⁷) = 128, still not too bad even though exponential

Or when using 11 qubits:

  • O(11²) = 121 (polynomial)
  • O(2¹¹) = 2048, exponential cost is starting to become more apparent

For 20 qubits:

  • O(20²) = 400 (polynomial)
  • O(2²⁰) = 1 million, exponential cost is readily apparent

Note that n is not the total number of qubits in the quantum computer, but the number to be used in the computation — actually used by the algorithm, and used in the Hadamard transform in particular. Actually, it’s a little more complicated than that…

And recall that the repetition count (shot count) for the quantum algorithm must be taken into account. Depending on the nature of the algorithm and input size, this may be an insignificant scale factor or it may overwhelm the cost of the comparable classical algorithm.

Your quantum advantage is the widest Hadamard transform in your algorithm

It could be more complicated than this, but in its simplest form the quantum advantage of your algorithm is the number of distinct quantum states that can be represented by the widest Hadamard transform used in your algorithm.

For example:

  • For 3 qubits, 2³ = 8. Your quantum advantage is 8.
  • For 7 qubits, 2⁷ = 128. Your quantum advantage is 128.
  • For 11 qubits, 2¹¹ = 2K. Your quantum advantage is 2,048.
  • For 20 qubits, 2²⁰ = 1M. Your quantum advantage is one million (1,048,576).

Note: To be clear, the quantum advantage is based on the number of qubits used in a register, not the total number of qubits in the entire quantum computer — or the total number of qubits used by your algorithm, which may have multiple registers as well as ancilla qubits.

Other quantum advantages:

  • 2 qubits = 2² = 4 X advantage
  • 3 qubits = 2³ = 8 X advantage
  • 4 qubits = 2⁴ = 16 X advantage
  • 5 qubits = 2⁵ = 32 X advantage
  • 6 qubits = 2⁶ = 64 X advantage
  • 7 qubits = 2⁷ = 128 X advantage
  • 8 qubits = 2⁸ = 256 X advantage
  • 10 qubits = 2¹⁰ = 1K X (kilo — thousand — 10³) advantage
  • 11 qubits = 2¹¹ = 2K X advantage
  • 12 qubits = 2¹² = 4K X advantage
  • 16 qubits = 2¹⁶ = 64K X advantage
  • 20 qubits = 2²⁰ = 1M X (mega — million — 10⁶) advantage
  • 30 qubits = 2³⁰ = 1G X (giga — billion — 10⁹) advantage
  • 40 qubits = 2⁴⁰ = 1T X (tera — trillion — 10¹²) advantage
  • 50 qubits = 2⁵⁰ = 1P X (peta — quadrillion — 10¹⁵) advantage
  • 60 qubits = 2⁶⁰ = 1E X (exa — quintillion — 10¹⁸) advantage
  • 70 qubits = 2⁷⁰ = 1Z X (zetta — sextillion — 10²¹) advantage
  • 80 qubits = 2⁸⁰ = 1Y X (yotta — septillion — 10²⁴) advantage

And in a few years we will have quantum computers with hundreds and even thousands of qubits, for which the quantum advantage will be impossibly astronomical.

But once again, note that your quantum advantage is based on the number of qubits used for a register of qubits which are placed into superposition using a Hadamard transform, not the total number of qubits in the entire quantum computer or even the total number of qubits used by your algorithm.
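
In code, the rule of thumb is just a power of two. The sketch below reproduces a few rows of the table above:

    def quantum_advantage(register_width):
        """Quantum advantage under the simple rule of thumb: the number of
        quantum states generated by the widest Hadamard transform, i.e. 2^k
        for a k-qubit register."""
        return 2**register_width

    for k in (3, 7, 10, 20, 30, 40, 50):
        print(f"{k:2d} qubits -> {quantum_advantage(k):,} X advantage")
    # 3 qubits -> 8 X advantage
    # 10 qubits -> 1,024 X advantage (about 1K)
    # 20 qubits -> 1,048,576 X advantage (about 1M)
    # 50 qubits -> 1,125,899,906,842,624 X advantage (about 1P)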

Multiple Hadamard transforms

What if your algorithm uses multiple Hadamard transforms?

There are two simple cases:

  1. Sequential Hadamard transforms. On the same register of qubits. Simply sum them.
  2. Parallel Hadamard transforms. On distinct registers of qubits. Use the width of the widest register.

For example, for the sequential case:

  • The total is 2^k1 + 2^k2 + … + 2^km
  • For example, 2³ + 2² + 2⁴ = 8 + 4 + 16 = 28
  • Not 2 raised to the sum k1 + k2 + … + km
  • That is, not 2^(3 + 2 + 4) = 2⁹ = 512

For example, for the parallel case:

  • Driven by the widest transform, i.e., the maximum.
  • For example, among 2³, 2², 2⁵, 2², and 2⁴, the widest is 2⁵ = 32.

These are the simple cases. It can get very messy and complicated, but the simple cases may cover most algorithms.
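
Here is a minimal sketch of those two simple rules, treating the algorithm's Hadamard transforms as a list of register widths (the function names are mine, purely for illustration):

    def sequential_advantage(register_widths):
        """Sequential Hadamard transforms on the same register: sum the 2^k terms."""
        return sum(2**k for k in register_widths)

    def parallel_advantage(register_widths):
        """Parallel Hadamard transforms on distinct registers: the widest register wins."""
        return 2**max(register_widths)

    print(sequential_advantage([3, 2, 4]))      # 8 + 4 + 16 = 28, not 2^9 = 512
    print(parallel_advantage([3, 2, 5, 2, 4]))  # 2^5 = 32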

Actual quantum advantage vs. theoretical quantum advantage

Sure, technically, an abstract algorithm may be designed to work for k qubits where k can be an arbitrarily large number such as 50, 72, 128, 160, 256, or even larger, which would offer a phenomenally large quantum advantage — the theoretical quantum advantage. But that advantage applies only if the actual input data requires that many qubits. If fewer qubits are needed to represent the actual input data, the actual quantum advantage will potentially be much smaller.

For example, if only 20 qubits of a 50-qubit quantum computer are needed for the input data (a register initialized with a Hadamard transform), the actual quantum advantage will be 2²⁰ or one million, rather than 2⁵⁰ or one quadrillion.

And if only 10 of those 50 qubits are placed into superposition using a Hadamard transform, the actual quantum advantage will be only about 1,000 (1,024 = 2¹⁰, to be exact).

As a simple example, even if an abstract algorithm could deliver exponential speedup for very large inputs, there may be very little actual quantum advantage for small input values. For example, input data requiring only 3 qubits would yield an actual quantum advantage of only 8 (2³), and input data requiring only 5 qubits would yield an actual quantum advantage of only 32 (2⁵).

Running time vs. speedup

Technically, computational complexity characterizes the running time of an algorithm — larger input means more running time. But when comparing the performance of a quantum algorithm to that of a classical algorithm, we’re usually more interested in the relative advantage or speedup of the quantum algorithm relative to the classical algorithm.

That's why we commonly refer to a quantum algorithm as offering an exponential speedup or an exponential advantage relative to a classical algorithm. The classical algorithm has exponential complexity while the quantum algorithm has polynomial complexity, but we still refer to the quantum algorithm as having an exponential advantage.

How scalable is your quantum algorithm?

Classical algorithms can be scaled fairly readily, within limits, by adding more memory or more processors, for as much memory and as many processors as you want. Modern distributed computing supports using dozens, hundreds, thousands, and even tens of thousands of processors and many gigabytes and even terabytes of memory.

But even then, at some point adding more memory to a single processor or adding more processors to a computing cluster is likely to get either too expensive, too much of a logistical nightmare, or eventually flat out no longer technically feasible.

At present, quantum solutions have very limited scaling options. As a result, quantum algorithm designers hit one or more barriers very quickly:

  1. Too few qubits. Current hardware is too limited.
  2. Too little fidelity, too-short maximum circuit depth. Algorithms must be really short.
  3. Too little connectivity. Even when there may be enough qubits, limited connectivity prevents them from being effectively utilized.

Granted, new quantum computers are coming online on a fairly frequent basis, but the incremental improvements in those three areas are currently fairly meager.

It may be more than a couple of years before we have quantum hardware that supports algorithms with a lot more qubits, a lot greater fidelity and much longer circuit length, and significantly greater connectivity.

Then there are technical issues such as the granularity of phase of quantum states which may not see dramatic improvements as quickly as algorithm designers need them.

Even if hardware is available to support your algorithm, you may still have work to do, potentially a lot of work:

  1. How far can your algorithm scale for increasing input size using the same hardware as you use today? At what point do you require new hardware?
  2. If you do need new hardware, is it available today, off the shelf, at a reasonable cost, or might you have to wait months and even years for hardware that can handle your algorithm for your desired input size?
  3. Will your algorithm scale automatically to adjust for input and hardware size? Do you have classical code which generates the quantum circuit dynamically and already has all the needed scaling logic?
  4. Do you need to fiddle a little with your algorithm and manually adjust it to changes in hardware or input size?
  5. Does the hardware scale in a perfectly symmetric manner, so that your algorithm can take advantage of the additional hardware without the need for significant adaptation?
  6. Will you need to do some minor redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  7. Will you need to do some major redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  8. Will you need to do a radical redesign of your algorithm? Do you already know what those changes will be, or will it take some significant thought to figure them out?
  9. Will your algorithm require a complete redesign, from scratch? Do you already know what the redesign will look like, or will you have to go back to the drawing board and come up with a whole new approach?
  10. Maybe you have no idea whether or how your algorithm will scale.
  11. Are you maybe simply using a published (or purchased) algorithm and you are completely dependent on the original author or a vendor or a consultant to do any scaling?
  12. Are you maybe assuming that your algorithm will scale, but don't actually know for sure or know exactly how it might scale?
  13. Is there a size to which you are confident that your algorithm will scale, but beyond which you lack confidence as to whether it will scale? Both for input data size and for hardware size.
  14. How well does the repetition count (shots required) scale as input scales? Is there a clear formula or rule of thumb for calculating it? It may be fine and acceptable for relatively small input data, but it could grow faster than desired or acceptable for larger input data. A few hundreds or thousands of repetitions may be perfectly acceptable, but many thousands, millions, billions, or more… not so much. The strict determinism of classical solutions can sometimes be a distinct advantage when the repetition count for a probabilistic quantum solution gets too high.

In any case, designers and users of quantum algorithms must be mindful about the specific technical parameters that come into play as they seek to scale their algorithm to handle larger data input sizes.

Theoretically, a quantum algorithm with a theoretical quantum advantage over a comparable classical algorithm should win. But if the scalability of the quantum algorithm is too problematic, the ease of scaling classical algorithms, coupled with the never-ending cleverness and resourcefulness of classical algorithm designers and implementers, could conspire to deprive the quantum algorithm of an actual quantum advantage.

How many classical computers would it take to complete the task in the same time as the quantum algorithm?

A most basic question to ask when comparing quantum and classical algorithms is:

  • How many classical computers would it take to complete the task in the same time as the quantum algorithm?

Worst case for the classical algorithm and best case for the quantum computer is that you would need one classical computer or processor for each quantum state in the widest Hadamard transform in the quantum algorithm.

For example, if the quantum algorithm has a Hadamard transform on k qubits, which would generate 2^k quantum states, the fastest classical solution would require 2^k processors.

But that would be the absolute worst case.

Generally, the core quantum algorithm is relatively short and runs very quickly, so the comparable classical solution could afford to execute the body of the inner loop quite a few times on a single classical processor (maybe hundreds, thousands, or even millions of times) before it amounted to a noticeable amount of wall-clock time.

On the other hand, if the Hadamard transform is wide enough — dozens of qubits or more — then the number of iterations required by the classical computer might be in the billions, trillions, or even much, much more.

But, again, the point of this paper is to notice when the input size for a quantum computer is modest enough that the widest Hadamard transform is relatively narrow, so that only a relatively small number of classical processors would be needed.

And, recall that the repetition count (shot count) of the quantum algorithm might in fact dwarf the number of iterations required in the comparable classical algorithm — or not, depending on the algorithm and input size.
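
Putting those pieces together, here is a rough, hypothetical sketch for answering the question. The timing inputs (a 100-microsecond circuit, a 1-microsecond classical inner loop, 1,000 shots) are placeholder assumptions, not measurements:

    import math

    def classical_processors_needed(k, shots, circuit_time_sec, classical_inner_sec):
        """How many classical processors are needed to finish 2^k inner-loop
        iterations in the same wall-clock time as the quantum run (one circuit
        execution per shot)? All timings are hypothetical inputs."""
        quantum_wall_clock = circuit_time_sec * shots
        classical_total_work = classical_inner_sec * (2**k)
        return max(1, math.ceil(classical_total_work / quantum_wall_clock))

    print(classical_processors_needed(k=20, shots=1000,
                                      circuit_time_sec=100e-6,
                                      classical_inner_sec=1e-6))  # ~11 processors
    print(classical_processors_needed(k=40, shots=1000,
                                      circuit_time_sec=100e-6,
                                      classical_inner_sec=1e-6))  # ~11 million processors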

What algorithm documentation should disclose

Whether described in a formal, published paper or merely as comments in a GitHub repository, the following performance information should be disclosed:

  1. The computational complexity (Big-O) for the quantum algorithm.
  2. The computational complexity (Big-O) for the comparable classical algorithm, whether an actual existing classical algorithm or an estimation of a hypothetical classical algorithm. Either cite the specific classical algorithm, or summarize the hypothetical classical algorithm in sufficient detail to deduce its computational complexity.
  3. The theoretical quantum advantage. Some notion of the inverse of the ratio or delta of the computational complexity of the quantum algorithm to the computational complexity of the classical algorithm. This could be a mathematical formalism, which might be difficult for non-mathematicians to make sense of, or a plain-language simplification, such as the common case of an exponential speedup.
  4. A reasonably insightful list of representative examples of the actual quantum advantage for common cases of small or typical input data.
  5. A formula or rule of thumb for calculating the number of repetitions of the quantum algorithm needed for a particular input size. Or at least some rough estimate of the ballpark of the repetition count.

A couple of examples of such representative actual quantum advantage might be:

  1. For 3 qubits, 2³ = 8 times the performance of a comparable classical algorithm.
  2. For 7 qubits, 2⁷ = 128 times the performance of a comparable classical algorithm.

Such disclosures are needed for both formal publication of quantum algorithms as well as for products and services which are commercially marketed.

There may also be much more elaborate formal disclosures, especially in formal papers, but there does need to be a minimal level of disclosure, as proposed here for all algorithms.

Conclusion

Quantum computing promises mind-boggling performance gains over classical computing solutions, but whether those promises are fulfilled is another matter. Not all quantum algorithms will deliver exponential speedup — for example, Grover’s search algorithm, which offers only a quadratic speedup. Be careful. Check. Avoid assumptions not based on verifiable and measurable facts.

And even when a quantum algorithm will in fact theoretically deliver dramatic gains, translating theory into practice can be quite problematic.

Finally, dramatic theoretical gains for large input data may not be anywhere near as dramatic for more modest input data. The actual quantum advantage might be relatively modest.

And be alert to the repetition count needed for a quantum algorithm for particular input data size — it might grow faster than desired or acceptable even as the underlying quantum algorithm delivers mind-boggling performance gains.

Ultimately, be sure to test to confirm that both performance and quality of results are acceptable. Again, eschew unverified assumptions.

What’s next?

We’re still in the early, Wild West days of quantum computing, so it’s too soon to insist that the degree of formalism discussed in this paper will quickly find its way into common practice. But one can always hope!

I suspect it will be another two years or so before quantum computing hardware and algorithm design have both advanced to the stage where people will commonly be discussing the specific quantum advantage of their algorithms. Meanwhile, for now, people are simply amazed when anything can be made to work on today’s very limited NISQ quantum computers.
