What Is Dramatic Quantum Advantage?

Jack Krupansky
44 min read · Jul 7, 2021


The great promise of quantum computing is that an application running on a quantum computer should have a truly dramatic and compelling performance advantage over a comparable application running on a classical computer, not merely a minor to moderate performance advantage. But what defines dramatic? The ideal for quantum advantage is a so-called exponential speedup. This informal paper explores and sets out a model for judging how dramatic an advantage a quantum solution has over a classical solution.

There is no standard definition for what constitutes either quantum advantage or dramatic quantum advantage or how they should be measured, but I have my own suggestions and proposals, which I will elaborate in this informal paper.

Caveat: Since we’re not even close to achieving any significant quantum advantage, let alone dramatic quantum advantage in the near future, the relevance of this paper is for more than a few years from now, say 5–7 years, not the next one to three years with current expectations for NISQ quantum computers. Consider it a preview of the future of quantum computing, not the present nor even the near future.

Note: Quantum advantage as used in this paper relates only to quantum computing, and not to the use of the term in any other subfields of the broader area of quantum information science, such as quantum communication.

In the broadest terms, I suggest three levels of quantum advantage:

  1. Minimal quantum advantage. A 1,000X performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
  2. Substantial or significant quantum advantage. A 1,000,000X performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
  3. Dramatic quantum advantage. A one quadrillion X (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.

Note: The performance advantage is not constant and will vary based on input size. See the “Exponential speedup” and “Quantum advantage tends to be nonlinear” sections for more details. The thresholds for these levels are based on production-scale input size, on some presumed standard test input size which may be much smaller than production-scale, or on the maximum input size for current quantum computer hardware, which is much too limited to handle production-scale input or even some more-desired input size.

One line answer to the headline question:

  • A performance advantage of one quadrillion X (one million billion) would be extremely compelling and the kind of dramatic quantum advantage that a quantum solution should be expected to deliver compared to a classical solution given all of the bold promises of quantum computing.

Abbreviated summary:

  1. There is no standard definition for what constitutes either quantum advantage or dramatic quantum advantage.
  2. I consider a 1,000,000X performance advantage to be the preferred starting point for a compelling dramatic quantum advantage — this should be the primary focus as a threshold for achieving dramatic quantum advantage. But even this would be a mere stepping stone to a much more dramatic quantum advantage. This is what I refer to as a substantial or significant quantum advantage.
  3. I consider a 1,000X performance advantage to be the minimal starting point, but more of a secondary, backup focus, only worthy of attention if a 1,000,000X performance advantage is out of reach. It should be considered more of a distraction or mere stepping stone rather than the primary objective. This is what I refer to as a minimal quantum advantage.
  4. I imagine that most people would probably agree that a performance advantage of one quadrillion X (one million billion) would be extremely compelling and the kind of advantage that a quantum solution should be expected to deliver given all of the bold promises of quantum computing. This would require the use of 50 or more qubits for the core of the quantum computation (2⁵⁰ ≈ one quadrillion). This is what I refer to as a dramatic quantum advantage.
  5. But for some time-critical niche applications even a 20X to 500X advantage or even a mere 2X to 10X advantage might offer a dramatic and compelling advantage — if a classical solution simply isn’t possible given the severe time constraints and the modest quantum solution is enough to finally enable a solution.
  6. Quantum advantage is generally nonlinear. As the size of the input grows, the advantage also grows, but not necessarily at the same rate as the input grows. Similarly, for smaller input sizes, the quantum advantage will be smaller. There will be some threshold input size where quantum advantage transitions from less than dramatic to very dramatic. Unfortunately, there can be extreme cases where the growth of the advantage eventually slows or the advantage even declines as input size grows, possibly even hitting a wall and failing. Be very cognizant of your input size to get a better handle on the actual quantum advantage.
  7. Beyond any technical metric, what matters most to many is that it just feels like a compelling advantage. It may be more of an emotional matter than a strictly technical matter.
  8. Alas, we’re not even close to achieving any significant quantum advantage, let alone dramatic quantum advantage in the near future. The relevance of this paper is for more than a few years from now. Consider it a preview of the future of quantum computing.

A more detailed summary can be found in the conclusion section at the end of this informal paper.

Topics to be explored in this informal paper:

  1. Some questions that will be addressed by this paper
  2. Performance advantage is not a constant — depends on input size
  3. Where do we draw the line between a 25% advantage and a one quadrillion X advantage?
  4. No standard or even consensus for quantum advantage, let alone for dramatic quantum advantage
  5. Algorithms and applications versus computer hardware alone
  6. A quantum solution versus a classical solution
  7. To each his own
  8. Different users within the same organization may have different expectations
  9. What is quantum advantage?
  10. Other meanings of quantum advantage in other areas of quantum information science
  11. Quantum computational advantage — synonym emphasizing quantum computing
  12. What is dramatic quantum advantage?
  13. Exponential speedup
  14. The ideal quantum advantage is exponential speedup
  15. It just feels like a truly compelling advantage
  16. Quantum advantage tends to be nonlinear
  17. In some cases, any quantum advantage can begin to slow or even decline
  18. Be very cognizant of your input size to get a better handle on the actual quantum advantage
  19. Fly in the ointment: shot count (circuit repetitions)
  20. What is the actual quantum advantage of your quantum algorithm?
  21. Net quantum advantage
  22. Three categories of quantum advantage
  23. Wall clock problems
  24. Two-hour business process optimization problems
  25. Larger wall clock problems
  26. Be sure to use the same input data when comparing quantum solutions to classical solutions
  27. Ranges of actual quantum advantage
  28. Minimal, substantial or significant, and dramatic quantum advantage
  29. Minimal quantum advantage — 1,000X
  30. Substantial or significant quantum advantage — 1,000,000X
  31. Dramatic quantum advantage — one quadrillion X
  32. Fractional quantum advantage
  33. One, two, and three star quantum advantage
  34. Gold, silver, and bronze levels of quantum advantage
  35. Platinum, stainless steel, and brushed aluminum levels of quantum advantage
  36. Why aren’t 10X to 500X considered dramatic quantum advantage?
  37. Why aren’t 1,000X to 50,000X considered dramatic quantum advantage?
  38. 1,000,000X seems to be the best threshold for dramatic quantum advantage
  39. One quadrillion X would be a very clear and very compelling dramatic quantum advantage
  40. Flip a coin whether 100,000X to 750,000X should be considered significant quantum advantage?
  41. But what about supercomputers?
  42. Be sure to divide net quantum advantage by the number of classical processors used by an application
  43. Special case quantum advantage: Generating random numbers
  44. Never underestimate the cleverness of classical programmers
  45. Don’t expect a dramatic quantum advantage from an algorithm offering only a quadratic speedup
  46. Application categories for practical quantum algorithms
  47. Quantum supremacy
  48. What about Google’s claim of quantum supremacy?
  49. Why is it so hard to explain quantum computing?
  50. Standard maximum classical computer configuration for comparisons
  51. Asking questions about quantum advantage should be a key criterion for reviewing algorithms, applications, papers, projects, and products
  52. We’re not there yet anyway
  53. When will we be there?
  54. Will dramatic quantum advantage require quantum error correction and error-free logical qubits?
  55. Is dramatic quantum advantage possible on NISQ quantum computers?
  56. Is any quantum advantage possible on any NISQ quantum computers?
  57. Best path to dramatic quantum advantage — Little data with a big solution space
  58. Summary and conclusions

Some questions that will be addressed by this paper

This paper should give some sort of answer to many questions, such as these:

  1. What is quantum advantage?
  2. What is dramatic quantum advantage?
  3. What is the minimal quantum advantage?
  4. What is a substantial or significant quantum advantage?
  5. What is a fractional quantum advantage?
  6. Is an exponential speedup mandatory?
  7. What is an actual quantum advantage?
  8. What is a net quantum advantage?
  9. Is a 25% advantage a dramatic quantum advantage? 50%? 75%?
  10. Is a 2X advantage a dramatic quantum advantage? 4X? 8X?
  11. Is a 10X advantage a dramatic quantum advantage? 20X? 50X? 75X?
  12. Is a 100X advantage a dramatic quantum advantage? 250X? 500X? 750X?
  13. Is a 1,000X advantage a dramatic quantum advantage? 2,500X? 5,000X? 7,500X?
  14. Is a 10,000X advantage a dramatic quantum advantage? 25,000X? 50,000X? 75,000X?
  15. Is a 100,000X advantage a dramatic quantum advantage? 250,000X? 500,000X? 750,000X?
  16. Is a 1,000,000X advantage a dramatic quantum advantage? 10,000,000X? 100,000,000X? 500,000,000X?
  17. Is a 1,000,000,000X (billion) advantage a dramatic quantum advantage? 10,000,000,000X? 100,000,000,000X? 500,000,000,000X?
  18. Is a trillion X advantage a dramatic quantum advantage?
  19. Is a quadrillion X advantage a dramatic quantum advantage?

Most people would readily admit that an advantage of a factor of a quadrillion or more would clearly be a dramatic quantum advantage.

Performance advantage is not a constant — depends on input size

The performance advantage of a quantum algorithm is not constant and will vary based on the size of the input to the algorithm. But to simplify discussion, this paper presumes the use of some standardized input size so that constant numbers can be discussed. Ideally, the chosen input size should correspond to realistic production-scale input. But since current quantum computers are far too limited to handle production-scale data, significantly smaller input size is presumed.

This paper doesn’t presume any specific input size. Each user and each organization will have their own criteria for input size — and arrive at their own quantum advantage numbers.

The thresholds for the levels of quantum advantage used in this paper are based on a presumption of production-scale input size, but any size can be used. The possibilities include:

  1. Production-scale input size. Of course that can vary greatly, so just pick one. This is not practical at this time, but may be in 5–7 years.
  2. The size of some presumed standardized test case for input which may be significantly smaller than production-scale so that it can actually be run on a real quantum computer or some hardware anticipated within a couple of years.
  3. The maximum input size which can be correctly processed by current quantum computer hardware, which is much too limited to handle production-scale input or even some more-desired input size that may become feasible within a couple of years.
  4. Hypothetical input sizes for proposed future quantum computers. Requires tools for estimating performance of a hypothetical quantum computer, using estimates for qubit count, qubit fidelity, gate error rates, maximum circuit depth, etc.
  5. The maximum input size which can be processed on a classical computer. Otherwise a comparison (quantum advantage) cannot be drawn between the quantum and classical solutions.

Where do we draw the line between a 25% advantage and a one quadrillion X advantage?

The key question is where to draw the line between a 25% advantage and a one quadrillion X advantage.

The short answer is that it all depends, and different organizations having different requirements and needs will make different choices.

But if you insist on a single number, 1,000X and 1,000,000X are the two top candidates.

No standard or even consensus for quantum advantage, let alone for dramatic quantum advantage

There is no standard or even consensus definition or specification for quantum advantage or what metric and what value for that metric should be used to judge quantum advantage, let alone for dramatic quantum advantage.

I give my own ideas, definitions, and metrics in this informal paper, but my views don’t carry much weight.

Algorithms and applications versus computer hardware alone

We’re not looking at the raw performance advantage of raw quantum computer hardware over raw classical computer hardware, but the performance of algorithms running on the hardware — the software.

The hardware enables the performance of the software but ultimately what matters is the performance of the combination of hardware and software — the performance of quantum software on quantum hardware compared to classical software on classical hardware.

A typical quantum application is a hybrid of some number of pure quantum algorithms plus some amount of classical software which invokes the quantum algorithms to perform the compute-intensive portions of the application.

A quantum solution versus a classical solution

The term solution refers to the combination of hardware and software.

A quantum solution is quantum software on quantum hardware, where the quantum software is a hybrid of some number of pure quantum algorithms plus some amount of classical software (running on classical hardware) which invokes the quantum algorithms to perform the compute-intensive portions of the application.

A classical solution is classical software on classical hardware.

So, ultimately quantum advantage is measuring the performance of a quantum solution (quantum software on quantum hardware) relative to a purely classical solution (classical software on classical hardware).

To each his own

Ultimately, each user gets to decide based on their own organizational needs what numeric quantum advantage is sufficient to justify a quantum solution over a classical solution — and what constitutes a dramatic quantum advantage.

Different users within the same organization may have different expectations

Even within one organization, different projects or different business units may have different requirements or needs which result in differing thresholds for what constitutes an acceptable or even a dramatic quantum advantage.

The best to hope for is a range of expectations for each organization.

What is quantum advantage?

The intent of this paper is not to define or explore all aspects of quantum advantage, but to focus on the issue of what threshold of performance advantage should be considered dramatic. For more depth on quantum advantage, consult my paper:

A very detailed definition of quantum advantage is presented in that paper, but a brief definition from that paper is included here:

  • quantum advantage. A quantum computer can perform a particular computation significantly faster than even the best classical computer. And in some cases, a quantum computer can perform computations which no classical computer can perform at all — also referred to as quantum supremacy.

I explored degrees of quantum advantage in another paper:

Other meanings of quantum advantage in other areas of quantum information science

This paper is exclusively focused on quantum computing and the meaning of quantum advantage in quantum computing. But “quantum” covers a much broader set of fields and subfields than solely quantum computing.

Quantum information science (QIS) is a broad umbrella of related fields and subfields, including:

  1. Quantum effects.
  2. Quantum information.
  3. Quantum computing.
  4. Quantum communication.
  5. Quantum networking.
  6. Quantum metrology.
  7. Quantum sensing.

Any of those fields or subfields might have one or more advantages over classical, non-quantum approaches.

For example, people speak of quantum advantage for quantum communication. But that has nothing to do with the concept of quantum advantage as it applies to quantum computing.

I presume that there are aspects of quantum advantage in quantum sensing, and I presume that they don’t relate very directly to quantum advantage as the term is used within quantum computing.

Quantum computational advantage — synonym emphasizing quantum computing

I’ve seen a few journal papers which use the term quantum computational advantage, completely synonymous with quantum advantage, but emphasizing that it is in the context of quantum computing rather than advantage in other areas of quantum information science mentioned above.

Maybe technically we should be using the longer term to avoid ambiguity and potential confusion, but I’m not ready to go that route yet.

But just to be clear, everything in this informal paper is concerned with quantum computational advantage — quantum advantage in the context of quantum computing.

What is dramatic quantum advantage?

In that last paper I introduced this definition for dramatic quantum advantage:

  • Dramatic quantum advantage. Not simply a notable advantage of a quantum algorithm over a classical algorithm, but such a dramatic advantage that there is no question that the quantum-based solution is the only way to go. A 50% improvement or even a 2–4 times improvement might normally be advantageous when comparing two classical solutions, but a quantum solution might require a 10X, 100X, 1,000X, or even a factor of a million or a billion or more over a comparable classical solution to justify the intellectual difficulty and cost of going quantum.

I’ll explore and even revise that definition in this paper, in particular the performance threshold to be used for it.

Exponential speedup

Exponential speedup is the holy grail of quantum computing. That’s what every quantum solution should be shooting for. A quantum solution shouldn’t just be somewhat faster than a classical solution, but exponentially faster. Exponential speedup is the only real reason to pursue quantum computing, at least for computationally-intensive applications.

Delivering an exponential speedup for a nontrivial amount of input data would certainly qualify as a dramatic quantum advantage.

This paper won’t delve deeply into computational complexity, but that’s what exponential speedup is all about. It’s all about attempting to model and characterize how long an algorithm will run based on the size of its input. For more on computational complexity, consult this paper:

Put simply, a quantum algorithm offers polynomial computational complexity (better) while a comparable classical algorithm offers exponential computational complexity (worse). For input of size n, the classical algorithm might take O(2^n) time while the quantum algorithm might take only O(n²) time. See the paper above for details on computational complexity and so-called Big-O notation. Basically, O is short for order of, meaning approximately or roughly.

Polynomial computational complexity loosely means that n is the base and a constant is the exponent, while exponential complexity means the opposite, that n is the exponent and the base is a constant.

For example, for n = 40, the classical algorithm might take 2⁴⁰ ≈ one trillion seconds (over 30,000 years), while the quantum algorithm might take only 40² = 1,600 seconds — that’s an exponential speedup.

And the quantum advantage only grows as the input size increases.

A modest increase of n from 40 to 50 means that the classical algorithm might take 2⁵⁰ ≈ one quadrillion seconds — roughly 32 million years — while the quantum algorithm might take only 50² = 2,500 seconds. That’s a huge exponential speedup.
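
To make this arithmetic concrete, here is a minimal Python sketch, assuming the same hypothetical O(2^n) classical versus O(n²) quantum complexities and treating each step as one second purely for illustration:

    # Illustrative arithmetic only: a hypothetical O(2^n) classical runtime
    # versus a hypothetical O(n^2) quantum runtime, with steps as seconds.

    SECONDS_PER_YEAR = 365 * 24 * 3600

    def classical_seconds(n):
        return 2 ** n  # exponential: n is the exponent

    def quantum_seconds(n):
        return n ** 2  # polynomial: n is the base

    for n in (40, 50):
        c, q = classical_seconds(n), quantum_seconds(n)
        print(f"n = {n}: classical {c:,} s (~{c / SECONDS_PER_YEAR:,.0f} years), "
              f"quantum {q:,} s, speedup {c / q:,.0f}X")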

The ideal quantum advantage is exponential speedup

Just to reaffirm a point already made, that exponential speedup is the ideal goal for quantum computing — it is the ideal quantum advantage.

It just feels like a truly compelling advantage

Regardless of the specific technical metric used to judge a dramatic quantum advantage, the real question is whether technical staff, users, managers, executives, and customers alike simply feel that the quantum solution is truly compelling — it just feels like a truly compelling advantage.

Quantum advantage tends to be nonlinear

Although the introduction couched the problem as a discrete numerical advantage, the nature of quantum advantage is that it is generally nonlinear — the advantage will actually increase as the size of the input increases. This is because a classical solution would typically have an exponential computational complexity while a quantum solution would typically have a polynomial computational complexity, which is much more efficient.

For example, if a classical solution has a computational complexity with a Big-O of O(2^n) and the comparable quantum solution has a computational complexity with a Big-O of O(n²), the actual quantum advantage for various input sizes would be:

  1. For n = 10, classical 2¹⁰ = 1,024, quantum 10² = 100: a quantum advantage of roughly 10X.
  2. For n = 15, classical 2¹⁵ = 32,768, quantum 15² = 225: a quantum advantage of roughly 146X.
  3. For n = 20, classical 2²⁰ ≈ 1,000,000, quantum 20² = 400: a quantum advantage of roughly 2,500X.

For more detail on computational complexity and Big-O notation, see the “Exponential speedup” section and this paper:

The actual quantum advantage grows rapidly as input size grows — in fact the advantage grows faster than the input size grows.

For relatively small inputs, the quantum advantage is rather modest.

But for even modestly larger inputs, the quantum advantage becomes very dramatic, very quickly.

For any particular algorithm there are three thresholds of interest:

  1. Start. The input size at which the quantum advantage starts to look attractive.
  2. Nominal. Or Typical. Or Target. The typical input size for the application. Hopefully it is greater than the start threshold, and above the dramatic threshold as well, although there’s no guarantee of that.
  3. Dramatic. The input size at which quantum advantage becomes truly dramatic.

Using the previous example, maybe start is 10, maybe dramatic is 18, and maybe nominal is 25. In that case, happily, the nominal input size is greater than the dramatic threshold.

In some cases, any quantum advantage can begin to slow or even decline

Because of limitations of the hardware, some quantum algorithms can actually see the rate of increase of their advantage slow as input size grows, or even see the advantage begin to decline, or hit a wall where the algorithm begins to fail.

Coherence time and maximum circuit depth, as well as accumulation of gate errors, can cause such anomalies.

These and other factors can create the need to increase shot count (circuit repetitions) to compensate, putting further downward pressure on any remaining quantum advantage.

So, users could see an increasing quantum advantage for a while, but then start to see that advantage begin to evaporate.

Algorithm designers and application developers need to pay careful attention to limitations, whether of the raw hardware or of the use of resources by the algorithm.

Be very cognizant of your input size to get a better handle on the actual quantum advantage

If you really want to get a good handle on quantum advantage, you need to be cognizant of the likely input size — small or large, and how small or how large.

Using the example from the preceding section, achieving quantum advantages of 1,000X and 1,000,000X would require input sizes of:

  1. For n = 19, classical 2¹⁹ = 524,288, quantum 19² = 361: a quantum advantage of 1,452, roughly 1,000.
  2. For n = 30, classical 2³⁰ = 1,073,741,824, quantum 30² = 900: a quantum advantage of 1,193,046, roughly 1,000,000.
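
A minimal Python sketch, again assuming the hypothetical O(2^n) versus O(n²) complexities, can find these thresholds mechanically:

    # Find the smallest input size n at which a hypothetical O(2^n) classical
    # algorithm versus an O(n^2) quantum algorithm reaches a target advantage.

    def advantage(n):
        return 2 ** n / n ** 2

    def threshold_input_size(target):
        n = 1
        while advantage(n) < target:
            n += 1
        return n

    for target in (1_000, 1_000_000):
        n = threshold_input_size(target)
        print(f"{target:,}X first reached at n = {n} "
              f"(actual advantage {advantage(n):,.0f}X)")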

For more on getting a better handle on actual quantum advantage of your algorithm and application, consult this paper:

Fly in the ointment: shot count (circuit repetitions)

So far, the math has looked fairly easy, but there is another factor to be considered: shot count, or circuit repetitions. The math so far presumed that the quantum circuit would execute once, but in reality the circuit will have to be executed dozens, hundreds, thousands, or even millions of times before a statistically correct result can be achieved.

There are two reasons for circuit repetitions:

  1. Low qubit fidelity. High error rate for gates and measurements.
  2. Inherently probabilistic nature of quantum computation.

With current error-prone NISQ quantum computers, the former can be substantial and even overwhelm any quantum advantage in some cases.

But even with advances in hardware which significantly improve qubit fidelity, there will still be significant errors for the foreseeable future.

And even when qubit error rates get fairly low, some number of shots will still be required to account for the inherent probabilistic nature of the particular quantum computation.

Worse, the shot count itself may grow nonlinearly as the size of the input grows.

As one example, if 100 shots were required for n = 19 and 1,000 shots were required for n = 30:

  1. For n = 19, the quantum advantage of 1,452 becomes 14.52 (divided by 100). Not dramatic at all.
  2. For n = 30, the quantum advantage of 1,193,046 becomes 1,193 (divided by 1,000). Still reasonably dramatic, but nowhere near as dramatic as 1,000,000.
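
Here is a quick Python sketch of that discounting, where the shot counts are merely the assumed figures from this example, not measurements:

    # Net quantum advantage: discount the raw advantage by the shot count
    # (circuit repetitions) needed for a statistically meaningful result.

    def raw_advantage(n):
        return 2 ** n / n ** 2  # hypothetical O(2^n) vs. O(n^2), as above

    def net_advantage(n, shots):
        return raw_advantage(n) / shots  # every shot reruns the circuit

    for n, shots in ((19, 100), (30, 1_000)):
        print(f"n = {n}: raw {raw_advantage(n):,.0f}X, {shots:,} shots, "
              f"net {net_advantage(n, shots):,.1f}X")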

For a deeper discussion on shots and circuit repetitions, consult this paper:

What is the actual quantum advantage of your quantum algorithm?

And there are other factors that modify the actual quantum advantage. For a more in-depth discussion of all of the factors, consult this informal paper:

That paper also introduces two variations on quantum advantage:

  1. Theoretical quantum advantage. A mathematical comparison of the algorithmic complexity of a quantum algorithm compared to the algorithmic complexity of a comparable classical algorithm. Typically, this will be a quantum algorithm with polynomial complexity or less compared to a classical algorithm with exponential complexity, so the quantum algorithm delivers an exponential speedup over the classical algorithm.
  2. Actual quantum advantage. The actual numerical advantage of a quantum algorithm over a comparable classical algorithm for specific input data, especially of a small to modest size, even if in theory a quantum algorithm would deliver an exponential speedup for larger input data.

It is so easy to get blown away by the promises of theory, but it is the actual quantum advantage which really matters to real people trying to solve real problems.

Net quantum advantage

Net quantum advantage is really just a synonym for actual quantum advantage. It takes into account all factors, including shot count (circuit repetitions) and evaluation for a specific input size or relatively narrow range of input sizes which reflects how the algorithm or application will likely be used in realistic real-world applications, especially production-scale applications — not mere contrived lab experiments.

In short, net quantum advantage (or actual quantum advantage) is what a real user would actually experience, not some theoretical advantage ungrounded in reality.

Three categories of quantum advantage

For the purposes of discussion in this particular paper, I’ll divide quantum advantage into three categories:

  1. Wall clock problems. Time is of the essence; the clock is ticking. Minutes and hours. Even a modest quantum advantage of 2X to 10X could make a great difference.
  2. Larger wall clock problems. Calendar time is significant. Days, weeks, months. A somewhat more dramatic quantum advantage of 1,000X or even a million X is needed. But for some niche applications, particularly those which have pushed classical computing to its limits, even a 20X to 100X advantage could deliver great value.
  3. Time per se is not an issue. Classical solutions are so impractical that even many months are not sufficient. Sure, maybe a classical solution can complete in many months or a year, but what matters is a large factor of reduction in time and resources. A truly dramatic quantum advantage is needed: a factor of millions, billions, trillions, or even more.

Wall clock problems

I define a wall clock problem as a problem which must be solved in some defined time interval, typically measured in hours and possibly minutes. A solution which takes longer than that designated interval is effectively useless, since the organization has a mission-critical need which must be met in that time interval or the organization is considered to have failed to fulfill its mission. Taking much longer than that designated interval will incur dramatic pain, anxiety, cost, lost business, unhappy customers, lost opportunity, or some other dramatic and unacceptable outcome.

The point here is that if a quantum solution can complete within that designated time interval while a classical solution cannot, then there is a clear quantum advantage.

In fact, I would go so far as to suggest that there is quantum supremacy since no classical solution can do the job acceptably. Granted, the classical solution might take only moderately longer and the quantum solution might have only a seemingly minor relative advantage, but the mere fact that it is a wall clock problem which cannot be acceptably solved with a classical solution indicates quantum supremacy. The actual quantum advantage might be relatively small, but enough to make all of the difference in the world.

That said, such narrow constraints might be relatively atypical — generally a few hours won’t matter, but the point of wall clock problems is that they are the cases where even relatively small amounts of time can matter a lot.

Once we have separated out wall clock problems, we can focus on the remaining problems where actual quantum advantage is expected to be very large, with even a factor of a thousand or a million being towards the low end of expectations, and a factor of a billion, a trillion, or even more being what will be targeted with quantum solutions.

Two-hour business process optimization problems

Usually we focus attention on using quantum computers to solve really big problems that might take years, decades, or even centuries on a classical computer or even a large cluster of classical computers, but there are some relatively small problems where quantum computers could still deliver a significant or even dramatic quantum advantage. One category of such problems is what I call two-hour optimization problems.

Imagine a business process optimization problem, such as vehicle routing, which has a 2-hour window to run every day to plan out the business needs of that day, but the best classical solution takes 4 hours, meaning it doesn’t fit in the two-hour window and therefore is flat out unacceptable; a mere 2X advantage for a quantum solution is then a big win. Or if the best classical solution took 3 hours, even a mere 50% quantum advantage would be a big win.

But these are extreme examples. After all, there are always a thousand tricks to get a 50% to 2X improvement for any classical solution, especially with cheap commodity clusters of machines and a smarter, more elite technical team. That said, it could well be that the three or four-hour classical solution is in fact the best optimization that the best classical team could muster, so I suspect that there are indeed plenty of such two-hour optimization problems out there.

A bigger benefit would come from a more sophisticated optimization algorithm which provides more functional features but might take a full eight hours, a full day or two, or even a week or month to run as a classical solution, while a quantum solution could get the job done well within that required 2-hour window. That might mean a quantum advantage of 10X to 300X.

In any case, an actual quantum advantage far short of 1,000,000X or even short of 1,000X might be sufficient.

Larger wall clock problems

Nominally, wall clock problems are measured in hours, but the same concept can be applied to problems where the time interval is measured in days, weeks, or even months. For these larger wall clock problems the functional definition is the same — the organization has a mission-critical need and a maximum allowable time interval.

Some possible time intervals:

  1. A day or two. Might be meaningful for an application which is run on the weekend, part of a weekly process.
  2. A week or two. Might be meaningful for some design process which occurs over a small number of months.
  3. A month or two. Might be meaningful for some process which occurs over many months or even a year or more.

Many design applications may take a lengthy amount of time, but a quantum solution (or the classical equivalent) which can complete in 10–20% or 1–2% of the total time may be completely acceptable.

These might be problems where a classical solution takes great amounts of time — days, weeks, months, or even theoretically years if you actually tried to implement it — but a quantum solution is still acceptable even if it still takes hours, days, weeks, or even a month. But a quantum solution in seconds or a few minutes won’t necessarily provide much of an advantage over a solution in a few hours or even a few days.

The point is that the difference in wall clock time between a quantum and classical solution must be very meaningful to the organization. A mere advantage with no tangible business value is no real advantage to the organization.

Be sure to use the same input data when comparing quantum solutions to classical solutions

Any type of comparison of different solutions, including comparison of quantum and classical solutions, should be an apples-to-apples comparison, not an apples-to-oranges comparison.

In particular, both solutions should be built from the same requirements specification, should be based on the same assumptions about the problem, have the same limits, achieve the same accuracy in results, and… use the exact same input data.

To be clear, we’re not comparing raw quantum computer hardware to raw classical computer hardware, but comparing quantum algorithms and applications to classical algorithms and applications.

Ranges of actual quantum advantage

Just for reference, if somebody suggests that a quantum algorithm or quantum application provides a quantum advantage, these ranges should be used to gauge how significant or dramatic a quantum advantage is being provided.

To be clear, these are actual quantum advantages, including shot counts (circuit repetitions) and for specific input sizes:

  1. Under 25%
  2. 25%
  3. 50%
  4. 75%
  5. 2X (100%)
  6. 4X
  7. 5X
  8. 8X
  9. 10X
  10. 20X
  11. 50X
  12. 75X
  13. 100X
  14. 250X
  15. 500X
  16. 750X
  17. 1,000X
  18. 2,500X
  19. 5,000X
  20. 7,500X
  21. 10,000X
  22. 25,000X
  23. 50,000X
  24. 75,000X
  25. 100,000X
  26. 250,000X
  27. 500,000X
  28. 750,000X
  29. 1,000,000X
  30. 10,000,000X
  31. 100,000,000X
  32. 500,000,000X
  33. Billion X
  34. 10 Billion X
  35. 100 Billion X
  36. 500 Billion X
  37. Trillion X
  38. 10 Trillion X
  39. 100 Trillion X
  40. 500 Trillion X
  41. Quadrillion X

The point here is not to provide an exhaustive list of quantum advantage milestones, but to present the possibilities as a spectrum and encourage people to think about the problem a little more concretely and specifically than purely abstractly.

Minimal, substantial or significant, and dramatic quantum advantage

In the broadest terms, I suggest three levels of quantum advantage:

  1. Minimal quantum advantage. A 1,000X performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
  2. Substantial or significant quantum advantage. A 1,000,000X performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
  3. Dramatic quantum advantage. A one quadrillion X (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.
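
As a rough sketch, these thresholds could be coded as a trivial classifier. The level names and cutoffs are merely my proposals in this paper, not any standard:

    # Classify a net quantum advantage ratio into the three proposed levels.

    def quantum_advantage_level(advantage):
        if advantage >= 1e15:  # one quadrillion X
            return "dramatic quantum advantage"
        if advantage >= 1e6:   # one million X
            return "substantial or significant quantum advantage"
        if advantage >= 1e3:   # one thousand X
            return "minimal quantum advantage"
        return "a stepping stone at best"

    for ratio in (100, 1_000, 500_000, 1_000_000, 1e12, 1e15):
        print(f"{ratio:,.0f}X -> {quantum_advantage_level(ratio)}")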

Minimal quantum advantage — 1,000X

I suggest that a 1,000X performance advantage over classical solutions would constitute a minimal quantum advantage.

2X, 10X, and 100X (among others) are reasonable stepping stones to minimal quantum advantage.

Substantial or significant quantum advantage — 1,000,000X

I suggest that a 1,000,000X performance advantage over classical solutions would constitute a substantial or significant quantum advantage.

20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones to substantial or significant quantum advantage.

Dramatic quantum advantage — one quadrillion X

I suggest that a one quadrillion X (one million billion times) performance advantage over classical solutions would constitute a dramatic quantum advantage.

100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones to dramatic quantum advantage.

Fractional quantum advantage

Achieving minimal, substantial or significant, or dramatic quantum advantage will be incredible feats, not easily achieved. As such, it may be worth acknowledging partial achievement of those three major milestones, which I suggest could be called fractional quantum advantage, measured as a percentage or decimal fraction of the milestone metric.

For example:

  1. 1% minimal quantum advantage would be 0.01 times 1,000X, or 10X.
  2. 10% minimal quantum advantage would be 0.10 times 1,000X, or 100X.
  3. 50% minimal quantum advantage would be 0.50 times 1,000X, or 500X.
  4. 5% substantial or significant quantum advantage would be 0.05 times 1,000,000X, or 50,000X.
  5. 25% substantial or significant quantum advantage would be 0.25 times 1,000,000X, or 250,000X.
  6. 1% dramatic quantum advantage would be 0.01 times one quadrillion X, or ten trillion X.
  7. 0.1% dramatic quantum advantage would be 0.001 times one quadrillion X, or one trillion X.
  8. 0.0001% dramatic quantum advantage would be 0.000001 times one quadrillion X, or one billion X.
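
The arithmetic is simple multiplication, as in this tiny Python sketch which reproduces a few of the entries above, using the milestone values proposed in this paper:

    # Fractional quantum advantage: a percentage of a milestone threshold.

    MILESTONES = {
        "minimal": 1e3,       # 1,000X
        "substantial": 1e6,   # 1,000,000X
        "dramatic": 1e15,     # one quadrillion X
    }

    def fractional_advantage(level, percent):
        return (percent / 100.0) * MILESTONES[level]

    print(fractional_advantage("minimal", 1))       # 10X
    print(fractional_advantage("substantial", 25))  # 250,000X
    print(fractional_advantage("dramatic", 0.1))    # one trillion X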

One, two, and three star quantum advantage

To avoid ambiguity or confusion of terms, the three major levels of quantum advantage could be labeled with star ratings:

  1. One star. Minimal quantum advantage — 1,000X.
  2. Two stars. Substantial or significant quantum advantage — 1,000,000X.
  3. Three stars. Dramatic quantum advantage — one quadrillion X.

Gold, silver, and bronze levels of quantum advantage

An alternative to star levels of quantum advantage would be contest prize medals:

  1. Gold. Three stars. Dramatic quantum advantage — one quadrillion X.
  2. Silver. Two stars. Substantial or significant quantum advantage — 1,000,000X.
  3. Bronze. One star. Minimal quantum advantage — 1,000X.

Platinum, stainless steel, and brushed aluminum levels of quantum advantage

A disadvantage of the traditional gold, silver, bronze contest prize medal model is that the performance differences between the top three winners can be very modest, so it would be more helpful to have metals which emphasize wider distinctions between the levels, such as:

  1. Platinum. Gold. Three stars. Dramatic quantum advantage — one quadrillion X.
  2. Stainless steel. Silver. Two stars. Substantial or significant quantum advantage — 1,000,000X.
  3. Brushed aluminum. Bronze. One star. Minimal quantum advantage — 1,000X.

Stainless steel is easily recognized as reasonably substantial and of solid practical utility, but nowhere close to the value of platinum.

And brushed aluminum is easily recognized as minimal but of workable utility for less-demanding applications, nowhere near as substantial as stainless steel or as extremely valuable as platinum.

Why aren’t 10X to 500X considered dramatic quantum advantage?

To put it simply, it’s too easy and commodity classical hardware is too cheap: almost any number of classical computers can readily be deployed for a given computational task.

Deploying a cluster of 10, 20, 50, 100, 250, or even 500 commodity servers is a slam-dunk no-brainer for even average IT technical teams.

So if your quantum solution only offers a 10X to 500X performance advantage, it’s easy enough to readily deploy a classical solution with the same performance.

Why aren’t 1,000X to 50,000X considered dramatic quantum advantage?

Although not as trivial a task, thousands of classical commodity servers have been used to address significant computational tasks, so it doesn’t feel appropriate to consider a performance advantage of a mere 1,000X to be a dramatic quantum advantage.

Yes, a performance advantage of 1,000X could be considered to be a quantum advantage, just not a dramatic quantum advantage.

That said, for particular situations, 1,000X to 50,000X may in fact, in the eyes of the users, be viewed as a dramatic quantum advantage. But that would be more of an exception than the general rule.

1,000,000X seems to be the best threshold for dramatic quantum advantage

Although thousands of classical commodity servers can be used to implement a significant computational task, a million servers seems well beyond our reach.

Maybe in theory, given enough commitment and enough resources, a multi-million server computing cluster could be deployed, but such a system sure seems beyond our reach at this time, although who can say looking out a decade or more.

So, at least for now, a performance advantage of 1,000,000X seems to be the most plausible threshold to use for dramatic quantum advantage.

One quadrillion X would be a very clear and very compelling dramatic quantum advantage

As impressive as a performance advantage of 1,000,000X might be, it is still only a stepping stone to the real promise of quantum computing. I imagine that most people would probably agree that a performance advantage of one quadrillion X (one million billion) would be extremely compelling and the kind of dramatic quantum advantage that a quantum solution should be expected to deliver compared to a classical solution given all of the bold promises of quantum computing.

This would require the use of 50 or more qubits for the core of the quantum computation (2⁵⁰ ≈ one quadrillion). And quantum computers with significantly greater capacity are expected within the next few years.

There’s absolutely no way that even a massive distributed cluster of classical computers could compete with a quantum computer delivering a performance advantage of one quadrillion X.

Although one normally wouldn’t construct classical computing clusters beyond a few tens of thousands of servers, one can run the algorithms for an extended period, running into many weeks and even months, but running servers for years, decades, or centuries is clearly beyond the realm of practicality.

Millions or billions of servers are impractical, although there are some technical possibilities, at least in theory.

But a quadrillion servers? That’s just flat out of the question.

So, it’s very easy to conclude that a performance advantage of one quadrillion would clearly be a compelling and very dramatic quantum advantage.

Flip a coin whether 100,000X to 750,000X should be considered significant quantum advantage?

A very motivated organization (e.g., Google, Apple, Microsoft) could deploy a massive computing cluster with 10,000, 20,000, 50,000, or even 75,000 classical commodity servers for a high-value computing task, but a cluster with 100,000 to 750,000 servers seems decidedly less-practical today.

Given an extreme motivation, a computing cluster with 100,000 to 750,000 servers is probably possible, at least theoretically, although rather unlikely today.

In short, 100,000 to 750,000 servers is a gray zone, not worth considering at this time.

Sure, if somebody did implement a solution with 100,000 to 750,000 servers we shouldn’t be too surprised, but at most moderately surprised.

So, it’s a coin flip or fielder’s choice whether to consider a performance advantage of 100,000X to 750,000X as a substantial or significant quantum advantage. Some will, some won’t. Or for some applications it may be, but for other applications it may not be.

I personally wouldn’t complain too vociferously if someone did consider 100,000X to 750,000X to be a substantial or significant quantum advantage, and I wouldn’t complain too loudly if they didn’t, but I would prefer that they didn’t.

But what about supercomputers?

The supercomputers of today utilize thousands of high-end microprocessors, including so-called GPUs. But the option of a user combining multiple supercomputers, let alone dozens, hundreds, or thousands of supercomputers, is simply not available, at least today. As such, if a compute-intensive application is only moderately beyond the capacity of the largest supercomputer, even a moderate quantum advantage of 2X to 10X can be very compelling — and constitute a dramatic quantum advantage, at least for niche applications, which are most of what supercomputers are used for anyway.

On the flip side, classical computer technology continues to progress, with newer supercomputers leapfrogging existing supercomputers. As such, a quantum advantage of 2X to 10X today could well completely evaporate within a few years as newer supercomputers deliver 4X to 20X what is available today.

Be sure to divide net quantum advantage by the number of classical processors used by an application

Whether dealing with a classical supercomputer or a cluster of classical commodity processors, the proper performance advantage metric is to take the performance advantage of a single quantum computer over a single classical computer and divide it by the number of classical processors used in the classical solution.

The reason for this division is that we’re trying to compare solution to solution — compare the quantum solution to the classical solution.

Start with the net quantum advantage, the raw quantum advantage divided by the shot count needed to get a statistically meaningful result.

For example, if a quantum computer has a net quantum advantage of 1,000X compared to a single classical processor, and the classical solution uses 10 classical processors, then the final quantum advantage would be only 1,000 divided by 10 or 100X.

Or, if the net quantum advantage was one billion, and 10,000 classical processors are used, then the final quantum advantage would be only one billion divided by 10,000 or 100,000X.

Or, if the net quantum advantage was one million, and 1,000 classical processors are used, then the final quantum advantage would be only one million divided by 1,000 or 1,000X.

This is a great example of why it makes sense to use one quadrillion X as the threshold for dramatic quantum advantage, 1,000,000X as the threshold for substantial or significant quantum advantage, and 1,000X as the threshold for minimal quantum advantage.

To reiterate, quantum advantage should be a quantum solution to classical solution comparison — at the application level, not a quantum processor to classical processor comparison — at the algorithm and processor level.
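
A minimal Python sketch of this solution-to-solution arithmetic, using the hypothetical figures from the examples above:

    # Solution-level quantum advantage: divide the net quantum advantage
    # (already discounted by shot count) by the number of classical
    # processors deployed in the classical solution.

    def solution_advantage(net_quantum_advantage, classical_processors):
        return net_quantum_advantage / classical_processors

    print(solution_advantage(1_000, 10))              # 100X
    print(solution_advantage(1_000_000_000, 10_000))  # 100,000X
    print(solution_advantage(1_000_000, 1_000))       # 1,000X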

Special case quantum advantage: Generating random numbers

There is one very narrow niche special case where quantum computers inherently have an advantage over classical computers, namely generating true random numbers, not the pseudo-random numbers which can be generated by a classical computer. It’s truly trivial to generate true random numbers using a quantum computer; in fact, all existing NISQ quantum computers can generate true random numbers.

For details, consult this paper:

A Turing machine is incapable of generating true random numbers, simply because a Turing machine can only compute mathematically computable functions, and no computable function can produce true randomness.

Instead, people get by with pseudo-random number generators which are computable functions, or use specialized external hardware to gather entropy from the environment. That paper lists some of the special hardware devices.

But the point is that a quantum computer can directly generate true random bits without any other special hardware or external hardware or services, and with virtually no effort.
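
As a sketch of just how trivial this is, assuming a working Qiskit installation with the Aer simulator (note that a simulator can only produce pseudo-random bits, so true randomness requires running on real quantum hardware):

    # Generate random bits: put each qubit into an equal superposition with
    # a Hadamard gate, then measure. On real quantum hardware the outcomes
    # are truly random; on this simulator they are merely pseudo-random.
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    num_bits = 8
    qc = QuantumCircuit(num_bits, num_bits)
    qc.h(range(num_bits))  # 50/50 superposition for each qubit
    qc.measure(range(num_bits), range(num_bits))

    result = AerSimulator().run(qc, shots=1, memory=True).result()
    print(result.get_memory()[0])  # e.g. '01101001'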

Never underestimate the cleverness of classical programmers

Direct comparisons with classical solutions can be misleading. It’s one thing if the classical solution is truly the best possible classical solution, but many classical solutions are far from optimal or best. It would be misleading to compare a great quantum solution to a mediocre classical solution. On the other hand, it would be very notable if a mediocre quantum solution greatly outperforms a great classical solution.

A team of elite classical software developers could take a mediocre classical solution and increase its performance by anywhere from 25% to 20X — or more.

And that’s just with classical programming tricks.

Use of GPUs, large distributed clusters, FPGAs, etc. could wring out improvements of 10X to 50X — or more — for critical compute-intensive tasks.

Keep these possibilities in mind before blindly assuming that a current classical implementation is necessarily inferior to a quantum solution.

Don’t expect a dramatic quantum advantage from an algorithm offering only a quadratic speedup

Whether a quantum algorithm which delivers only a quadratic speedup, such as the Grover search algorithm, delivers a dramatic quantum advantage can be problematic — some such algorithms may deliver a dramatic quantum advantage, while some may not.

Quadratic speedup is far less desirable than exponential speedup.

In many cases, classical algorithm designers have a host of clever tricks at their disposal that can negate any benefit from a quadratic speedup.

Generally, quadratic speedup should not be expected to deliver a significant quantum advantage, let alone a dramatic quantum advantage.

That said, there may indeed be special cases or special situations where a quadratic speedup can in fact deliver a dramatic improvement over a classical solution.

Application categories for practical quantum algorithms

A range of categories of applications are potentially appropriate for quantum computing. This informal paper of mine lists many of them, including lists from specific vendors and companies considering use of quantum computers:

Quantum supremacy

Although the focus here is quantum advantage, here is the brief definition of the related term quantum supremacy, from the paper cited below:

  • quantum supremacy. A quantum computer is able to compute a solution to a particular problem when no classical computer is able to do so at all or in any reasonable amount of time. Does not imply either an advantage or supremacy for any other problems beyond the particular problem or niche of closely related problems. Alternatively, quantum advantage across a wide range of applications and computations. Or, possibly simply a synonym for quantum advantage.

The bottom line is that quantum advantage implies some advantage and dramatic quantum advantage implies a very significant advantage, while quantum supremacy implies that no solution is even possible on a classical computer, at least not in any reasonable amount of time.

Generally, years, decades, centuries, millennia, and millions or billions of years are amounts of time which are considered unreasonable for computations for real organizations attempting to solve real problems in a timely manner.

Months are a gray area. For some applications, a couple of months might be considered reasonable. But generally, taking more than a few months will be considered unreasonable, and hence a quantum solution that can complete a task in seconds, minutes, or even a few hours will be considered to have achieved quantum supremacy if the best classical solution takes more than a very few months.

I might want to coin the term absolute quantum supremacy for cases where the best classical solution would take more than, say, a few decades.

For more depth on quantum supremacy, consult my paper:

What about Google’s claim of quantum supremacy?

Google has claimed to have achieved quantum supremacy, and it is indeed true, but it is for a very narrow niche case which is more of a theoretical curiosity of computer science, unrelated to any of the oft-touted applications of quantum computing mentioned above.

So, Google’s achievement is a milestone for quantum computer science, but not for practical applications. This paper is focused on practical applications.

For more on my interpretation of Google’s experiment, see my own informal paper:

Why is it so hard to explain quantum computing?

Users don’t really need to know all of the details of what is under the hood of a quantum computer or how a qubit really works. The real issue is that until we have examples of quantum computers, quantum algorithms, and quantum applications which have actually achieved dramatic quantum advantage, there is nothing for people to focus on to start tracing backwards to how it was achieved.

Right now, not having credible solutions to focus on, all we can do is look at the technical details we do know, and speculate as to what can be done with them. That’s not a very productive use of any of our time, energy, attention, or focus.

In fact, all that people will really have to care about is that quantum algorithms use quantum parallelism to achieve dramatic quantum advantage.

Standard maximum classical computer configuration for comparisons

Classical computers come in a wide range of configurations, everything from a modest laptop to a large distributed cluster of high-end servers, or even a classical supercomputer. For purposes of judging quantum advantage, the classical computer for comparisons should be sized based on the organizational budget. For any large organization, this section describes a maximum configuration of classical computing hardware, and a maximum running time as well.

The goal is not to suggest the absolute maximum or theoretical maximum, but to suggest a size that should be within reach of a typical large organization.

Of course every organization will have its own requirements and resource limits for hardware and running time, but this section suggests a reasonable standard configuration as a baseline for considering quantum advantage.

Some of the distributed cluster sizes to be considered:

  1. 1,000 servers. The low end.
  2. A few thousand servers.
  3. 10,000 servers. Seems like a sweet spot for a good standard size to use.
  4. 25,000 servers.
  5. 50,000 servers.
  6. 75,000 servers. A high-end maximum. The extreme. I’m unaware of anyone using more servers for a single application.

I’ve settled on 10,000 servers as a suggested standard size.

As far as maximum running time, the times I’ve considered:

  1. Two hours.
  2. Eight hours. A single, full shift.
  3. One day.
  4. Two days.
  5. Three days.
  6. One week.
  7. Two weeks.
  8. One month.
  9. 45 days. Seems like the sweet spot for a good standard limit to use.
  10. Two months.
  11. Three months.
  12. Six months.
  13. One year. Not likely, but possible.

I’ve settled on 45 days of running time as a suggested maximum running time.

So, my suggested standard maximum classical computer configuration for comparisons is:

  • A distributed cluster of 10,000 reasonably high-end classical servers.
  • Running for a maximum of 45 days.

Now, maybe it’s not fair to compare a single quantum computer to a cluster of thousands of classical computers, but the point is that quantum computers are being pitched and promised as thousands, millions, or even billions or more times better than classical computers.

Maybe someday quantum computers will become a commodity chip and we can compare equal numbers of quantum computers and classical computers, but that day is not soon.

Maybe the proper long-term comparison is cost of ownership or total cost of ownership. Right now, a single quantum computer is very expensive while a single classical processor is very cheap, so comparing a single expensive quantum computer to a comparably priced collection of commodity classical processors seems like a fair comparison.
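To make this baseline concrete, here’s a minimal back-of-envelope sketch of the total classical compute budget implied by the suggested standard configuration. The per-server throughput is purely an assumed, illustrative figure (roughly one trillion operations per second for a reasonably high-end multi-core server); real throughput will vary widely by workload.

```python
# Back-of-envelope classical compute budget for the suggested standard
# maximum configuration: 10,000 servers running for 45 days.
# The per-server throughput is an assumption for illustration only.

SERVERS = 10_000                  # suggested standard cluster size
DAYS = 45                         # suggested maximum running time
OPS_PER_SEC_PER_SERVER = 1e12     # assumed ~1 trillion ops/sec per server

seconds = DAYS * 24 * 60 * 60
total_ops = SERVERS * OPS_PER_SEC_PER_SERVER * seconds

print(f"Total classical operations available: {total_ops:.2e}")
# Prints roughly 3.89e+22 operations
```

Under these assumptions, a quantum solution claiming dramatic advantage is really claiming to outperform whatever can be computed within that roughly 4×10²² operation budget, not a single laptop.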

Asking questions about quantum advantage should be a key criterion for reviewing algorithms, applications, papers, projects, and products

How should people, especially managers and technical team leaders, review quantum computing algorithms, applications, papers, projects, and products? In short, ask lots of questions about quantum advantage. In particular:

  1. Have they achieved quantum advantage at all?
  2. What exactly is the quantum advantage that has been achieved?
  3. Have they examined not just the abstract theoretical quantum advantage, but also the actual net quantum advantage based on actual expected data input size?
  4. Have they clearly identified what input sizes were tested and quantified the actual net quantum advantage for those tested input sizes?
  5. Have they really only achieved a fractional quantum advantage, such as a modest advantage over classical solutions, but nothing particularly impressive or truly noteworthy?
  6. Have they not merely claimed, but actually tested to confirm the claimed quantum advantage?
  7. Have they actually tested against a classical solution?
  8. Did the classical solution have sufficient hardware resources to fully exploit the extensive capabilities of classical computing?
  9. For what input size range have they achieved and tested quantum advantage?
  10. Does that input size represent a true, production-scale application, or is it really only for demonstration purposes and not ready for production-scale use?
  11. Has the classical solution really been optimized or may it be suboptimal so that the comparison with an optimal quantum solution is not a fair comparison?
  12. Exactly how much attention was given, and how elite a technical staff was assigned, to optimizing the classical solution?
  13. And have they really achieved dramatic quantum advantage as discussed in this paper?
  14. How scalable is their algorithm and its net quantum advantage over a broad range of input sizes?
  15. What shot count or circuit repetitions are needed, and how do they and the quantum advantage scale as the input size scales? Have they taken shot count into account when calculating and measuring quantum advantage, discounting the theoretical quantum advantage by the shot-count overhead? (See the sketch after this list.)
  16. Do they have concise, crisp Big-O formulas for the algorithmic complexity of both the quantum and classical solutions?
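Here is the sketch promised in question 15: a minimal, purely hypothetical illustration of discounting a raw theoretical speedup by shot count, and then by the size of the classical cluster the quantum solution competes against, to arrive at a net application-level advantage.

```python
# Minimal sketch: discount a raw (theoretical) quantum speedup by shot
# count and by classical cluster size to estimate the net advantage of
# the quantum solution at the application level.
# All numbers are hypothetical, for illustration only.

def net_quantum_advantage(raw_speedup: float,
                          shots: int,
                          classical_processors: int) -> float:
    """Net advantage of a single quantum processor over a classical
    solution running on a cluster of processors."""
    return raw_speedup / (shots * classical_processors)

# Hypothetical example: a nominal one quadrillion X (1e15) speedup,
# requiring 1,000,000 shots, compared against a 10,000-server cluster.
print(net_quantum_advantage(1e15, 1_000_000, 10_000))
# Prints 100000.0: the quadrillion X theoretical advantage shrinks to a
# 100,000X net advantage, below even the 1,000,000X threshold suggested
# in this paper for substantial quantum advantage.
```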

A professional quantum team should be able to answer all of these questions in great detail. But we’re not there yet: quantum computing is still much more of a laboratory curiosity than a technology ready for prime-time production applications.

We’re not there yet anyway

Asking a technical team what dramatic quantum advantage their quantum solution achieves is a moot point anyway, since we aren’t even close to basic, bare minimal quantum advantage, let alone dramatic quantum advantage.

When will we be there?

It’s an open debate as to when we will achieve even basic, bare minimal quantum advantage, let alone true, dramatic quantum advantage.

  • This year? No way.
  • Next year? No.
  • Two years? Still not there.
  • Three to four years? Well, maybe for some narrow niche special cases, in special situations, but not generally across the board.
  • Five years? Some possibility, for some but not necessarily all application categories, but don’t bet the farm.
  • Seven years? I would hope so, but bet only very judiciously, and not for a few more years. At least for multiple application categories.
  • Ten years? A reasonable bet. For a wider range of application categories.
  • Fifteen years? A fairly sure thing. For most application categories.
  • Twenty years? Commonplace. For all application categories.

Will dramatic quantum advantage require quantum error correction and error-free logical qubits?

Dramatic quantum advantage will require qubits which have a much higher fidelity than current NISQ quantum computers, but shouldn’t require full-blown quantum error correction (QEC) and error-free logical qubits.

It is not clear exactly how much greater qubit fidelity will be required. It will likely vary from application to application.

It will probably require what I refer to as near-perfect qubits, with a qubit fidelity of four to six nines of reliability (99.99% to 99.9999% reliable). A mere three nines (99.9% reliability) may be sufficient for some subset of applications, although that may or may not be considered a near-perfect qubit. (See the sketch below for why those nines matter.)
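As a rough illustration, here’s a minimal sketch using a deliberately crude model (an assumption, not a hardware benchmark): treat every gate operation as succeeding independently with the stated fidelity, so the probability of an error-free circuit execution is the fidelity raised to the number of operations.

```python
# Minimal sketch: probability that a circuit executes without error,
# under a crude independent-error model (illustration only).

def circuit_success(fidelity: float, operations: int) -> float:
    """Chance that all operations succeed, assuming independence."""
    return fidelity ** operations

OPERATIONS = 1_000  # a modest circuit of 1,000 gate operations

for nines, fidelity in [(2, 0.99), (3, 0.999), (4, 0.9999), (5, 0.99999)]:
    p = circuit_success(fidelity, OPERATIONS)
    print(f"{nines} nines ({fidelity}): success probability ~ {p:.5f}")

# 2 nines: ~0.00004 (essentially never succeeds)
# 3 nines: ~0.37
# 4 nines: ~0.90
# 5 nines: ~0.99
```

With only two nines, a 1,000-gate circuit essentially never runs cleanly, which is why noisy qubits force enormous shot counts, while at four or five nines most executions succeed.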

For more on nines of qubit fidelity, see my paper:

For more on quantum error correction and logical qubits, see my paper:

Is dramatic quantum advantage possible on NISQ quantum computers?

Personally, I doubt that dramatic quantum advantage can be achieved for practical applications using current and projected NISQ quantum computing hardware.

As I indicated in the preceding section, I suspect that near-perfect qubits will be required, so what I call NPISQ hardware will be required — near-perfect intermediate-scale quantum hardware. Noisy qubits — only a couple of nines of qubit fidelity — probably won’t be sufficient to do the job.

Granted, some very niche applications, or contrived computer science laboratory experiments, such as so-called cross-entropy benchmarking, may be able to achieve some approximation of quantum advantage or even full dramatic quantum advantage using NISQ hardware, but that won’t be the common case for the expected practical applications for quantum computers.

Is any quantum advantage possible on any NISQ quantum computers?

Even if full, dramatic quantum advantage cannot be readily achieved for practical applications using NISQ hardware, that still leaves the question of whether significant or even minimal quantum advantage can be achieved. My answer remains the same: noisy NISQ qubits simply won’t be up to the job; near-perfect qubits will still be required.

Sure, some fraction of minimal quantum advantage might be achieved for some subset of applications (say, 4X to 100X), but we’re looking for full quantum advantage, at least minimal quantum advantage (1,000X), and for a broad range of applications.

Granted, there will be a gray area between high-end NISQ hardware (still noisy, but not quite so noisy) and low-end NPISQ hardware (maybe only three or four nines of qubit fidelity), which may enable quantum advantage for some niche subset of applications in some situations, but not across a broad range of applications.

Best path to dramatic quantum advantage — Little data with a big solution space

The best or most direct path to dramatic quantum advantage is what I call little data with a big solution space. Shor’s factoring algorithm is an example of this approach — a small amount of data goes in, a vast amount of parallel computation is performed very quickly, and a very modest amount of data comes out. Basically it exploits quantum parallelism by evaluating a relatively small computation over a huge solution space to find a relatively small, discrete solution, something that is very expensive and time-consuming on a classical computer, but almost instantaneous on a quantum computer due to the nature of quantum parallelism.
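Here’s a minimal sketch of the shape of that advantage, using deliberately rough, illustrative operation counts. One caveat: naive trial division is far from the best classical factoring algorithm (the general number field sieve is sub-exponential), so this illustrates the little-data-with-a-big-solution-space pattern rather than a fair comparison against the best classical solution.

```python
# Minimal sketch of "little data with a big solution space."
# The input is a single n-bit integer (little data), yet the implicit
# classical search space is astronomically large (big solution space).
# Operation counts are rough illustrations, not measurements.

def classical_brute_force_ops(n_bits: int) -> int:
    # Naive trial division tests on the order of 2^(n/2) candidate divisors.
    return 2 ** (n_bits // 2)

def shor_ops(n_bits: int) -> int:
    # Shor's algorithm is commonly cited as roughly O(n^3) gate operations.
    return n_bits ** 3

for n in (128, 512, 1024):
    advantage = classical_brute_force_ops(n) / shor_ops(n)
    print(f"{n}-bit integer: rough advantage ~ {advantage:.2e}")
# 128-bit: ~8.8e+12, 512-bit: ~8.6e+68, 1024-bit: ~1.2e+145
```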

I explore this concept in this informal paper:

Summary and conclusions

  1. There is no standard or even consensus as to what quantum advantage really is or what metric and what value for that metric should be used to judge quantum advantage, let alone for dramatic quantum advantage.
  2. Each user gets to decide based on their own business needs what numeric quantum advantage is sufficient to justify a quantum solution over a classical solution.
  3. Even within one organization, different projects or different business units may have different requirements or needs which result in differing thresholds for what constitutes an acceptable quantum advantage.
  4. The best one can hope for is a range of expectations for each organization.
  5. Exponential speedup is the ideal, the best goal for quantum solutions. A quantum computer and quantum algorithm are not delivering on their true promise unless they deliver an exponential speedup compared to a classical algorithm running on a classical computer. Put simply, the time to execute a quantum algorithm with an exponential speedup grows much more slowly than the time to execute the comparable classical algorithm as the input size grows. Delivering an exponential speedup for a nontrivial amount of input data would certainly qualify as a dramatic quantum advantage. (See the sketch at the end of this list.)
  6. Regardless of the specific technical metric used to judge a dramatic quantum advantage, the real question is whether technical staff, users, managers, executives, and customers alike simply feel that the quantum solution offers a truly compelling advantage.
  7. But even if a quantum solution delivers something somewhat less than a strict exponential speedup, it’s still possible that the quantum solution might in fact still offer a quantum advantage of some sort, possibly a significant quantum advantage, and maybe even a dramatic quantum advantage. Anything is possible, but nothing is guaranteed — only exponential speedup for a nontrivial amount of input data provides a guarantee of dramatic quantum advantage.
  8. Whether a quantum algorithm which delivers only a quadratic speedup, such as the Grover search algorithm, delivers a dramatic quantum advantage can be problematic — some such algorithms may deliver a dramatic quantum advantage, while some may not. Generally, quadratic speedup should not be expected to deliver a significant quantum advantage, let alone a dramatic quantum advantage.
  9. I wouldn’t consider a performance advantage of 2X, 5X, 10X, 25X, 50X, or even 100X as a dramatic quantum advantage. Even 250X, 500X, and 750X just aren’t close.
  10. And anything less than 2X (25%, 50%, 75% advantage) would hardly be worth the effort since it is usually fairly easy for a skilled technical team to wring out such gains from classical computing solutions using clever tricks or just more hardware.
  11. A performance advantage of 1,000,000X should be the primary focus as the preferred starting point threshold for dramatic quantum advantage. This is what I refer to as a substantial or significant quantum advantage.
  12. At a bare minimum, a performance advantage of 1,000X would start to be a dramatic quantum advantage, but should only be pursued if the 1,000,000X threshold cannot be achieved. It should be considered more of a distraction or mere stepping stone rather than the primary objective. This is what I refer to as a minimal quantum advantage.
  13. A performance advantage of one quadrillion X and more would be more typical of the kind of dramatic quantum advantage which we should be expecting from quantum computers. This is where the great promise of quantum computing lies. This would require the use of 50 or more qubits for the core of the quantum computation (2⁵⁰ ≈ one quadrillion). Even speedups of a billion X or a trillion X would be mere stepping stones. This is what I refer to as a dramatic quantum advantage.
  14. For some high-value niche applications a performance advantage of 5X to 10X might be sufficient to deliver significant value to an organization, but that would be more of an exception than the rule for quantum computing.
  15. Even 1,000X to 50,000X performance advantages are not necessarily truly dramatic quantum advantages since large distributed clusters of classical servers can achieve such gains without the need to completely switch from a classical mindset to a quantum mindset.
  16. Only advantages of 100,000X and more are clearly beyond even large distributed clusters of classical servers.
  17. One can consider partial achievement of a quantum advantage level, call it fractional quantum advantage, measured as a percentage or decimal fraction of the milestone metric.
  18. Direct comparisons with classical solutions can be misleading. It’s one thing if the classical solution is truly the best possible classical solution, but many classical solutions are far from optimal or best. It would be misleading to compare a great quantum solution to a mediocre classical solution. On the other hand, it would be very notable if a mediocre quantum solution greatly outperforms a great classical solution.
  19. Quantum supremacy is a special case of quantum advantage — where the quantum computer can perform a task which cannot be performed at all by a classical computer — there is no classical solution.
  20. Every algorithm, application, paper, project, and product should be carefully reviewed relative to its quantum advantage. Not just the abstract theoretical quantum advantage, but also the actual net quantum advantage based on actual expected data input size.
  21. Important caveat #1: Quantum advantage is generally nonlinear. As the size of the input grows, the advantage also grows, but not necessarily at the same rate as the input grows. Similarly, for smaller input sizes, the quantum advantage will be smaller. There will be some threshold input size where quantum advantage transitions from less than dramatic to very dramatic. Unfortunately, there can be extreme cases where the growth of the advantage eventually slows or the advantage even declines as input size grows, possibly even hitting a wall and failing. Be very cognizant of your input size to get a better handle on the actual quantum advantage.
  22. Important caveat #2: Shot count (circuit repetitions) can dramatically reduce or even eliminate the quantum advantage. Especially for noisy NISQ devices which may require a very large number of circuit repetitions to achieve a statistically meaningful result.
  23. Be sure to divide the net quantum advantage (after dividing by shot count) by the number of classical processors used by an application, so that the comparison is quantum solution to classical solution — at the application level, not simply quantum algorithm to classical algorithm at the processor level.
  24. Alas, we’re not even close to achieving any significant quantum advantage, let alone dramatic quantum advantage in the near future. The relevance of this paper is for more than a few years from now. Consider it a preview of the future of quantum computing.
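Finally, as a companion to points 5, 8, and 21 above, here is a minimal sketch contrasting an exponential speedup with Grover’s quadratic speedup, using illustrative operation counts rather than benchmarks of any real system.

```python
# Minimal sketch: exponential speedup vs. Grover's quadratic speedup.
# Operation counts are illustrative assumptions, not benchmarks.

def advantage_exponential(n: int) -> float:
    # Classical ~2^n operations vs. a hypothetical quantum ~n^3 operations.
    return (2 ** n) / (n ** 3)

def advantage_grover(n: int) -> float:
    # Classical ~2^n evaluations vs. Grover's ~2^(n/2) iterations,
    # so the advantage is only the square root of the search space.
    return (2 ** n) / (2 ** (n // 2))

for n in (20, 50):
    print(f"n={n}: exponential advantage ~ {advantage_exponential(n):.2e}, "
          f"quadratic (Grover) advantage ~ {advantage_grover(n):.2e}")

# n=20: exponential ~1.3e+02, Grover ~1.0e+03
# n=50: exponential ~9.0e+09, Grover ~3.4e+07
```

Note the crossover: at n = 20 the quadratic speedup actually looks better, but by n = 50 the exponential speedup dwarfs it, echoing the caveat that quantum advantage is nonlinear in input size.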
