Where Are All of the 40-qubit Quantum Algorithms?

Jack Krupansky
37 min read · Sep 8, 2021


Despite the dramatic advances in quantum computing in recent years, I find it odd that there are no 40-qubit quantum algorithms, or even 32-qubit quantum algorithms, even though there are several real quantum computers with more than 50 qubits and we have 40-qubit simulators. This informal paper will explore the merits and issues related to 40-qubit quantum algorithms.

Current classical quantum simulators are capable of simulating 40 qubits, so why are we not seeing published algorithms exploiting 40 or even 32 qubits?

Toy algorithms (fewer than 24 qubits) are great for proving that the basic hardware features of a quantum computer function properly and for demonstrating some basic elements of quantum algorithms, but they don’t even come close to demonstrating the power of quantum parallelism or scalability, let alone the potential for dramatic quantum advantage and suitability for tackling practical, production-scale real-world problems.

Sure, the Google quantum supremacy experiment used 53 qubits, but that was a contrived computer science experiment, not a solution to a practical, real-world problem.

We need to see quantum algorithms using at least 32 to 40 qubits in a single Hadamard transform for quantum parallelism to show that we are on the cusp of dramatic quantum advantage. And this is just for starters, with the prospects for larger, heavier lifts beyond the basics of 40-qubit quantum algorithms.

Again, I find the dearth of 40-qubit quantum algorithms at this juncture outright startling and quite baffling. Now let’s explore why this might be.

Topics discussed in this informal paper:

  1. Motivation for this paper
  2. My goal: Encourage development of scalable 40-qubit quantum algorithms
  3. This is my challenge — I’m throwing down the gauntlet — I want to see a lot of 40-qubit quantum algorithms within two years
  4. Toy quantum algorithms
  5. Industrial-strength quantum algorithms
  6. Production-scale quantum algorithms
  7. Are there any real quantum algorithms using more than 23 qubits?
  8. Late breaking news: Google has simulated up to 30 qubits for Quantum Machine Learning
  9. Google quantum supremacy experiment used 53 qubits, but not to solve a practical problem
  10. Why limit to 40 qubits?
  11. What’s the big benefit of 40 qubits?
  12. Scaling is essential — 40 qubits is merely a stepping stone
  13. What is dramatic quantum advantage?
  14. Don’t we already have real quantum computers with more than 40 qubits?
  15. Quantum Volume of at least one trillion is needed to support 40-qubit quantum algorithms
  16. NISQ quantum computers can’t handle 40-qubit quantum algorithms
  17. Technically, 43 qubits may be needed for 40-qubit quantum parallelism
  18. There just aren’t many 40-qubit quantum algorithms published
  19. It’s a chicken and egg problem
  20. My suggested solution: Simulate 40-qubit quantum algorithms now, be ready for real quantum computers when they become available
  21. Simulation is the best near-term path to supporting 40-qubit quantum algorithms
  22. Target qubit fidelity of 2–5 years from now
  23. So, why isn’t this happening? Where are all of the 40-qubit quantum algorithms?
  24. Why aren’t there any 40-qubit quantum algorithms?
  25. A sampler of excuses for why people aren’t focused on 40-qubit quantum algorithms
  26. Significant resources needed to simulate quantum circuits with a large number of product states
  27. Qubits, circuit depth, and product states
  28. Deep circuits and product states
  29. Is circuit depth a big issue for 40-qubit quantum algorithms?
  30. But in theory, simulating 40 qubits should be practical
  31. In short, there’s no technical excuse for lack of 40-qubit quantum algorithms which can run on simulators
  32. Mindset obstacles to 40-qubit quantum algorithms
  33. Is a focus on variational methods holding back larger algorithms?
  34. Need for fine granularity of phase and probability amplitude
  35. Shouldn’t the promise of support for quantum phase estimation (QPE) and quantum Fourier transform (QFT) be sufficient to draw a focus on 40-qubit quantum algorithms?
  36. Future demand for quantum phase estimation (QPE) and quantum Fourier transform (QFT) will eventually drive 40-qubit quantum algorithms
  37. No real need for 40-qubit quantum algorithms until quantum phase estimation (QPE) and quantum Fourier transform (QFT) create a need
  38. Why little publication of 40-qubit quantum algorithms even if simulation is limited?
  39. Any 40-qubit quantum algorithms in the Quantum Algorithm Zoo?
  40. My full solution: A model for scalable quantum algorithms
  41. Tackling practical, production-scale real-world problems
  42. Will 40 qubits get us to The ENIAC Moment for quantum computing?
  43. Will 40 qubits get us to The FORTRAN Moment for quantum computing?
  44. What would resource requirements be for various application categories?
  45. 40-qubit quantum algorithms which I may not be aware of
  46. Dramatic need for additional research
  47. Open source
  48. What’s a PhD candidate or thesis advisor to do?
  49. Timeframe
  50. Who’s on first?
  51. When will all quantum algorithms be 40-qubit?
  52. Take the 40-qubit challenge!
  53. Inspire the hardware vendors to support 40-qubit algorithms
  54. Summary and conclusions

Motivation for this paper

While contemplating issues with scaling quantum algorithms, I started by considering how far we can go using classical quantum simulators before simulators run out of steam (capacity). At present, the limit for simulation appears to be in the 38 to 42-qubit range — call it 40 qubits.

I considered scaling in a number of stages, from 4 to 40 qubits.

The general thesis was that if you can prove incremental scaling from 4 to 40 qubits, it might be reasonably safe to presume scaling to 44 qubits and beyond — to 48, 50, 52, 55, 64, 72, 80, and even 96 qubits and beyond.
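
To make this concrete, here is a minimal sketch, assuming Qiskit and its Aer simulator (pip install qiskit qiskit-aer), of testing a single parameterized circuit at increasing widths. The GHZ circuit is only an illustrative stand-in for a real scalable algorithm:

```python
# A minimal sketch of staged scaling: one parameterized circuit tested at
# increasing widths on a classical quantum simulator.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def ghz_circuit(n: int) -> QuantumCircuit:
    """An n-qubit GHZ state: one Hadamard plus a chain of CNOTs."""
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)
    qc.measure(range(n), range(n))
    return qc

sim = AerSimulator()
for n in (4, 8, 12, 16, 20, 24):  # extend toward 40 as memory allows
    counts = sim.run(transpile(ghz_circuit(n), sim), shots=1000).result().get_counts()
    print(n, counts)
```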

You can read more about my staged model for scaling in my informal paper:

Naturally, I became interested in the potential to extend the range of simulation, beyond 40 qubits, maybe to 44, 48, 50, or even 55 or 60 qubits. My thinking was that this would increase the strength of proof of scaling — proving scaling at 50 or 60 qubits would be more impressive than merely at 40 qubits.

The idea for this paper came to me as I considered the potential for much larger classical quantum simulators, beyond the roughly 40-qubit limit of current simulators. Although simulators with a thousand times greater capacity would certainly be more interesting, I asked the question of how many published algorithms are taking advantage of the 40-qubit capacity of current simulators. The best I could find were algorithms using 15 to 23 qubits, and those were algorithms running on real quantum computers, such as Google’s 53-qubit Sycamore. I wasn’t able to identify any algorithms running on simulators using 24 to 40 qubits.

So, I asked myself — why not? Indeed where are all of the 40-qubit quantum algorithms?

Put another way, this paper explores why there isn’t much in the way of 40-qubit quantum algorithms running on classical quantum simulators.

So, for now, there’s no dramatic incentive to build simulators with much greater capacity than 40 qubits — since there aren’t even any 40-qubit quantum algorithms yet.

My goal: Encourage development of scalable 40-qubit quantum algorithms

To be clear, I am not focusing on 40 qubits as a specific target algorithm size, but focusing on scalable quantum algorithms tested in stages from 4 to 40 qubits — or whatever the size limit is for the available classical quantum simulators. And, with the expectation that those algorithms will also easily scale to 50, 64, 72, 80, and even 96 and more qubits as real quantum computers become available with sufficient qubits and qubit fidelity, connectivity, and coherence.

Also to be clear, this is primarily a research effort at this stage — we’re nowhere near close to being able to even contemplate deployment of practical production-scale quantum algorithms to solve practical real-world problems.

What I want to see in the short term is that algorithm research projects should be focused on reporting results for testing scalability to 40 qubits on classical quantum simulators.

This is my challenge — I’m throwing down the gauntlet — I want to see a lot of 40-qubit quantum algorithms within two years

Seriously, I find it profoundly depressing that we don’t have a wealth of scalable algorithms capable of utilizing 40 qubits even on a classical quantum simulator.

So, I’m throwing down the gauntlet. This is my challenge. I want to see a fairly wide range of scalable 40-qubit quantum algorithms over the next one to two years.

There’s no good reason for quantum algorithm designers not to meet this challenge.

Toy quantum algorithms

The term toy quantum algorithm is not intended to be derogatory; it simply highlights the very limited size and capacity of an algorithm that is neither designed to handle production-scale computations nor designed to be readily scaled to handle production-scale problems when a real quantum computer with sufficient capacity and fidelity does become available.

A toy quantum algorithm is in contrast to an industrial-strength quantum algorithm, which is designed to scale up to handle production-scale problems.

Industrial-strength quantum algorithms

An industrial-strength quantum algorithm has been designed for the capacity and performance required to address the needs of production-scale real-world practical problems, and to deliver dramatic quantum advantage. This is in contrast to a toy quantum algorithm with only very limited capacity, insufficient to handle any significant amount of data and very limited performance.

Generally that will require a substantial advance in real quantum computer hardware, with a lot of qubits (64 or more), very high qubit fidelity (four or five nines), decent coherence time, and great qubit connectivity.

Production-scale quantum algorithms

The terms production-scale quantum algorithm and industrial-strength quantum algorithm are essentially synonyms.

Both are expected to achieve dramatic quantum advantage for practical real-world problems.

Are there any real quantum algorithms using more than 23 qubits?

I haven’t done an exhaustive literature search, but the largest quantum algorithm I could find was from Google AI using 23 qubits:

And this was an algorithm running on a real quantum computer — Sycamore.

Granted, there may be larger algorithms out there, but I haven’t run across any.

And I’m really more concerned about algorithms running on simulators, so the fact that it is difficult to run a quantum algorithm for more than 23 qubits on a real quantum computer is no obstacle to simulating larger algorithms.

Late breaking news: Google has simulated up to 30 qubits for Quantum Machine Learning

Just as I was finalizing this paper, Google published a paper on Quantum Machine Learning in which they simulated quantum circuits up to 30 qubits with a depth of 40 gates:

  • TensorFlow Quantum: A Software Framework for Quantum Machine Learning
  • Michael Broughton, et al
  • August 26, 2021
  • https://arxiv.org/abs/2003.02989
  • In the numerical experiments conducted in [22], we consider the simulation of these quantum machine learning models up to 30 qubits. The large-scale simulation allows us to gauge the potential and limitations of different quantum machine learning models better. We utilize the qsim software package in TFQ to perform large scale quantum simulations using Google Cloud Platform. The simulation reaches a peak throughput of up to 1.1 quadrillion floating-point operations per second (petaflop/s). Trends of approximately 300 teraflop/s for quantum simulation and 800 teraflop/s for classical analysis were observed up to the maximum experiment size with the overall floating-point operations across all experiments totaling approximately two quintillions (exaflop).
  • From the caption for Figure 9: “Circuits were of depth 40.”

This is a great step forward, but still not even close to 40 qubits or even 32.
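
For readers who want to experiment at a smaller scale, here is a minimal sketch assuming the cirq and qsimcirq packages (qsim is the simulator mentioned above); the circuit is an illustrative placeholder, not one of the quantum machine learning models from the paper:

```python
# A minimal sketch, assuming cirq and qsimcirq (pip install cirq qsimcirq).
import cirq
import qsimcirq

qubits = cirq.LineQubit.range(20)  # scale toward 30+ as memory allows
circuit = cirq.Circuit(
    cirq.H.on_each(*qubits),         # put every qubit into superposition
    cirq.measure(*qubits, key="m"),  # sample all qubits
)
simulator = qsimcirq.QSimSimulator()
result = simulator.run(circuit, repetitions=10)
print(result.histogram(key="m"))
```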

Google quantum supremacy experiment used 53 qubits, but not to solve a practical problem

Granted, the Google quantum supremacy experiment did use 53 qubits on their 53-qubit Sycamore processor back in 2019, but that was a contrived computer science experiment (XEB — cross-entropy benchmarking), not a solution to a practical, real-world problem.

See my discussion of this experiment:

As discussed in the preceding two sections, the best Google has managed to accomplish in a practical sense are 23 and 30-qubit algorithms.

Why limit to 40 qubits?

The only reason for the 40-qubit limit in this informal paper is that it’s roughly the current limit for classical quantum simulators. Maybe you could push 41 or 42 qubits on some simulators.

Technically, theoretically, we should be able to simulate up to about 50 qubits. That would require a lot of hardware resources, essentially a very large classical supercomputer.

But it would be pointless to offer a 50-qubit simulator today since people are generally not even exploiting the 40-qubit capacity that we already have today.

I in no way wish to discourage interest in 41 to 50-qubit (and more) algorithms.

The real goal here in this paper is to understand why there isn’t a lot more interest in 40-qubit quantum algorithms.

What’s the big benefit of 40 qubits?

Besides the fact that bigger is always better and 40 qubits happens to be the current limit for simulation, the value and benefit of 40-qubit quantum algorithms is that the ability to perform a computation on 2⁴⁰ quantum states — that’s a trillion quantum states — is on the threshold of dramatic quantum advantage.

I personally define dramatic quantum advantage as a factor of one quadrillion — 2⁵⁰ — greater than the performance of a classical computer.

Granted, 2⁵⁰ quantum states would achieve full dramatic quantum advantage, but since we don’t have any near-term prospect for simulating 50 qubits, 40 qubits is at least close enough — on the threshold of actual dramatic quantum advantage.

Sure, a quantum advantage of a quadrillion would be more dramatic, but a quantum advantage of a trillion seems good enough to say that we’re almost there, we’re getting close. Or as a Chicago Cubs fan would say, Maybe next year!

And, provided that the algorithms are scalable, the fact that we can achieve a quantum advantage of a trillion using 40 qubits on a simulator makes it very credible that we could achieve a quantum advantage of a quadrillion using 50 qubits on a real quantum computer.

Being stuck with algorithms focused on 16 to 24 qubits doesn’t give us much credibility on scaling up to dramatic quantum advantage.

Scaling is essential — 40 qubits is merely a stepping stone

As previously stated, the goal here is not 40 qubits per se, but that’s simply the approximate limit for current classical quantum simulators.

The real goal is scaling so that an algorithm designed and tested for 10, 20, 30, and 40 qubits has a credible chance to scale to 50, 64, 72, 80, 96, 128, 192, and 256 qubits and beyond.

40 qubits is simply a stepping stone on the way to full, production-scale quantum algorithms and applications.

What is dramatic quantum advantage?

I personally define dramatic quantum advantage as a factor of one quadrillion — 2⁵⁰ — greater than the performance of a classical computer.

I also define minimum quantum advantage as a factor of a thousand times the performance of a classical computer.

And I also define significant quantum advantage as a factor of a million times the performance of a classical computer.

For more on minimum quantum advantage, significant quantum advantage, and dramatic quantum advantage, read my paper:

Don’t we already have real quantum computers with more than 40 qubits?

Yes, we already do have real quantum computers with more than 40 qubits. Google and IBM both have 53-qubit quantum computers, and IBM has a 65-qubit quantum computer. But…

There are limitations to fully exploiting even 40 of those qubits:

  1. Low qubit fidelity. Noise and error rates that discourage the design of complex algorithms.
  2. Low coherence time. Severely limits circuit depth.
  3. Limited connectivity. Only a few qubits to which any qubit can be directly connected. This forces a reliance on SWAP networks, which increases the impact of #1 and #2 — more gates with lower fidelity (see the sketch after this list).
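
To make the connectivity point (#3) concrete, here is a minimal sketch, assuming Qiskit, of how routing a star-shaped entangling pattern onto a linear coupling map inflates the two-qubit gate count:

```python
# A minimal sketch, assuming Qiskit: routing to limited connectivity
# inserts SWAPs, which inflates the two-qubit gate count.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

n = 8
qc = QuantumCircuit(n)
qc.h(0)
for i in range(1, n):
    qc.cx(0, i)  # qubit 0 must interact with every other qubit

line = CouplingMap.from_line(n)  # each qubit sees only its nearest neighbors
routed = transpile(qc, coupling_map=line, basis_gates=["cx", "u"])
print("CNOTs before routing:", qc.count_ops().get("cx", 0))
print("CNOTs after routing: ", routed.count_ops().get("cx", 0))
```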

In short, real quantum computers are not yet ready to support 40-qubit quantum algorithms.

Quantum Volume of at least one trillion is needed to support 40-qubit quantum algorithms

For 40-qubit quantum algorithms to be generally supported, a real quantum computer would have to have a Quantum Volume of at least 2⁴⁰ — that’s one trillion — while IBM has so far achieved a QV of only 128 and Honeywell a QV of 512. Google doesn’t report Quantum Volume.

For more on Quantum Volume, see my paper:

NISQ quantum computers can’t handle 40-qubit quantum algorithms

Current NISQ quantum computers simply can’t handle 40-qubit quantum algorithms, as discussed in the previous section.

In fact, near-term NISQ quantum computers expected in the next year or two are just as unlikely to be able to handle 40-qubit quantum algorithms.

The “N” in NISQ is the deal killer. Much greater qubit fidelity is needed. I don’t think full quantum error correction (QEC) is needed, but what I call near-perfect qubits probably are required. Near-perfect qubits would have more nines of qubit fidelity — 99.99% to 99.999%. But at that point the “N” is no longer warranted and should be replaced with “NP” (near-perfect) — NPISQ.

To put it more simply, since NPISQ is needed for 40-qubit quantum algorithms, NISQ will never be sufficient to fully support 40-qubit quantum algorithms.

Technically, since the “IS” in NISQ stands for Intermediate Scale, defined as 50 to hundreds of qubits, a quantum computer with at least 40 qubits but under 50 qubits doesn’t qualify as a NISQ device. My proposal is to replace “IS” with “SS” for Small Scale. So, NPSSQ quantum computers are needed to fully support 40-qubit quantum algorithms, although NPISQ quantum computers should work as well.

For more on quantum error correction, see my paper:

For more on qubit fidelity and nines of qubit fidelity, see my paper:

Technically, 43 qubits may be needed for 40-qubit quantum parallelism

The real goal is to support quantum parallelism on 40 qubits, to achieve a quantum speedup of one trillion, but in practice a few more qubits, ancilla qubits, may be needed to support those 40 qubits. So, 43 qubits is probably the better target.

That’s slightly beyond current simulation capacity, but it may be just another year or two or maybe three before simulation of 43 qubits becomes practical.

Until then, quantum parallelism will be limited to 37 to 39 qubits, limiting computations to a quantum speedup of 2³⁷, 2³⁸, or 2³⁹ (roughly 137 billion, 275 billion, or 550 billion), rather than one trillion.

There just aren’t many 40-qubit quantum algorithms published

But the overwhelming limitation to fully exploiting 40 qubits on even a real quantum computer with 40 or more qubits is simply:

  • There just aren’t many published algorithms ready to exploit 40 qubits.

It’s a chicken and egg problem

So, we have a classical chicken and egg problem:

  1. Why bother writing a 40-qubit quantum algorithm if there isn’t sufficient hardware to run it on?
  2. Why bother building 40-qubit quantum computers with sufficient capabilities to run complex 40-qubit quantum algorithms if no such algorithms exist?

The most obvious solution to such problems is to just bite the bullet and do both in equal measure, at the same time.

My suggested solution: Simulate 40-qubit quantum algorithms now, be ready for real quantum computers when they become available

To me, the solution is simple:

  • Algorithm designers: Design, develop, and test complex 40-qubit quantum algorithms using advanced classical quantum simulators. Prove that they work. Publish them. Let the hardware engineers know that you are ready for real quantum computers with robust support for 40 qubits.
  • 40-qubit quantum algorithms will then be ready to run on advanced hardware as soon as it becomes available. Presuming that the hardware has enough qubits with sufficient qubit fidelity and connectivity.

Simulation is the best near-term path to supporting 40-qubit quantum algorithms

Just to emphasize this key point more strongly, simulation is the best path currently available for pursuing 40-qubit quantum algorithms.

For more details on design and development of scalable quantum algorithms, see my paper:

And for more details on use cases for simulation of quantum algorithms, see my paper:

Target qubit fidelity of 2–5 years from now

Since current real quantum computers cannot properly run 40-qubit algorithms at present, by definition we’re talking about running on real quantum computers as they might exist years in the future. I’m speculating that we will see real quantum computers of sufficient qubit fidelity to run 40-qubit algorithms in two to five years. Classical quantum simulators should be configured to match the likely qubit fidelities of two to five years from now.

See the Major Use Cases paper cited in the preceding section for more discussion of configuring classical quantum simulators for qubit fidelity.

It’s a fielder’s choice how to configure qubit fidelity for a classical quantum simulator. Although perfect simulation is a plausible choice, and matching current real quantum computers is another plausible choice, I would suggest that we’re trying to be both forward-looking and pragmatic, so the two to five-year timeframe for real quantum computers is a much more reasonable target.

A specific real quantum computer expected two or three years from now would probably be the preferred choice, but absent knowledge of exactly what will become available in that timeframe, a realistic projection of where qubit fidelity is likely to be in two to five years is probably the most reasonable default choice for target qubit fidelity, unless one has good reasons for some other, more specific target.

Even if one does accept the two to five-year horizon, it is still an open question where specifically in that range is the preferred choice.

My suggestion is to pick two or three points in that range and evaluate algorithms at each of those points. If the lower bound turns out to give decent results, it can be chosen as the single preferred point. If the upper bound still doesn’t give decent results, an even higher bound is needed.

Thinking of qubit fidelity in terms of nines of qubit fidelity, the range of three to six nines seems to be the rough range for expectations for qubit fidelity over two to five years. Fractional nines may be required — maybe 3.5 nines in two years and maybe 5.5 nines in five years.
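
As one concrete way to work with fractional nines, here is a minimal sketch, assuming Qiskit Aer. The depolarizing error on two-qubit gates is my simplification, a crude stand-in for overall gate infidelity rather than a realistic device noise model:

```python
# A minimal sketch, assuming Qiskit Aer (pip install qiskit qiskit-aer).
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

def fidelity_from_nines(nines: float) -> float:
    """3 nines -> 0.999; 3.5 nines -> about 0.99968."""
    return 1.0 - 10.0 ** (-nines)

nines = 3.5  # hypothetical two-year-out target from the text
error_rate = 1.0 - fidelity_from_nines(nines)

noise = NoiseModel()
# Crude stand-in: depolarizing error applied to all two-qubit gates.
noise.add_all_qubit_quantum_error(depolarizing_error(error_rate, 2), ["cx"])
sim = AerSimulator(noise_model=noise)
print(f"{nines} nines -> fidelity {fidelity_from_nines(nines):.5%}")
```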

For more on nines of qubit fidelity, see my paper:

In any case, be explicit about the target qubit fidelity of your algorithm. Or, be explicit about how many years you expect to have to wait before a real quantum computer is likely to have sufficient qubit fidelity to give decent results for your algorithm.

Of course you can always find some level of nines of qubit fidelity which enable your quantum algorithm to give decent results on a configurable classical quantum simulator. Just be explicitly honest about how many years the users of your algorithm will have to wait to run that algorithm on a real quantum computer.

So, why isn’t this happening? Where are all of the 40-qubit quantum algorithms?

But… clearly that isn’t happening. Why not? That’s the question explored by this informal paper.

Why aren’t there any 40-qubit quantum algorithms?

An alternative phrasing of the headline question is to ask why there aren’t any 40-qubit quantum algorithms.

A sampler of excuses for why people aren’t focused on 40-qubit quantum algorithms

Just off the top of my head, mostly from my personal observations and speculation, here is a sample of the excuses for not producing 40-qubit quantum algorithms today:

  1. It’s just technically too challenging. But I’m skeptical about that, although it’s at least partially true.
  2. Smaller algorithms are sufficient for the publication needs of most academic researchers.
  3. Smaller algorithms are easier to diagram — for publication.
  4. Smaller algorithms are easier to debug and test.
  5. Larger algorithms are very difficult to debug and test.
  6. Limited coherence time limits circuit depth, which limits how many qubits can be manipulated and entangled in a single large quantum computation.
  7. Large algorithms and NISQ quantum computers just don’t mix.
  8. The resource requirements for simulating 40 qubits could be enormous for more than fairly shallow circuits — 2⁴⁰ quantum product states (one trillion) times the circuit depth. Okay, so shoot for 30–32 qubits then.
  9. Large simulations require great patience.
  10. Simulators do exist, but simulation is still a bit immature and doesn’t inspire great confidence. It’s not easy to design, implement, and configure a noise model that realistically simulates a real quantum computer.
  11. It’s far more impressive to run on a real quantum computer than a simulator, but current real quantum computers don’t offer robust support for 40 or even 32-qubit algorithms.
  12. People are currently impressed by, and focused on, simply solving problems at all using a quantum computer, even if for only 23, 17, 11, or even 7 qubits. Even just saying the name of the problem being addressed is a reward in itself.
  13. Eventually 40-qubit quantum algorithms will happen, but for now they aren’t a priority.
  14. Easier to go after low-hanging fruit — diminishing returns for solving harder problems. Fewer qubits get the job done — results to be published. Breaking your back to do even just a few more qubits just isn’t worth it.
  15. Hand-coding of quantum algorithms is very tedious. Programmatic generation of quantum circuits is too big a challenge for many.
  16. Generalizing quantum algorithms to enable them to be fully parameterized and scalable is a difficult task, beyond the abilities of many. It’s easy to parameterize and scale based on an integer, but not so easy if the parameter is an atom or molecule or protein or drug.
  17. 24 qubits may be more than sufficient for variational methods. More qubits may be diminishing returns. 2⁴⁰ quantum states may be gross overkill for variational methods. And people are stuck with variational methods until the day when qubit fidelity eventually is sufficient to support full quantum phase estimation (QPE) and quantum Fourier transform (QFT).
  18. Need for fine granularity of phase and probability amplitude. Not a problem for simulators, but very problematic for real quantum computers, with no certainty of a resolution in the next few years. If real quantum computers won’t support fine granularity of phase angles and probability amplitudes for another five years or more, why bother even simulating such algorithms. This is needed for quantum Fourier transform (QFT) and quantum phase estimation (QPE).

Significant resources needed to simulate quantum circuits with a large number of product states

Quantum circuits using a modest to moderate number of qubits and a modest to moderate number of quantum logic gates can be simulated with modest to moderate classical computing resources. No problem there.

Raw number of qubits per se is not a limiting factor.

Rather it is quantum product states which dramatically balloon the resource requirements.

A Hadamard transform using only 40 qubits would result in 2⁴⁰ quantum product states (one trillion) times the circuit depth. So, a circuit with 100 gates could generate up to 100 trillion product states.

That would be a lot of data and require a lot of processing.

And every qubit you add to that would double the resource requirements.
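
For a rough sense of scale, here is a back-of-the-envelope calculation (plain Python, my own illustration) of the statevector memory alone, before accounting for circuit depth:

```python
# Back-of-the-envelope: a full statevector holds 2**n complex amplitudes,
# at 16 bytes each for double-precision complex numbers.
def human(nbytes: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} PiB"

for n in (20, 30, 40, 50):
    print(f"{n} qubits -> {human(16 * 2**n)} for the statevector")
# 20 qubits -> 16 MiB, 30 -> 16 GiB, 40 -> 16 TiB, 50 -> 16 PiB
```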

Qubits, circuit depth, and product states

Just to reemphasize and highlight that it is these three factors combined which impact resource requirements for simulation of quantum circuits:

  1. Qubit count.
  2. Circuit depth.
  3. Product states. Each qubit has a quantum state, but entangled qubits have a combined quantum state, called a product state. Actually, the entangled qubits can have multiple product states. A product state is simply a quantum state involving multiple qubits.

A modest qubit count times a modest circuit depth is still a relatively modest number — 40 x 20 = 800.

But product states are exponential — n qubits in a Hadamard transform produce 2^n product states — 20 qubits is 2²⁰ or one million product states, 30 qubits is 2³⁰ or one billion product states, and 40 qubits is 2⁴⁰ or one trillion product states.
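
The 2^n growth is easy to see directly. A minimal sketch, assuming Qiskit, inspecting the statevector after a Hadamard transform on a small n:

```python
# A minimal sketch, assuming Qiskit: a Hadamard on every qubit yields
# 2**n equal-amplitude basis states, the source of quantum parallelism.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 4
qc = QuantumCircuit(n)
qc.h(range(n))  # Hadamard transform on all n qubits
sv = Statevector.from_instruction(qc)
print(len(sv.data), "amplitudes, each", sv.data[0])  # 16 amplitudes of 0.25
```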

A quantum circuit can have more qubits beyond those which are entangled using a Hadamard transform, but those additional quantum states would be relatively insignificant compared to qubits which are entangled in product states.

The number of product states must be multiplied by the circuit depth which operates on the qubits entangled in product states. So, a circuit depth of 100 quantum logic gates for 40 qubits in product states would be 100 trillion total quantum states. That’s a lot of data to process.

50 qubits would be 2⁵⁰ or one quadrillion product states, which is currently well beyond the capacity of existing simulators, but theoretically possible, and should be achievable in a few years.

And 60 qubits would be 2⁶⁰ or one quintillion product states, which is well beyond our technical capabilities even years from now.

Where exactly the practical limit will be between 50 and 60 qubits is a great unknown at this stage.

Deep circuits and product states

Just to reemphasize and highlight a point from the previous section, the data storage requirements for simulation are driven most strongly not by qubit count or circuit depth alone, but by the combination of product states and deep circuits.

So, 40 qubits in a Hadamard transform would generate 2⁴⁰ or one trillion product states. Couple that with a complex circuit of 1,000 quantum logic gates, and you get one quadrillion product states.

I don’t imagine that current classical quantum simulators are prepared to handle that much data.

Where exactly the practical limit is between a few dozen gates and 1,000 gates is a great unknown at this stage.

Is circuit depth a big issue for 40-qubit quantum algorithms?

Shallow circuits should be very feasible at 40 qubits, but how deep can a 40-qubit circuit get before it just takes too long to simulate or consumes too much storage for quantum states?

Of course the answer is that it all depends on the degree of entanglement — the number of product states which are needed at a significant number of gates.

A Hadamard transform on 40 qubits would be 2⁴⁰ or a trillion product states. And then multiply that by the circuit depth.

I’m just not sure how far beyond a circuit depth of a couple of dozen gates you could go before 2⁴⁰ times n becomes an unworkable amount of memory and storage.

We won’t really know for sure until we start seeing actual, real 40-qubit quantum algorithms.

But in theory, simulating 40 qubits should be practical

Despite the significant resource requirements needed for 2⁴⁰ product states and a moderately deep quantum algorithm, this is all still considered practical and within the realm of reason, so resource requirements alone for a 40-qubit simulation can’t be the obstacle precluding the existence of 40-qubit quantum algorithms.

In short, there’s no technical excuse for lack of 40-qubit quantum algorithms which can run on simulators

As far as I can tell, there’s no technical reason that precludes running 40-qubit quantum algorithms on today’s 40-qubit classical quantum simulators.

Mindset obstacles to 40-qubit quantum algorithms

There may indeed be technical mindset obstacles preventing algorithm designers from thinking about 40-qubit quantum algorithms.

See the sampler of excuses listed in a preceding section. Many problems are mental rather than physical.

Is a focus on variational methods holding back larger algorithms?

There are applications which could take advantage of quantum phase estimation (QPE) and quantum Fourier transform (QFT) — if those techniques were viable on today’s quantum computers. But since current quantum computers lack the qubit fidelity, connectivity, and coherence time to support full QPE and QFT, there is a strong incentive for algorithm designers to stay away from the use of QPE and QFT. Variational methods are currently the technique of choice as an alternative to QPE and QFT.

The key benefit of variational methods is that they can indeed function properly on current and near-term quantum computers, while QPE and QFT cannot. Granted, variational methods on current and near-term quantum computers are never going to achieve dramatic quantum advantage, but the current incentives for publication reward incremental progress even if not focused on the grand goal for the long term.

The issue with a variational algorithm is the question of how far or deep the quantum portion of the algorithm computes, and then how many iterations of the classical optimization step are needed. If a fairly deep quantum step can be coupled with a relatively few iterations, then a variational approach can be productive, and QPE and QFT may not be needed. But if the quantum step is relatively short and shallow and many iterations of the classical optimization step are required, then the variational approach could be less than fully productive, in which case QPE and QFT could become much more appealing, provided that qubit fidelity and connectivity are sufficient to support the result accuracy needed.

So, 16 to 24 qubits may be more than sufficient for variational methods. More qubits may offer diminishing returns. 2⁴⁰ quantum states may be gross overkill for variational methods.

I’m also wondering if low qubit fidelity might be limiting variational algorithms to less than 24 qubits. Granted, that’s only a factor for running on a real quantum computer and not an issue for simulation, but the incentives to bias people towards running on real quantum computers could make it seem as if 24 qubits was an obstacle even though it should not be an obstacle for simulation.
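
As a toy illustration of the variational pattern itself (a shallow parameterized quantum step inside a classical optimizer loop), here is a minimal single-qubit sketch in plain NumPy and SciPy. The Hamiltonian is invented for illustration:

```python
# A toy illustration of the variational pattern: a parameterized quantum
# step (a single-qubit Ry rotation, simulated directly) inside a classical
# optimizer loop. The Hamiltonian is invented for illustration.
import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # illustrative one-qubit Hamiltonian

def energy(theta):
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])  # Ry|0>
    return float(psi @ H @ psi)  # expectation value <psi|H|psi>

result = minimize(energy, x0=[0.1], method="COBYLA")  # classical outer loop
print("estimated ground-state energy:", result.fun)  # about -1.118
```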

Need for fine granularity of phase and probability amplitude

Advanced quantum algorithm techniques such as quantum Fourier transform (QFT), quantum phase estimation (QPE), and amplitude amplification require very fine granularity of phase angles and probability amplitudes. This is not a problem for simulators with their arbitrary precision, but very problematic for real quantum computers, with no certainty of a resolution in the next few years.
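
To make “fine granularity” concrete: in a textbook n-qubit QFT, the smallest controlled-phase rotation is 2π/2ⁿ, which shrinks exponentially with qubit count (plain Python, my own illustration):

```python
# The smallest controlled-phase rotation in a textbook n-qubit QFT is
# 2*pi / 2**n, which shrinks exponentially with qubit count.
import math

for n in (10, 20, 32, 40):
    print(f"n={n}: smallest QFT rotation = {2 * math.pi / 2**n:.3e} rad")
```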

Seriously, there’s no clarity, or even a roadmap, as to how fine-grained phase angles and probability amplitudes will be in three, five, seven, or even ten years.

Vendors aren’t even talking about the issue.

The issue here is that QFT, QPE, and amplitude amplification, and other uses of fine granularity of phase angle and probability amplitude are the prime candidates for larger algorithms, in the 32 to 40-qubit range and beyond.

Without fine granularity of phase angle and probability amplitude there will be almost no point to pursuing 40-qubit quantum algorithms.

I can’t blame people for staying away from algorithms which are dependent on fine granularity of phase and probability amplitude.

Even if they can be simulated fine right now, that won’t do anybody much good if real quantum computers won’t support fine granularity of phase angles and probability amplitudes for another five years or more.

The flip side is that if enough algorithm designers do utilize fine granularity of phase angles and probability amplitudes, then maybe this will light a fire under the feet of quantum hardware researchers, incenting them to improve the granularity of phase angles and probability amplitudes.

For more on this topic, see my paper:

Shouldn’t the promise of support for quantum phase estimation (QPE) and quantum Fourier transform (QFT) be sufficient to draw a focus on 40-qubit quantum algorithms?

One would think so! Variational methods are popular — for now — even though they are a technological dead end for quantum computing since they offer no prospect for dramatic quantum advantage.

So why isn’t the promise of support for quantum phase estimation (QPE) and quantum Fourier transform (QFT) — with the prospect of being on the threshold of dramatic quantum advantage — sufficient to draw a focus on 40-qubit quantum algorithms?

It makes no sense, other than as I said earlier, that the incentives for publication seem biased towards algorithms which can run on current quantum computers.

The confounding factor is that even though QFT and QPE could be technically supported on simulators, difficulties with supporting fine granularity of phase angles and probability amplitudes on real quantum computers for an unknown number of years to come makes it much less attractive to rely on QFT and QPE.
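
Indeed, simulating a QFT today is just a library call away. A minimal sketch, assuming Qiskit’s circuit library:

```python
# A minimal sketch, assuming Qiskit's circuit library: a ready-made QFT
# that can be inspected and simulated today, even though current hardware
# cannot execute its fine-grained rotations faithfully.
from qiskit.circuit.library import QFT

qft = QFT(num_qubits=10)
decomposed = qft.decompose()
print("gate counts:", dict(decomposed.count_ops()))
print("circuit depth:", decomposed.depth())
```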

Future demand for quantum phase estimation (QPE) and quantum Fourier transform (QFT) will eventually drive 40-qubit quantum algorithms

Eventually, I expect that people will wake up to the need for quantum phase estimation (QPE) and quantum Fourier transform (QFT) in order to achieve dramatic quantum advantage. Demand for QPE and QFT will finally drive the creation of 40-qubit quantum algorithms, in my view.

But, as mentioned in the preceding section, technical difficulties will make it difficult to support QFT and QPE on real quantum computers for years to come, even though they can be supported on simulators today. With no credible hardware support even over the horizon, QFT and QPE will seem less appealing.

But I would urge algorithm designers to go forward with QFT and QPE anyway, in the hope that demand for fine granularity of phase angles and probability amplitudes will light a fire under the feet of quantum hardware researchers.

No real need for 40-qubit quantum algorithms until quantum phase estimation (QPE) and quantum Fourier transform (QFT) create a need

For now, until people move towards quantum phase estimation (QPE) and quantum Fourier transform (QFT), there will likely be no real need for 40-qubit quantum algorithms.

And this can’t happen until qubit fidelity, connectivity, and phase precision are sufficient for 40-qubit quantum phase estimation (QPE) and quantum Fourier transform (QFT).

See comments in the preceding three sections on the need for fine granularity of phase angles and probability amplitudes.

Why little publication of 40-qubit quantum algorithms even if simulation is limited?

Technically, there is probably a bias in favor of quantum algorithms which can actually be run — either on a real quantum computer or on a simulator — but I’d still expect to see more interest in at least publishing 32 to 40 and 50 to 80-qubit algorithms. But, I’m not seeing any such publication interest.

Any 40-qubit quantum algorithms in the Quantum Algorithm Zoo?

Actually, there were plenty of ambitious algorithms published in the early days of quantum computing, before we had even 5-qubit real quantum computers to run on. Simulators were less powerful and ran on significantly less-powerful classical computers — 10 to 25 years ago. Back then, paper algorithms were all that we had. The Quantum Algorithm Zoo website catalogs all of those early algorithms, including Shor’s factoring algorithm, which uses thousands of qubits — in theory. You can find it here: https://quantumalgorithmzoo.org/

I haven’t done a search of those older published algorithms, but algorithms using hundreds or thousands of qubits wouldn’t be helpful when we’re trying to cover the range from 32 to 40 qubits. Still, there may indeed be algorithms in there which would apply to the range of 32 to 40 qubits and could and should be resurrected and reconsidered for the computing environment of today.

Resurrecting and adapting those older quantum algorithms would be an interesting project, but is beyond the scope of my interests and resources at this stage.

It would be great if some enterprising grad student — or NIST or some other government agency or open source project — would engage in such a project.

My full solution: A model for scalable quantum algorithms

My suggested solution earlier in this paper is to encourage people to focus on simulation of 40-qubit quantum algorithms to get results now and be ready for future real quantum computers when they finally have adequate support for 40-qubit quantum algorithms.

My full solution is not to focus solely on 40 qubits, but to focus on scalable quantum algorithms, in stages, from 4 to 40 qubits, to incrementally prove that an algorithm is scalable; then there is a much better chance that the algorithm will scale beyond the limits of simulators (40 to 50 qubits or so) to 64, 72, 80, and 96 qubits and beyond, when such hardware becomes available and is reliable enough to fully support algorithms which use 60, 70, or even 80 qubits or more.

For more details on my scaling model, read my paper:

Tackling practical, production-scale real-world problems

40-qubit quantum algorithms would be a significant advance from the current state of the art, but not even close to supporting solutions for practical, production-scale real-world problems.

But the expectation or hope is that if 40-qubit quantum algorithms are scalable on realistic classical quantum simulators, then support for 50 to 64, 72, 80, and even 96 qubits on real quantum computers should be a reasonable expectation.

Support for 40-qubit quantum algorithms is simply a stepping stone — but a very significant stepping stone.

Will 40 qubits get us to The ENIAC Moment for quantum computing?

It’s hard to say for sure, but I suspect that a 40-qubit quantum algorithm, especially running only on a simulator, will not be enough to get us to The ENIAC Moment for quantum computing, where a practical real-world application is run at something approximating production-scale.

It’s very possible that 50, 60, or even 72 or more qubits might be needed, but maybe 40 qubits might just be enough in the right hands with the necessary degree of cleverness, for at least some smaller problems that still have real-world value.

I would expect that at least some minimal degree of quantum advantage (1,000X to 1,000,000X) would need to be achieved, even if full dramatic quantum advantage is not achieved, so more than 40 qubits are most likely needed to achieve The ENIAC Moment.

In any case, the short answer is not soon.

Will 40 qubits get us to The FORTRAN Moment for quantum computing?

I don’t expect that The FORTRAN Moment for quantum computing will be achieved in the next couple of years since significant research and experimentation will be needed to get there, and by then we will probably have 44 to 48-qubit simulators anyway.

Full quantum error correction (QEC) will likely be needed for The FORTRAN Moment, but simulators can behave as if qubits were perfect anyway. So even though much more advanced hardware will be necessary to achieve The FORTRAN Moment on a real quantum computer, simulators won’t have that obstacle.

Since The FORTRAN Moment is more about the programming model, the number of qubits is not the limiting or enabling factor, so we could indeed achieve The FORTRAN Moment on a 40-qubit quantum computer, subject to the same constraints as The ENIAC Moment of being able to handle a realistic, practical, production-scale real-world problem with only 40 qubits.

The short answer is maybe, maybe not.

I lean towards probably not, since unlike The ENIAC Moment, which could be constituted by even a single application developed by a super-elite team, The FORTRAN Moment is more about enabling widespread development of quantum applications achieving some degree of quantum advantage even by non-elite, more-average development teams. Given that objective, I strongly suspect that 40 qubits will be insufficient.

Still, we could in fact see development of algorithmic building blocks and new programming models which enable an interesting level of algorithms that even non-elite average developers can deploy on 40-qubit quantum computers, both real and simulated. That’s a real possibility, but not soon.

In any case, the short answer is not soon.

What would resource requirements be for various application categories?

It would be interesting for those working in specific quantum application areas to catalog their estimated quantum resource requirements, either for algorithms they already know about or prospective algorithms they would like to have. Again, focused on the 32 to 40-qubit range, but whatever their actual or perceived requirements may be.

Requirements to catalog should include the following (a sketch of a possible schema follows the list):

  1. Qubit count.
  2. Qubit fidelity.
  3. Circuit depth.
  4. Qubit connectivity.
  5. Fineness of granularity required for phase and probability amplitude.
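
A hypothetical schema for such a catalog; the field names and example values are my own invention:

```python
# A hypothetical schema for cataloging quantum resource requirements.
# Field names and example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class ResourceRequirements:
    application: str
    qubit_count: int
    qubit_fidelity_nines: float  # e.g., 4.5 nines = 99.997% fidelity
    circuit_depth: int
    connectivity: str            # e.g., "any-to-any" or "nearest-neighbor"
    phase_granularity_bits: int  # bits of phase/amplitude resolution needed

example = ResourceRequirements(
    application="quantum chemistry (illustrative)",
    qubit_count=40,
    qubit_fidelity_nines=4.0,
    circuit_depth=500,
    connectivity="any-to-any",
    phase_granularity_bits=30,
)
print(example)
```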

40-qubit quantum algorithms which I may not be aware of

In truth, I don’t read or even scan every single research paper which comes along, so it is very possible that at least a few 40-qubit quantum algorithms exist that have escaped my attention.

In addition, proprietary algorithms developed as trade secrets within commercial firms may not get much in the way of public attention, so there may be proprietary 40-qubit quantum algorithms of which I am unaware.

And I always presume that various secretive government agencies might have advanced quantum algorithms that have not seen the light of day.

In addition there may be 40-qubit quantum algorithms which are hidden behind paywalls, registration walls, in books, proprietary conference materials, and maybe in proprietary courses.

Or maybe there are indeed none of these mythical beasts lurking out there in the shadows, existing, but hidden. Who knows.

But just to be safe, I’ll presume that there are.

In any case, this paper is really about algorithms that can be seen in the light of day.

Dramatic need for additional research

A large part of the reason that we don’t see any 40-qubit quantum algorithms is that there is not a sufficiently robust fundamental research foundation on which such algorithms can be based. The only solution is to proceed with such fundamental research.

Fundamental research is needed in algorithm design, particularly in scaling of algorithms:

  1. Metaphors.
  2. Design patterns.
  3. Algorithmic building blocks.
  4. Libraries.
  5. Frameworks.
  6. Performance characterization and measurement.
  7. Scaling in general, automated scaling, automating more situations currently requiring manual scaling.
  8. Automated algorithm and circuit analysis tools to detect design issues, such as for scaling.

Research programs should explicitly call for simulation of 28, 32, 36, and 40-qubit quantum algorithms.

Research is also needed for larger simulators, targeting 44, 48, and 50 qubits. Maybe even 54 to 56 qubits. And maybe even 60 qubits. Or at least call for research into whether there are good reasons not to try to simulate more than 50 qubits.

Open source

It would be most advantageous for quantum algorithms, code, sample data, examples, and supporting documentation to be posted online as open source.

And preferably posted on GitHub for convenient access. Tagging projects based on the number of supported qubits and scaling would be appropriate.

What’s a PhD candidate or thesis advisor to do?

I can sympathize with the plight of PhD candidates and their supervisors who have an incredibly hard choice to make:

  1. Focus on smaller algorithms (under 24 qubits) which can run on existing real quantum computers to deliver actual real results. Granted, they can’t handle production-scale practical real-world problems, but the results are sufficient for a PhD project.
  2. Focus on theoretical work with algorithms using 100 to 1,000 qubits — or even thousands of qubits — which not only can’t run on any existing real quantum computers, but are too large for even the best classical quantum simulators, even if they may be what are really needed for the long run to support production-scale practical real-world quantum applications. They can publish their theoretical findings, but nobody can use them in practice now or any time in the relatively near future.

Not many PhD candidates will have the luxury of a comfortable position pursuing only long-term research. Many will seek employment at companies and in the government where they will be expected to deliver practical solutions in short order, using current hardware, not focus on pie in the sky long-term research. Or they will obtain positions with a heavy teaching load for students who expect to be taught practical lessons in the use of current technology, not algorithms that won’t be runnable for ten years or more.

I believe that my middle-ground focus on practical simulation of scalable 40-qubit algorithms is a better target. Unfortunately, it is a compromise, so it won’t be appealing to either those interested in running on real quantum computers now or those interested in longer-term research.

That said, I believe that my focus on scalable algorithms should make it more palatable. An algorithm intended to require hundreds of qubits for a production-scale application could in theory be scaled down to 10, 20, 30, and 40 qubits for a small fraction of the production-scale application. Couple that with automated analysis of the algorithm to detect coding patterns that won’t scale, and this should deliver the best of both worlds: able to run at least under simulation for 10–40 qubits (with at least the possibility of running on current real quantum computers for 10–20 qubits, although the main focus is simulation), with technical validation of its ability to scale above 40 qubits.

In any case, this conundrum of a conflict between real results and ideal theory for PhD theses can be a contributing factor as to why we don’t see more scalable 40-qubit algorithms.

Timeframe

So, when might we begin to see some significant, nontrivial 40-qubit quantum algorithms?

There are four rational paths to significant, nontrivial 40-qubit quantum algorithms:

  1. Exhaustion of interest in variational methods. Desire to move significantly beyond what can be done with 12 to 23-qubit algorithms.
  2. Availability of real quantum computers with higher fidelity qubits and Quantum Volume of 2⁴⁰.
  3. Stronger interest in preparing for the real quantum computers which will be available in three to five years rather than on current and near-term quantum computers.
  4. Government research programs place fresh emphasis on 40-qubit quantum algorithms.

But when?

I’m not so hopeful for this year or next year.

Maybe interest will pick up within two years.

I suspect that it will be at least another two years before we start seeing significant interest in simulating 40-qubit quantum algorithms.

In any case, some magical catalyst will have to come along that gets the ball rolling, but no such catalyst is in sight at the present time.

Who’s on first?

It’s anybody’s guess what quantum algorithm and quantum application will be the first to exploit 40 qubits for quantum parallelism and approach some fractional degree of quantum advantage.

I do expect it to be on a classical quantum simulator, not on a real quantum computer.

I don’t expect to see a quantum computer with sufficient qubit fidelity and connectivity to support a 40-qubit Hadamard transform to support quantum parallelism on 40 qubits within the next two years. In fact, it may be more like three to four years.

When will all quantum algorithms be 40-qubit?

The whole point of quantum computing is to do massive calculations which aren’t possible or would take too long on a classical computer, so the true, grand promise of quantum computing won’t be achieved until essentially all quantum algorithms are capable of massive computations.

40 qubits can represent 2⁴⁰ or a trillion quantum states, which seems like a reasonable minimum threshold for a massive computation — a million or even a billion operations seems small enough for a classical computer to handle.

My expectation is that 40-qubit quantum algorithms will be essentially ubiquitous on classical quantum simulators within the next two to three years, possibly sooner.

So, when might we get to the stage when essentially all quantum algorithms and applications use 40 qubits? And this is for running on a real quantum computer since people can run 40-qubit algorithms on classical quantum simulators today.

This is somewhat more complex and problematic than the question of when or what would be the first 40-qubit algorithm and application which can achieve some substantial and meaningful degree of fractional quantum advantage. If that first algorithm and application to fully exploit 40 qubits (on a real quantum computer) occurs in three to four years, as I suggested in the preceding section, I suspect that it could be five to seven years before 40-qubit algorithms and applications become both common and near ubiquitous — on real quantum computers.

And that’s just for starters. 40 qubits would not be enough for full dramatic quantum advantage, just enough to be near the threshold, close enough to say that we’re almost there. 50 to 64 or maybe even 72 qubits would be needed to achieve full dramatic quantum advantage.

Obviously any algorithm requiring more qubits than the capacity of a classical quantum simulator (currently around 40 qubits) would require a real quantum computer, with enough qubits and sufficient qubit fidelity, connectivity, and coherence time.

Take the 40-qubit challenge!

I don’t have the energy or resources to pursue it, but seriously, it would be great if some enterprising individual or organization would run a 40-qubit challenge to encourage — and reward — the design and development of practical and scalable 40-qubit algorithms and applications addressing realistic real-world problems.

It would be great if we could hear people saying:

  • Take the 40-qubit challenge!

And equally great to hear people say:

  • We’re taking the 40-qubit challenge!

And eventually see people tag their research results with:

  • We took the 40-qubit challenge!

For now, all I can say is that I would like to see people take the 40-qubit challenge!

Inspire the hardware vendors to support 40-qubit algorithms

I seriously believe that if the quantum computer hardware vendors could see that algorithm designers are focused on simulation of 40-qubit quantum algorithms, they would have a strong incentive to get their act together and design and build real quantum computers which can run those algorithms sooner rather than later.

Without such a visible incentive, the hardware vendors won’t have the necessary focus. For example, they may not have the right targets for qubit fidelity, qubit connectivity, or measurement fidelity, even if they give us a lot more qubits.

If nothing else, they can have actual, proven, running algorithms to benchmark their new hardware.

Without a lot of proven 40-qubit algorithms sitting on the shelf, ready to go, we can’t realistically expect the hardware vendors to…

  • Take the 40-qubit challenge!

Summary and conclusions

  1. Scalable quantum algorithms are needed. 40 qubits may be the near-term limit for simulation, but scalable algorithms will extend beyond 40 qubits once quantum computers with enough qubits and sufficient qubit fidelity and connectivity become available.
  2. There’s no technical obstacle to 40-qubit quantum algorithms. Real quantum computers are not yet available with sufficient qubit fidelity and connectivity, but classical quantum simulators are available.
  3. A desire to run variational methods on a real quantum computer may be distracting people from the greater potential of simulating a larger number of qubits.
  4. There may be proprietary or secret algorithms which utilize 40 qubits.
  5. Research programs for larger algorithms are desperately needed.
  6. Research programs should explicitly call for simulation of 28, 32, 36, and 40-qubit quantum algorithms.
  7. Research is also needed for larger simulators, targeting 44, 48, and 50 qubits. Maybe even 54 to 56 qubits. And maybe even 60 qubits. Or at least call for research into whether there are good reasons not to try to simulate more than 50 qubits.
  8. I encourage people to Take the 40-qubit challenge! — design and develop practical and scalable 40-qubit algorithms and applications addressing realistic real-world problems.
  9. A significant collection of proven 40-qubit quantum algorithms, sitting on the shelf, and ready to go could be what’s needed to inspire the quantum computer hardware vendors to support 40-qubit algorithms. Only then can we realistically expect the hardware vendors to… Take the 40-qubit challenge!
  10. It may take another three or four years before we see the first 40-qubit algorithm and application running on a real quantum computer.
  11. And it could take five to seven years before 40-qubit algorithms become the norm.
