Staged Model for Scaling of Quantum Algorithms

  1. Generative coding of quantum circuits. Parameterized classical code automatically generates the quantum circuit from the input data and input parameters, producing larger circuits for larger input data. This is where most of the work for scaling occurs.
  2. Rely on simulation for 4 to 40 qubits since near-term hardware isn’t yet scalable in terms of size, qubit fidelity, coherence time, and qubit connectivity.
  3. Specify the nines of qubit fidelity to assure that simulation is realistic for quantum computers expected to become available in a few years.
  4. Scale and test using simulation at a number of stages from 4 to 40 qubits, validating correct results at each stage to confirm scaling. Stages at roughly 4, 8, 12, 16, 20, 24, 28, 32, 36, and 40 qubits — depending on the nature of the particular algorithm.
  5. Run both perfect simulation and simulation for a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
  6. Use automated circuit analysis tools to detect coding patterns which are not scalable, particularly beyond the limits of simulation. For example, dependence on fine granularity of phase or probability amplitude, or an error rate which would become excessive for deeper circuits and for hardware with relatively low qubit fidelity. Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
  7. Attempt to replicate simulation results on real quantum computers when possible, but generally this won’t be very feasible on noisy near-term hardware for much beyond trivial cases. This is more icing on the cake, a bonus extra, rather than an expected step.
  8. The expectation is that careful testing of multiple stages of scaling from 4 to 40 qubits coupled with automated detection of non-scalable coding patterns will be sufficient to establish scaling beyond the range of simulation (40 qubits) — to 44, 48, 50, 64, 72, 80, 96, and more qubits — once hardware becomes available with sufficient qubits, sufficient qubit fidelity, and sufficient connectivity.
  1. The siren song of running on a real quantum computer
  2. Challenges and constraints
  3. Quantum algorithm vs. quantum circuit
  4. Need for simulation
  5. Perfect simulation
  6. Specify the nines of qubit fidelity to assure that simulation is realistic
  7. Need for automated analysis tools to detect scaling issues
  8. Staged testing using simulation
  9. Staged automated analysis to detect scaling issues
  10. Overall thesis of this paper
  11. Overall process
  12. Basic staged model
  13. Generative coding of quantum circuits rather than hand-coding of circuits
  14. Larger circuits require higher qubit fidelity
  15. Depth of circuits and qubit fidelity under scaling
  16. Suggested stages — qubit counts
  17. Full wavefunction vs. partial simulation
  18. Suggested stages for simulation
  19. Suggested stages for execution on a real quantum computer or automated analysis
  20. Suggested stages for scaling of Shor’s factoring algorithm
  21. What is the smallest stage for any particular algorithm?
  22. Automated circuit analysis tools
  23. Detecting upper limit for scaling
  24. How far can variational methods scale?
  25. Scalability of quantum phase estimation (QPE) and quantum Fourier transform (QFT)
  26. Automatic scaling vs. manual scaling
  27. Scaling of algorithms with other than linear integer inputs
  28. Dramatic need for additional research
  29. Need for application scaling roadmaps
  30. Scaling for the future vs. for existing quantum computers
  31. Algorithm designers must consider scaling of shot count (circuit repetitions)
  32. Published papers need to explicitly address scalability of all algorithms
  33. Timeframe
  34. Summary and conclusions

The siren song of running on a real quantum computer

Being able to run an algorithm on a real quantum computer is virtually irresistible. It gives people the feeling that they are doing something dramatic that was once considered near impossible.

Challenges and constraints

Scaling of quantum algorithms is nontrivial. Why is that? Because…

  1. Insufficient qubits. A real engineering challenge to add qubits.
  2. Insufficient qubit fidelity. Error rate is too high.
  3. Insufficient qubit coherence time. Limits circuit depth.
  4. Insufficient qubit connectivity. Requires SWAP networks which are dependent on higher qubit fidelity.
  5. Insufficient phase granularity. Insufficient precision for quantum phase estimation (QPE) and quantum Fourier transform (QFT).
  6. Insufficient probability amplitude granularity. Impacts precision of computations.
  7. Can’t debug logic on a real quantum computer. Cannot observe intermediate quantum states. Limits quantum circuit complexity to what people can debug in their heads.
  8. Can’t simulate beyond 50 qubits. Limits size of algorithms which can be debugged.
  9. Can’t simulate deep circuits beyond a relatively small number of qubits. Too many quantum states to process given time and storage constraints.
  10. Journal papers typically don’t discuss scalability of algorithms. Scaling is left as an exercise to the unsuspecting reader. Many algorithms simply aren’t scalable.

Quantum algorithm vs. quantum circuit

Quantum algorithm and quantum circuit are frequently used as synonyms, but technically there is a difference: the quantum algorithm is the abstract method or recipe, while a quantum circuit is a concrete sequence of quantum logic gates generated from that algorithm for a particular input size and particular input data.

Need for simulation

Even if hardware engineers could fabricate a quantum computer with hundreds or thousands of perfect qubits, we would have no way of adequately testing or debugging algorithms of that size. The essential problem is that intermediate quantum states are not observable — only the final measured qubit states can be examined. That won’t be sufficient for testing and debugging complex algorithms.

Perfect simulation

As detailed in the paper just referenced in the preceding section, perfect simulation is simulation of a quantum circuit under ideal conditions, with no errors — an error rate of 0.0. This is not a very realistic simulation, but is useful to test and validate the logic of an algorithm. It is also useful as a baseline which more realistic tests can be compared against.

Specify the nines of qubit fidelity to assure that simulation is realistic

Simulating an ideal, perfect quantum computer is desirable to test for and debug logic errors, but is not appropriate to test the scalability of a quantum algorithm.
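
To make the notion of nines concrete, here is a minimal sketch, assuming Qiskit Aer, of turning a specified number of nines of qubit fidelity into a simple depolarizing noise model for simulation. Treating the depolarizing parameter as the per-gate error rate is a rough approximation, and the gate names are illustrative.

```python
# Minimal sketch, assuming Qiskit Aer: map "nines of qubit fidelity" to a
# simple depolarizing noise model. The depolarizing parameter is treated as
# the per-gate error rate, which is only a rough approximation.
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

def noisy_simulator(nines: float) -> AerSimulator:
    """Simulator whose gates have roughly the specified nines of fidelity."""
    error_rate = 10.0 ** -nines          # e.g. 3 nines -> 99.9% fidelity -> 0.001 error
    noise = NoiseModel()
    noise.add_all_qubit_quantum_error(depolarizing_error(error_rate, 1),
                                      ["h", "x", "sx", "rz"])
    noise.add_all_qubit_quantum_error(depolarizing_error(error_rate, 2),
                                      ["cx"])   # two-qubit gates are usually worse in practice
    return AerSimulator(noise_model=noise)

perfect_simulator = AerSimulator()        # error rate of 0.0 -- the baseline
realistic_simulator = noisy_simulator(3)  # three nines of qubit fidelity
```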

Need for automated analysis tools to detect scaling issues

Automated tools are also needed to analyze quantum circuits to detect coding patterns which are not scalable. The patterns can be subtle and it’s too easy to fall into them without realizing it.

Staged testing using simulation

The remaining piece of the puzzle is to incrementally test and validate the candidate algorithm over a nontrivial sequence of stages of qubits, from 4 to 40 qubits, using simulation to confirm that scaling is successful for at least those stages.
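
As a concrete illustration, here is a minimal sketch of such a staged testing loop, assuming Qiskit and Qiskit Aer, with a trivially checkable GHZ circuit standing in for a real generated algorithm circuit; a real algorithm would substitute its own generator and validation.

```python
# Minimal sketch of staged testing, assuming Qiskit and Qiskit Aer, with a
# trivial GHZ circuit standing in for a real generated algorithm circuit.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def build_stage_circuit(num_qubits: int) -> QuantumCircuit:
    """Stand-in generator: an n-qubit GHZ circuit."""
    qc = QuantumCircuit(num_qubits)
    qc.h(0)
    for i in range(num_qubits - 1):
        qc.cx(i, i + 1)
    qc.measure_all()
    return qc

def validate_counts(counts: dict, num_qubits: int) -> bool:
    """A GHZ state should measure as all zeros or all ones, nothing else."""
    expected = {"0" * num_qubits, "1" * num_qubits}
    return set(counts) <= expected

simulator = AerSimulator()
for stage in [4, 8, 12, 16, 20, 24]:      # extend toward 40 as resources allow
    qc = transpile(build_stage_circuit(stage), simulator)
    counts = simulator.run(qc, shots=1000).result().get_counts()
    status = "ok" if validate_counts(counts, stage) else "FAILED"
    print(f"stage {stage:2d} qubits: {status}")
```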

Staged automated analysis to detect scaling issues

The automatic analysis tools can then be used to check for potential scaling problems for larger circuit sizes, such as for 44 to 96 or more qubits — even if such circuits cannot yet be successfully executed on current real quantum computers.
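
For example, one such automated check is a depth-versus-coherence budget test. The sketch below assumes Qiskit circuits and purely illustrative gate-time and coherence-time numbers; a real tool would pull these from a hardware model.

```python
# Hypothetical analysis check, assuming Qiskit circuits. The gate time and
# coherence time below are illustrative placeholders, not any particular
# machine's specifications.
from qiskit import QuantumCircuit

GATE_TIME_NS = 200            # assumed average gate duration
COHERENCE_TIME_NS = 100_000   # assumed coherence time (100 microseconds)

def check_coherence_budget(qc: QuantumCircuit) -> bool:
    """Flag circuits whose depth likely exceeds the coherence time."""
    estimated_runtime_ns = qc.depth() * GATE_TIME_NS
    ok = estimated_runtime_ns <= COHERENCE_TIME_NS
    verdict = "within" if ok else "EXCEEDS"
    print(f"{qc.num_qubits} qubits, depth {qc.depth()}: "
          f"~{estimated_runtime_ns:,} ns ({verdict} coherence budget)")
    return ok

# Quick usage: a deliberately deep dummy circuit.
qc = QuantumCircuit(5)
for _ in range(200):
    for i in range(4):
        qc.cx(i, i + 1)
check_coherence_budget(qc)
```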

Overall thesis of this paper

A quick summary of the overall thesis of this paper:

  1. Focus primarily on simulation rather than running on current NISQ hardware of dubious value, and certainly focus on simulation for qubit fidelity well in excess of near-term hardware.
  2. Scaling for current and near-term NISQ hardware is a definite dead end, not worth pursuing.
  3. Simulation is the clear way to go for scalable algorithms in the 30 to 40-qubit range. NISQ hardware has too low qubit fidelity and too low qubit connectivity.
  4. Specify the nines of qubit fidelity to assure that simulation is realistic.
  5. Generative coding of quantum circuits. Rather than hand-coding quantum circuits, classical code is used to generate the quantum circuit based on the specific input data and parameters. Much of the work for scaling of a quantum algorithm occurs at this step of each stage of scaling.
  6. Prove scaling of a quantum algorithm in stages from 4 to 40 qubits (maybe more, maybe even 50 to 55 qubits eventually) using classical quantum simulators as well as actual quantum hardware, if available and with sufficient qubits, qubit fidelity, and connectivity.
  7. Start with the expectation that algorithms working at one stage will scale reliably to subsequent stages, but that proposition must be carefully tested and validated.
  8. If scaling works for each stage from 4 to 40 qubits, the expectation is that scaling will work for stages from 40 to 64 qubits (or maybe 72 or even 80 or 96 qubits), and beyond as hardware with higher capacity and higher fidelity becomes available. Of course this depends on having enough qubits with sufficient qubit fidelity, coherence time, and connectivity. Testing will prove that out.
  9. Run tests for both perfect simulation and a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
  10. Automated analysis tools for quantum circuits are needed to detect coding patterns and algorithm design flaws which may only come up for algorithm results at qubit counts and circuit sizes well beyond those which can be successfully tested on simulators or the real quantum computers which we can expect to use in the near term. Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
  11. Authors of papers should discuss in some significant detail — and test — the scalability of their algorithms. Including the scaling of shot count (circuit repetitions.)

Overall process

A quick summary of the overall process:

  1. Test and validate scalability of a quantum algorithm at a very modest count of qubits before advancing to the next stage of scaling with modestly more qubits.
  2. Test and validate in the controlled environment of classical quantum simulators whenever possible (up to 40 to 50 qubits.)
  3. Run automated analysis tools for 40 or more qubits to detect coding patterns and design flaws that might go undetected at smaller input sizes.
  4. Advance to incremental testing on real quantum computers only after firmly validating scalability in the controlled environment of classical quantum simulators. And this is highly conditional on the availability of sufficiently reliable quantum hardware, which commonly won’t be the case.

Basic staged model

The basic, staged model proposed by this paper has four parts:

  1. Simulate. Fully test algorithms using classical quantum simulators for multiple stages between 4 and 40 (or 50, if possible) qubits.
  2. Analyze. Use automated analysis tools to detect coding patterns in quantum circuits which are not scalable. Analyze all the way up to the maximum number of qubits and input size which the algorithm is expected to be able to handle. Those larger sizes don’t have to be executed or simulated, just analyzed. This will confirm or determine the maximum scalability for the algorithm. Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
  3. Execute. Run the algorithms on a real quantum computer for both the same sizes as were already simulated, and multiple sizes (stages) from slightly above the largest size which can be simulated to moderately more than that, but for sizes for which the results can be readily calculated or estimated. All of the results which correspond to the sizes which could be simulated should be compared. And all of the results for test cases larger than the simulation limit should reasonably agree with the calculated or estimated results. Granted, execution is likely very problematic for current and near-term quantum computers, but some tests may be practical.
  4. Estimate. Calculate the depth of the circuit for the maximum expected input size to determine if acceptable results can be achieved given the average qubit fidelity and coherence time.

Generative coding of quantum circuits rather than hand-coding of circuits

Fully hand-coding the quantum circuits for algorithms is absolutely out of the question. What is needed is a more abstract representation of an algorithm. Generative coding of quantum circuits provides this level of abstraction. Any algorithm designed to be scalable must be generated dynamically using classical code which is parameterized with the input size or size of the simulation, so that as the input size grows or the simulation size grows, a larger number of quantum logic gates will be generated to accommodate the expanded input size.
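
As a small illustration (assuming Qiskit, and using one common textbook convention), the classical function below generates the n-qubit quantum Fourier transform: the gate list is produced by ordinary classical loops, and the circuit grows automatically as n grows.

```python
# Illustration of generative coding, assuming Qiskit: classical code,
# parameterized by the qubit count, generates the textbook QFT circuit.
# Gate ordering conventions vary between textbooks; this is one common form.
from math import pi
from qiskit import QuantumCircuit

def build_qft(num_qubits: int) -> QuantumCircuit:
    """Generate an n-qubit quantum Fourier transform circuit."""
    qc = QuantumCircuit(num_qubits, name=f"qft_{num_qubits}")
    for target in reversed(range(num_qubits)):
        qc.h(target)
        for control in reversed(range(target)):
            qc.cp(pi / 2 ** (target - control), control, target)
    for i in range(num_qubits // 2):      # reverse qubit order at the end
        qc.swap(i, num_qubits - 1 - i)
    return qc

print(build_qft(4).size(), "gates for 4 qubits")
print(build_qft(8).size(), "gates for 8 qubits")
```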

Larger circuits require higher qubit fidelity

As generated circuits use more qubits and more gates are executed, higher qubit fidelity — or virtually perfect quantum error correction (QEC) — is needed; otherwise errors may cause results to become unacceptable.

Depth of circuits and qubit fidelity under scaling

As input size scales, generally the circuit size will scale, both in terms of qubit count and circuit depth. Qubit fidelity becomes more urgent as circuit size increases since more gates mean more errors.
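
A rough back-of-the-envelope model makes the point: if each gate succeeds with probability f, a circuit of g gates succeeds with probability of roughly f to the power g (ignoring error correction and mitigation). The sketch below is that crude estimate, not a rigorous error analysis.

```python
# Rough estimate only: overall fidelity ~ per-gate fidelity ** gate_count,
# ignoring error correction, mitigation, and error structure.
def estimated_success_probability(gate_count: int, nines: float) -> float:
    per_gate_fidelity = 1.0 - 10.0 ** -nines     # e.g. 3 nines -> 0.999
    return per_gate_fidelity ** gate_count

for gates in [100, 1_000, 10_000]:
    print(gates, "gates:",
          round(estimated_success_probability(gates, 3), 3), "at 3 nines,",
          round(estimated_success_probability(gates, 5), 3), "at 5 nines")
```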

Suggested stages — qubit counts

Each stage is represented by a representative count of qubits for that stage.

  1. Stages which can be simulated. Typically under 50 qubits, and possibly only up to 40 qubits, or even 38 or 32.
  2. Stages beyond what can be simulated. Typically over 50 qubits, or possibly anything over 40 qubits.

Full wavefunction vs. partial simulation

For the purposes of this paper, simulation refers to full wavefunction simulation. Granted there are alternative modes of simulation of quantum circuits which can handle much more than 50 or so qubits, but they are special-purpose simulations and not relevant to the full wavefunction simulation required to fully validate quantum circuits as recommended by this paper.

Suggested stages for simulation

Some typical stages which should be simulated to test for scaling:

  1. 4 qubits.
  2. 5 qubits.
  3. 8 qubits.
  4. 10 qubits.
  5. 12 qubits.
  6. 16 qubits.
  7. 20 qubits.
  8. 24 qubits.
  9. 28 qubits.
  10. 32 qubits.
  11. 36 qubits.
  12. 40 qubits.
  13. 42 qubits. Possibly, if the simulator supports it.
  14. 44 qubits. Eventually, but not currently.
  15. 48 qubits. Ditto.
  16. 50 qubits. Ditto. This is currently the expected upper limit.
  17. 54 qubits. Maybe a stretch goal a few years from now.

Suggested stages for execution on a real quantum computer or automated analysis

Some typical stages which cannot be simulated and must either be run on a real quantum computer or checked with automated analysis:

  1. 44 qubits. Hopefully this can eventually be simulated as well.
  2. 48 qubits. Ditto.
  3. 50 qubits. Ditto.
  4. 52 qubits. Ditto.
  5. 55 qubits. Ditto. May be the limit for simulation for the next few years.
  6. 56 qubits. Unlikely this can be simulated in the next few years.
  7. 60 qubits.
  8. 64 qubits.
  9. 72 qubits.
  10. 80 qubits.
  11. 96 qubits.
  12. 128 qubits.
  13. 160 qubits.
  14. 192 qubits.
  15. 256 qubits.
  16. 384 qubits.
  17. 512 qubits.

Suggested stages for scaling of Shor’s factoring algorithm

Fifteen (15) is the smallest odd semiprime (product of two distinct odd primes) which can be factored by Shor's factoring algorithm — into 3 x 5. (A short code sketch following the lists below reproduces the qubit and gate counts used for these stages.)

  1. 4 x 4 + 3 = 19 qubits, 6 gates — 15 (= 3 x 5) is the smallest odd semiprime with distinct prime factors (ignoring trivial even cases such as 2 x 3 = 6.)
  2. 5 x 4 + 3 = 23 qubits, 10 gates.
  3. 6 x 4 + 3 = 27 qubits, 15 gates.
  4. 7 x 4 + 3 = 31 qubits, 21 gates.
  5. 8 x 4 + 3 = 35 qubits, 28 gates.
  6. 9 x 4 + 3 = 39 qubits, 36 gates — may be the largest size to simulate.
  7. 10 x 4 + 3 = 43 qubits, 45 gates — possible stretch goal for largest size to simulate.
  8. 11 x 4 + 3 = 47 qubits, 55 gates.
  9. 12 x 4 + 3 = 51 qubits, 66 gates.
  10. 13 x 4 + 3 = 55 qubits, 78 gates — maximum size to simulate a few years from now.
  11. 14 x 4 + 3 = 59 qubits, 91 gates — beyond the possibility of full simulation.
  12. 15 x 4 + 3 = 63 qubits, 105 gates.
  13. 16 x 4 + 3 = 67 qubits, 120 gates.
  14. 20 x 4 + 3 = 83 qubits, 190 gates.
  15. 24 x 4 + 3 = 99 qubits, 276 gates.
  16. 28 x 4 + 3 = 115 qubits, 378 gates.
  17. 32 x 4 + 3 = 131 qubits, 496 gates.
  18. 36 x 4 + 3 = 147 qubits, 630 gates.
  19. 40 x 4 + 3 = 163 qubits, 780 gates.
  20. 44 x 4 + 3 = 179 qubits, 946 gates.
  21. 48 x 4 + 3 = 195 qubits, 1,128 gates.
  22. 56 x 4 + 3 = 227 qubits, 1,540 gates.
  23. 64 x 4 + 3 = 259 qubits, 2,016 gates.
  24. 72 x 4 + 3 = 291 qubits, 2,556 gates.
  25. 80 x 4 + 3 = 323 qubits, 3,160 gates.
  26. 96 x 4 + 3 = 387 qubits, 4,560 gates.
  27. 128 x 4 + 3 = 515 qubits, 8,128 gates.
  28. 192 x 4 + 3 = 771 qubits, 18,336 gates.
  29. 256 x 4 + 3 = 1,027 qubits, 32,640 gates.
  30. 384 x 4 + 3 = 1,539 qubits, 73,536 gates.
  31. 512 x 4 + 3 = 2,051 qubits, 130,816 gates.

Of these, the stages which can be simulated:

  1. 4 bits = 19 qubits, 6 gates.
  2. 5 bits = 23 qubits, 10 gates.
  3. 6 bits = 27 qubits, 15 gates.
  4. 7 bits = 31 qubits, 21 gates.
  5. 8 bits = 35 qubits, 28 gates.
  6. 9 bits = 39 qubits, 36 gates — may be the largest size to simulate.
  7. 10 bits = 43 qubits, 45 gates — possible stretch goal for largest size to simulate.

And the stages which must either be run on a real quantum computer or checked with automated analysis:

  1. 10 bits = 43 qubits, 45 gates — in case not readily simulatable.
  2. 11 bits = 47 qubits, 55 gates.
  3. 12 bits = 51 qubits, 66 gates.
  4. 13 bits = 55 qubits, 78 gates.
  5. 14 bits = 59 qubits, 91 gates.
  6. 15 bits = 63 qubits, 105 gates.
  7. 16 bits = 67 qubits, 120 gates.
  8. 20 bits = 83 qubits, 190 gates.
  9. 24 bits = 99 qubits, 276 gates.
  10. 28 bits = 115 qubits, 378 gates — largest for a 127-qubit machine.
  11. 32 bits = 131 qubits, 496 gates.
  12. 36 bits = 147 qubits, 630 gates.
  13. 40 bits = 163 qubits, 780 gates.
  14. 44 bits = 179 qubits, 946 gates.
  15. 48 bits = 195 qubits, 1,128 gates.
  16. 56 bits = 227 qubits, 1,540 gates.
  17. 64 bits = 259 qubits, 2,016 gates.
  18. 72 bits = 291 qubits, 2,556 gates.
  19. 80 bits = 323 qubits, 3,160 gates.
  20. 96 bits = 387 qubits, 4,560 gates — largest for a 441-qubit machine.
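
The sketch below reproduces the qubit and gate counts used in the stages above, which follow the simple model of 4 n + 3 qubits and n (n - 1) / 2 gates for an n-bit semiprime; this is the model implied by the table, not a full gate count for any particular implementation of Shor's algorithm.

```python
# Reproduces the resource model implied by the stage tables above:
# qubits = 4*n + 3 and gates = n*(n - 1)/2 for an n-bit semiprime.
def shor_stage(bits: int) -> tuple[int, int]:
    qubits = 4 * bits + 3
    gates = bits * (bits - 1) // 2
    return qubits, gates

for bits in [4, 8, 16, 32, 64, 128, 256, 512]:
    qubits, gates = shor_stage(bits)
    print(f"{bits:4d} bits -> {qubits:5d} qubits, {gates:9,d} gates")
```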

What is the smallest stage for any particular algorithm?

Generally, 4 to 8 qubits would be the smallest or initial stage for testing the scaling of a quantum algorithm.

  1. Testing GHZ states would start with 3 qubits.
  2. The smallest odd semiprime that can be factored using Shor's factoring algorithm is 15 = 3 x 5, which would require 4 x 4 + 3 = 19 qubits.

Automated circuit analysis tools

Automated analysis tools are needed to detect coding patterns and algorithm design flaws which may only show up in algorithm results at qubit counts and circuit sizes well beyond those which can be successfully tested on simulators or the real quantum computers which we can expect to use in the near term. These problematic coding patterns fall into two categories:

  1. Generally scale poorly. Algorithm designers should refrain from using them.
  2. Only scale up to some upper bound. The automated tools should simply report what that upper bound might be. This will generally be dependent on qubit fidelity.

Detecting upper limit for scaling

Scaling of quantum algorithms will generally not be an all or nothing proposition — some algorithms will scale further than others.

How far can variational methods scale?

Variational methods are an attractive approach to achieving some minimal degree of functional utility on small-scale NISQ devices, but it is questionable whether they will scale up to dramatic quantum advantage, which is the real goal of quantum computing.

Scalability of quantum phase estimation (QPE) and quantum Fourier transform (QFT)

Given that variational methods are unlikely to achieve dramatic quantum advantage, quantum phase estimation (QPE) and quantum Fourier transform (QFT) become essential for larger scale and production applications.
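
One quick way to see the phase granularity concern: the smallest controlled rotation in an n-qubit QFT has an angle of pi / 2^(n - 1), which shrinks exponentially as the register grows, as the short calculation below shows.

```python
# Back-of-the-envelope check of phase granularity: the smallest controlled
# rotation in an n-qubit QFT has angle pi / 2**(n - 1).
from math import pi

for n in [10, 20, 30, 40, 50]:
    smallest_angle = pi / 2 ** (n - 1)
    print(f"{n:2d}-qubit QFT: smallest rotation ~ {smallest_angle:.3e} radians")
```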

Automatic scaling vs. manual scaling

The general expectation is that most scaling of algorithms can be performed automatically using simple mathematical formulas to guide the generative coding process. Unfortunately, that is not always the case.

Scaling of algorithms with other than linear integer inputs

Scaling is hard enough even when the input data is a simple, linear integer, but what approaches should be used when the input data is not a simple, linear integer? Unfortunately, the simple answer is that it can get complicated, and it varies greatly depending on the nature of the domain of interest. For example, the input data might be:

  1. Real numbers.
  2. Non-linear integers. Where the integer has some meaning other than its numeric value.
  3. An atom.
  4. A molecule.
  5. A protein.
  6. A drug.
  7. A map coordinate.
  8. More than one of any of the above.

Dramatic need for additional research

Existing quantum algorithms have generally been hand-coded, cobbled together in a rather ad hoc manner. Much additional research is needed in areas such as:

  1. Metaphors.
  2. Design patterns.
  3. Algorithmic building blocks.
  4. Libraries.
  5. Frameworks.
  6. Performance characterization and measurement.
  7. Scaling in general, automated scaling, automating more situations currently requiring manual scaling.

Need for application scaling roadmaps

Scaling is rarely a purely linear integer process. Rather, there will be identifiable milestones (stages) which correspond to advances considered notable. Some may be simple integral values, real values, or symbolic values such as atoms, molecules, proteins, and drugs. An application scaling roadmap should include:

  • The plain language descriptions of the milestones.
  • Some sense of the complexity of the milestones, in application-specific terms.
  • Some sense of the advances in quantum computational resources which will be required to achieve the milestones.
  1. Qubit count.
  2. Nines of qubit fidelity.
  3. Qubit connectivity.
  4. Quantum circuit depth.
  5. Shot count (circuit repetitions.)

Such a roadmap would make it possible to:

  • Characterize how much progress has been made.
  • Characterize how much progress remains to be made.
  • Highlight advances in quantum computational resources which will be needed to make further progress.
  • Gain a sense of when dramatic quantum advantage will be achieved, and what quantum computational resources will be required.

Scaling for the future vs. for existing quantum computers

Given the very limited capabilities of current and even near-term quantum computers, the primary interest for scaling is for the dramatically more capable quantum computers some years in the future — the machines on which dramatic quantum advantage will be possible.

Algorithm designers must consider scaling of shot count (circuit repetitions)

Currently, there is no consensus for how shot count (circuit repetitions) should scale as an algorithm is scaled up, but it is a serious issue warranting the attention of algorithm designers. To date, it has never gotten any such attention.
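
As one hedged baseline (an assumption for illustration, not a consensus), standard sampling statistics say that estimating a measurement probability to additive precision epsilon takes on the order of 1 / epsilon^2 shots, so tighter precision requirements at larger scale can quickly dominate total runtime.

```python
# One standard statistical baseline (an assumption, not a consensus): to
# estimate a measurement probability to additive precision eps, on the order
# of 1/eps**2 shots are needed.
from math import ceil

def shots_for_precision(eps: float) -> int:
    return ceil(1.0 / eps ** 2)

for eps in [0.1, 0.01, 0.001]:
    print(f"precision {eps}: ~{shots_for_precision(eps):,} shots")
```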

Published papers need to explicitly address scalability of all algorithms

Scaling of algorithms is neither free nor automatic. It must be explicitly addressed in the design, implementation, and testing of any algorithm. The discussion of scaling should be very robust, leaving no doubt as to whether and how the algorithm is scalable.

Timeframe

As mentioned in the introduction, the goal here is not scaling on current and near-term quantum computers per se, but to assure that algorithms designed and developed today will be automatically scalable to future quantum computers as they become available. When might that be?

  • Next two years? I’m confident that quite a few significant advances will occur over the next two years, but even they may not be enough to fully support the vision and model of scaling proposed by this informal paper.
  • Five years? A five-year horizon would seem more of a slam dunk for a more optimal combination of more advanced and capable classical quantum simulators coupled with real quantum computers which are robust enough to actually permit significant scaling of quantum algorithms.
  • Two to three years? One would hope that the two to three-year time period would bring us sufficient technical advances to enable a fair fraction of the capabilities detailed in this paper.
  • Three to five years? I have every expectation that fairly dramatic advances will be occurring on all fronts three to five years from now. The exact order and pace of advances is anybody’s guess.

Summary and conclusions

  1. Simulation of quantum circuits is a powerful and necessary tool for producing scalable quantum algorithms. A classical quantum simulator is needed for debugging, and also to capture intermediate quantum states to validate that execution of a quantum circuit is occurring as expected.
  2. Specify the nines of qubit fidelity to assure that simulation is realistic.
  3. Incremental testing and validation at a sequence of stages (sizes) is essential to demonstrating the scalability of a quantum algorithm.
  4. Run both perfect simulation and a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
  5. Generative coding of quantum circuits. Rather than hand-coding quantum circuits, classical code is used to generate the quantum circuit based on the specific input data and parameters. Much of the scaling of a quantum algorithm occurs at this step of each stage of scaling.
  6. This should be a valid model and framework, but does require higher capacity and higher performance simulators, as well as a presumption that qubit fidelity, coherence time, connectivity, and qubit count will quickly progress to the level where real quantum computers can correctly execute quantum circuits for 40 to 80 or more qubits.
  7. There may still be issues which only appear for much larger quantum algorithms which don’t crop up for smaller quantum algorithms, such as dependence on fine granularity of phase and probability amplitude. These can’t be detected at the scale of algorithms which can be simulated.
  8. Automated analysis tools are needed to detect coding patterns and algorithm design flaws which may only show up in algorithm results at qubit count and circuit sizes well beyond those which can be successfully tested on simulators or the real quantum computers which we can expect to use in the near term. Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
  9. Much fundamental research is still needed.
  10. Authors of papers should discuss in some detail the scalability of their algorithms, including scaling of shot count (circuit repetitions.) Testing and validation of scaling is preferable, but significant discussion is needed in any case.
  11. The exact timeframe for the various capabilities discussed in this paper is anybody’s guess. Don’t expect much over the next two years — people will need to rely on their own skill and discipline and the guidance from this paper. Various capabilities should appear as each year passes, so that many of them should be available within three to four years, with everything in place after about five years.
