Is Lack of Fine Granularity of Phase and Probability Amplitude the Fatal Achilles Heel Which Dooms Quantum Computing to Severely Limited Utility?

  • The focus of this paper is solely on gate-based quantum computers.
  • No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
  • The focus of this paper is solely on two-level quantum systems (qubits).
  • No claim is made as to whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar.
  1. In a nutshell
  2. The focus of this paper is solely on gate-based quantum computers
  3. The focus of this paper is solely on two-level quantum systems (qubits)
  4. Granularity and gradations of continuous values
  5. The essence of the problem
  6. The essence of the problem, more simply
  7. Dramatic quantum advantage — delivering on the full promise of quantum computing
  8. Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing
  9. Exponential speedup — the great promise of quantum computing, but can it really be delivered?
  10. Background
  11. What is a quantum Fourier transform (QFT)?
  12. The problem
  13. My conjecture
  14. Implications of my conjecture
  15. I can’t prove my conjecture, but technical specifications don’t disprove it either
  16. I’m at least half-convinced that my conjecture is true
  17. To be clear, my conjecture is still only tentative and may yet be proven wrong
  18. Utility: quantum parallelism and quantum advantage
  19. How severely limited will utility be?
  20. People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
  21. Beware of Quantum Algorithms Dependent on Fine Granularity of Phase
  22. Quantum Fourier transform is critically dependent on fine granularity of phase
  23. Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase
  24. Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage
  25. What is a DAC (Digital to Analog Converter)?
  26. Where is the DAC?
  27. The DAC effectively limits the granularity of phase and probability amplitude
  28. Analog signals likely also severely limit the granularity of phase and probability amplitude
  29. Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude
  30. The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude
  31. The qubit itself limits the granularity of phase and probability amplitude
  32. Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude
  33. Some ultimate limit at the Planck level as well as quantum uncertainty itself
  34. General uncertainties around all factors
  35. It’s not clear which of the factors is the more limiting factor
  36. I personally don’t know enough about classical digital and analog electronics to figure all of this out
  37. Gate execution and qubit measurement
  38. Fidelity of SWAP gates
  39. Do the same limits apply to both phase and probability amplitude? Likely but not certain
  40. How did this all happen? Too many silos with gaps between them
  41. People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
  42. When will we hit the wall and start seeing the problem? Maybe two to three years
  43. Quantum information — discrete binary and continuous values
  44. Some alternatives
  45. Are there any reasonable alternatives to quantum Fourier transform?
  46. What might be more powerful than quantum Fourier transform?
  47. No, variational methods don’t show any promise of delivering any dramatic quantum advantage
  48. Are there any reasonable alternative approaches to implementation of quantum Fourier transform?
  49. Might an alternative quantum computing architecture or programming model avoid my concern?
  50. Is a radically different technology and architecture, and maybe even programming model required to get past my concern?
  51. Might placing the classical electronics in the cryostat reduce interference to enable a few more bits of precision?
  52. Bits of precision vs. qubits of precision
  53. Currently this limitation is a non-issue since current limits on qubit fidelity and connectivity prevent running into these limits
  54. The exponential math issue
  55. Inverse quantum Fourier transform
  56. Euler’s formula
  57. Two pi conversion factor from normalized phase to phase angle in radians
  58. Phase angle of two superposed basis states
  59. Mapping phase angle to the input range for the DAC
  60. Limited precision of entries in a unitary transform matrix
  61. Total qubits vs. quantum Fourier transform precision
  62. Pulses to control qubits
  63. Pulse control
  64. No hints about pulse precision in IBM Qiskit Pulse documentation and tutorials
  65. Notes on IBM pulse control
  66. How many qubits is reasonable to expect for a quantum Fourier transform?
  67. 10–12 qubits seems like a slam dunk, maybe 14–16 qubits
  68. 16 to 18 qubits is likely a slam dunk for two to three years from now
  69. 20 or 24 qubits as a reasonable expectation for what we might have in two to three years
  70. 28 to 32 qubits seems out of reach, but who really knows
  71. 36 to 40 qubits is clearly beyond the realm of what we can expect, but it’s worth contemplating as a theoretical matter
  72. Nobody should be expecting quantum Fourier transform for more than 40 qubits
  73. 44 to 48 qubits are well beyond the realm of expectation for what a quantum Fourier transform could handle
  74. Demand transparency — transparency is mandatory
  75. Insist or demand that all limitations be clearly documented
  76. Transparency is needed everywhere
  77. No transparency on internal electronics for digital and analog circuitry
  78. Transparency needed for number of gradations for phase and probability amplitude
  79. The number of gradations can’t be greater than the DAC resolution, by definition
  80. Related circuitry and noise may reduce the absolute resolution of the DAC
  81. An open source design for a quantum computer would be helpful
  82. Quantum advantage must be fully disclosed and adequately discussed
  83. What’s your quantum advantage?
  84. What other approaches are there if quantum Fourier transform isn’t available?
  85. Are there ANY known opportunities to address the root issues? Nope
  86. Isn’t there ANY pathway to fine granularity of phase and probability amplitude? Nope.
  87. Isn’t there ANY pathway to quantum Fourier transform for over 50 qubits? Nope.
  88. What about Shor’s factoring algorithm? Sorry, but it will only work for small numbers, not very large numbers
  89. Won’t banding or approximate quantum Fourier transform (AQFT) resolve the issue for Shor’s factoring algorithm? No, not really
  90. What about derivative algorithms of Shor’s factoring algorithm? Not if they still rely on quantum Fourier transform
  91. Large-scale physics simulations may still be feasible even without support for quantum Fourier transform
  92. Computer science experiments can also achieve significant quantum advantage
  93. Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) can achieve dramatic quantum advantage
  94. What about quantum computational chemistry using quantum phase estimation (QPE)? Maybe, sometimes, it depends
  95. Still open questions as to the quality of the final quantum Fourier transform result even if a certain number of bits is supported
  96. Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude
  97. How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? Unknown!
  98. Quantum amplitude amplification has similar issue to phase granularity
  99. Simulators need to be configured to reflect granularity of phase and probability amplitude
  100. No hands-on testing or empirical data
  101. Need for benchmarking
  102. Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications
  103. Clarion call: All silos on deck!
  104. Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away
  105. 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing
  106. But start with an upgraded 27-qubit quantum computer
  107. Maybe even a 36-qubit stepping stone
  108. We’re stuck in quantum toyland with no prospect of escape
  109. But maybe what seems toy-like to some may actually be great for others
  110. Once again, my conjecture is still only tentative and may yet be proven wrong, but I’m not holding my breath
  111. Never say never
  112. Now what? What’s next?
  113. Focus on more research to potentially discover alternative approaches
  114. A golden opportunity for some fresh academic research
  115. My original proposal for this topic
  116. Summary and conclusions

In a nutshell

  1. The focus of this paper is solely on gate-based quantum computers. No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
  2. The focus of this paper is solely on two-level quantum systems (qubits). Whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar is unknown.
  3. Granularity and gradations of continuous values. Any range of continuous values, such as a phase angle or probability amplitude, theoretically contains an infinity of discrete values, but in practice there will only be a finite number of discrete values or gradations in any range. The range has a granularity — the minimum numeric spacing between any two gradations or discrete values. A range might be fine-grained or coarse-grained — fine granularity with many gradations, or coarse granularity with few gradations.
  4. My conjecture: Lack of fine granularity of phase and probability amplitude severely limits the utility of quantum computing. Severely limits quantum advantage.
  5. I can’t absolutely prove this, but I am highly confident.
  6. I offer it as a conjecture, but I also assert it as a fact.
  7. I sincerely wish it were false, but I am very afraid that it is true.
  8. Dramatic quantum advantage — delivering on the full promise of quantum computing. The goal, but in many or even most cases we won’t achieve it.
  9. Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing. In many or most cases it will be the best we can do.
  10. Exponential speedup — the great promise of quantum computing, but can it really be delivered? Only to a limited degree, not as limitless as promised.
  11. The essential root of the problem is that quantum information is a hybrid of discrete digital 0 and 1, which can be superposed and entangled, plus phase and probability amplitude, which are continuous values, at least in theory, but an infinity of continuous values cannot be practically implemented on a real physical device.
  12. The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2^n, which is a very large number for even n of moderate size (40 to 50 qubits).
  13. The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.
  14. The resolution, in bits, of the DAC (digital to analog converter) hardware is likely the key limiting factor.
  15. The DAC limits the number of discrete analog voltage levels that can be applied to rotate the quantum state of a qubit.
  16. And the number of discrete analog voltage levels is likely also fairly limited. Not the billions, trillions, quadrillions, and more that quantum computing would seem to require.
  17. Analog signals likely also severely limit the granularity of phase and probability amplitude.
  18. Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude.
  19. The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude.
  20. The qubit itself limits the granularity of phase and probability amplitude.
  21. Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude.
  22. Some ultimate limit at the Planck level as well as quantum uncertainty itself.
  23. General uncertainties around all factors. Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity.
  24. It’s not clear which of the factors is the more limiting factor.
  25. I personally don’t know enough about classical digital and analog electronics to figure all of this out.
  26. Do the same limits apply to both phase and probability amplitude? Likely but not certain.
  27. The number of gradations can’t be greater than the DAC resolution, by definition.
  28. Any technique which relies on fine granularity of phase or probability amplitude will be severely limited in its quantum advantage.
  29. Algorithms and applications can still function, but are unlikely to achieve dramatic quantum advantage. Only a limited degree of exponential speedup, not as limitless as promised.
  30. More modest advantage is more likely up to 1,000X or maybe even 1,000,000X. But not the kind of dramatic quantum advantage that has been talked about and promised for quantum computing. Not one quadrillion X or beyond.
  31. How did this all happen? Too many silos with gaps between them.
  32. The reason that people don’t currently see a problem is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass, the problem will become manifest.
  33. When will we hit the wall and start seeing the problem? Hard to say for sure. Not in the next year to 18 months, but likely within two to three years.
  34. Quantum Fourier transform is critically dependent on fine granularity of phase. This dramatically limits its quantum advantage.
  35. The root of the problem is the divisor of 2^n in the exponential term in the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotations required for quantum Fourier transform. That only works for smaller values of n, not for n greater than 32, or maybe even 20 to 24.
  36. This issue is not specific to any particular qubit technology. There may be differences for each qubit technology, particularly on the analog signal front, but the DAC issue and general limit on granularity will be common to all qubit technologies.
  37. Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase. Will encounter the same limitations on quantum advantage.
  38. Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage.
  39. Quantum Fourier transform (QFT) is the most powerful algorithmic building block available.
  40. Many applications rely on quantum Fourier transform to achieve dramatic quantum advantage.
  41. Nothing more powerful than quantum Fourier transform.
  42. Maybe there are alternatives, but…
  43. No obvious alternative to quantum Fourier transform.
  44. No obvious alternative implementations of quantum Fourier transform.
  45. Some physics simulations may still be possible. If they rely on nearest neighbor connectivity but not fine granularity of phase.
  46. Some quantum computational chemistry computations using quantum phase estimation (QPE) may still be possible. It will vary based on how many bits of precision are needed.
  47. Some computer science experiments may still be possible. Again if they don’t rely on much more than nearest neighbor connectivity or on fine granularity of phase.
  48. No obvious alternative architectures for quantum computing that address the concern of this paper. Maybe there could be some, but it seems unlikely.
  49. Banding for quantum Fourier transforms or Coppersmith’s approach of approximate quantum Fourier transform (AQFT) won’t overcome the problem. Those two approaches are more effective at minimizing computation (gate count and circuit size) rather than achieving greater precision. Even if they do reduce the magnitude of the problem, they are still unlikely to overcome the precision limitations for DAC hardware.
  50. Shor’s factoring algorithm to crack large public encryption keys? No way. Period. Not now. Not ever. Can work for relatively small numbers, but not for hundreds or thousands of bits. Put simply, an exponential divisor of 2⁴⁰⁹⁶ absolutely won’t work for a real DAC and real analog signals. This extreme view relies on the conjecture of this paper being true — and I believe it is.
  51. Maybe someday someone will invent a new factoring algorithm which can actually work on a real quantum computer for 4096-bit public encryption keys, but it won’t be able to use quantum Fourier transform or rely on fine granularity of phase or probability amplitude.
  52. Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude.
  53. How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? It’s not clear that they do deal with it at all.
  54. Quantum amplitude amplification has a similar issue to phase granularity.
  55. Simulators need to be configured to reflect granularity of phase and probability amplitude.
  56. No hands-on testing or empirical data. My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.
  57. Need for benchmarking. Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude, as well as benchmarking of quantum Fourier transform (QFT), quantum phase estimation (QPE), and quantum amplitude estimation (QAE).
  58. Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications. The number of gradations supported. As well as the qubits supported for quantum Fourier transform (QFT).
  59. Clarion call: All silos on deck! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z, and to ensure that the issue either gets resolved, fully or partially, or is properly documented as a limitation if it can’t be fully resolved.
  60. Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away.
  61. 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing. With the potential for a 20-qubit quantum Fourier transform which could provide a million to one computational advantage.
  62. But start with an upgraded 27-qubit quantum computer. Get the qubit fidelity and connectivity up to the level where fine granularity of phase is the gating factor for a ten to twelve-qubit quantum Fourier transform.
  63. Maybe even a 36-qubit stepping stone. If 48 qubits is still too ambitious. Support a 15 to 16-qubit quantum Fourier transform.
  64. It does indeed appear that we are stuck in quantum toyland, with no prospect of escape.
  65. But maybe what seems toy-like to some may actually be great for others.
  66. Once again, my conjecture is still only tentative and may yet be proven wrong. But I’m not holding my breath. Okay, maybe I am.
  67. I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve the issue with something approaching absolute 100% definitive certainty.
  68. Now what? What’s next? We may be screwed. We’ll just have to wait and see, and lower our expectations for many applications, to fractional quantum advantage. And focus on more research to discover alternative approaches.
  69. Focus on more research to potentially discover alternative approaches. There’s no guarantee that research will discover some magical solution, but… you never know. And generally research is a good thing, no matter what.

The focus of this paper is solely on gate-based quantum computers

The focus of this paper is solely on gate-based quantum computers. No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.

The focus of this paper is solely on two-level quantum systems (qubits)

Although the main focus of this paper is on two-level quantum systems (qubits), it is possible that quantum systems with more than two levels (e.g., qutrits or qudits) might be impacted somewhat differently than the two-level systems described in this paper. I don’t know of or even suspect any differences, but I simply do not know, and I’m not making any representations either way.

Granularity and gradations of continuous values

Although the binary 0 and 1 values of a bit or qubit are discrete, other aspects of the quantum state of a qubit are continuous values, namely phase and probability amplitude, which are nominally real numbers between 0.0 and 1.0. Since they are continuous, they are not discrete — nominally there is an infinity of possible values in any range. But in practice there will only be a finite number of discrete values or gradations in any range. The range has a granularity, the minimum numeric spacing between any two gradations or discrete values. A range might be either:

  1. Fine-grained. Fine granularity with many gradations, many discrete values.
  2. Coarse-grained. Coarse granularity with few gradations, few discrete values.

The essence of the problem

Oversimplifying, the problem comes from a sequence of processing steps (see the sketch which follows this list):

  1. Quantum Fourier transform is essential for many of the key applications of quantum computing.
  2. Quantum Fourier transform requires very fine-grained rotations of qubit state.
  3. Fine-grained rotation of qubit state requires a fine-grained analog signal.
  4. A digital to analog converter (DAC) is used to convert a discrete digital level to an analog voltage level.
  5. A DAC has a very limited resolution. From 8 to 32 bits or so.
  6. This limits how fine-grained the qubit rotation can be. The analog signal also limits granularity — the number of gradations or discrete levels.
  7. The digital qubit rotation is derived from an exponential term in the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotation.
  8. The exponential term has a divisor which is 2^n for an n-bit quantum Fourier transform.
  9. That’s fine and easy for small values of n, but becomes harder for moderate sizes of n, and impossible for large values of n. Values of n below 14 or so are easy. Values of n up to 18 are harder but doable. Values of n from 20 to 24 get harder. Values of n above 24, 32, 40, 50, and higher are even harder and eventually impossible. Values of n in the hundreds, thousands, or millions are outright impossible.
  10. In short, a quantum Fourier transform cannot handle data wider than about 20 to 32 qubits.
  11. Other techniques must be used for larger data, but can’t achieve the dramatic quantum advantage of quantum Fourier transform for larger data.
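
To make that arithmetic concrete, here is a minimal sketch in plain Python (the choice of a 32-bit DAC is only an assumption for illustration, being the upper end of the 8 to 32 bit range discussed in this paper) comparing the number of distinct phase gradations an n-qubit quantum Fourier transform nominally calls for against the number of discrete levels a k-bit DAC can produce:

```python
import math

def qft_gradations(n):
    """Nominal number of distinct phase values an n-qubit QFT calls for: 2^n."""
    return 2 ** n

def dac_levels(k):
    """Number of discrete output levels of a k-bit DAC: 2^k."""
    return 2 ** k

DAC_BITS = 32  # assumed upper bound on current DAC resolution (illustrative)

for n in (10, 16, 20, 24, 32, 40, 50):
    needed = qft_gradations(n)
    available = dac_levels(DAC_BITS)
    smallest_angle = 2 * math.pi / needed  # finest rotation the QFT nominally requires
    status = "within DAC range" if needed <= available else "EXCEEDS DAC range"
    print(f"n={n:2d}: needs {needed:.3e} gradations, "
          f"smallest angle {smallest_angle:.3e} rad, {status}")
```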

The essence of the problem, more simply

  1. The essential root of the problem is that quantum information is a hybrid of discrete digital 0 and 1, which can be superposed and entangled, plus phase and probability amplitude, which are continuous values, at least in theory, but an infinity of continuous values cannot be practically implemented on a real physical device.
  2. The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2^n, which is a very large number for even n of moderate size (40 to 50 qubits).
  3. The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.

Dramatic quantum advantage — delivering on the full promise of quantum computing

The whole point of using a quantum computer is to gain a quantum advantage over a solution which is implemented by a classical computer. A dramatic performance advantage.

Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing

If a quantum solution is only delivering some performance advantage but far short of a dramatic quantum advantage, I call that a fractional quantum advantage. That may be fine for some or even many applications but it simply isn’t delivering on the full promise of quantum computing.

  1. Minimal quantum advantage. A 1,000X advantage over a classical computer.
  2. Substantial or significant quantum advantage. A 1,000,000X advantage over a classical solution.

Exponential speedup — the great promise of quantum computing, but can it really be delivered?

Even the most basic introduction to quantum computing hammers home the essential promise of quantum computing — exponential speedup, that with only n qubits you can effectively perform 2^n computations in parallel.

  • No, exponential speedup cannot be delivered as a general proposition for most cases, especially those at production scale, for most people most of the time.

Background

Many of the advanced programming techniques of quantum computing are expected to achieve their quantum advantage by exploiting the quantum Fourier transform (QFT), or more advanced techniques such as quantum phase estimation (QPE) or quantum amplitude estimation (QAE), which in turn rely on the quantum Fourier transform. But this key and very powerful algorithmic building block relies heavily on fine granularity of phase and probability amplitude, and in particular on a degree of fineness which may not even exist in reality.

Consider typical DAC (digital to analog converter) resolutions and the number of discrete values they support:

  1. 8-bit. 256 discrete values.
  2. 10-bit. 1024 discrete values.
  3. 12-bit. 4096 discrete values.
  4. 14-bit. 16K discrete values.
  5. 16-bit. 64K discrete values.
  6. 18-bit. 256K discrete values.
  7. 20-bit. One million discrete values.
  8. 24-bit. Sixteen million discrete values.
  9. 32-bit. Four billion discrete values.
  10. Beyond 32 bits. To the best of my current knowledge, at present there are no DACs beyond 32 bits.

Compare that with the qubit counts commonly discussed for quantum algorithms and applications:

  1. 50 qubits.
  2. 64 qubits.
  3. 72 qubits.
  4. 80 qubits.
  5. 96 qubits.
  6. 128 qubits.
  7. 256 qubits.
  8. Hundreds of qubits.
  9. Thousands of qubits.
  10. 2048 qubits. To crack a 1024-bit encryption key.
  11. 4096 qubits. To crack a 2048-bit encryption key.
  12. 8192 qubits. To crack a 4096-bit encryption key.
  13. Millions of qubits.

Versus the much more modest quantum Fourier transform sizes that such DAC resolutions suggest might actually be supportable:

  1. 20 qubits.
  2. 24 qubits.
  3. 32 qubits.

What is a quantum Fourier transform (QFT)?

In all honesty, quantum Fourier transform (QFT) is a very complex topic and no short summary could do it justice, so I won’t try, other than to make the following brief statement:

  • The purpose of a quantum Fourier transform (QFT) is to map or compress (transform) a very large but periodic solution space into a very small integer value.
  • Generally, a 2^n solution space is compressed or mapped (transformed) to n qubits.
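
As a rough illustration, here is a sketch using Qiskit’s circuit-library QFT (a real class, though exact APIs vary across Qiskit versions, so treat this as a sketch assuming a reasonably recent installation). The textbook construction of an n-qubit QFT uses controlled-phase rotations whose smallest angle is pi / 2^(n-1), which is exactly where the demand for fine granularity of phase comes from:

```python
import math
from qiskit.circuit.library import QFT  # Qiskit's textbook QFT construction

n = 8  # number of qubits in the transform
qft = QFT(num_qubits=n)

# The smallest controlled-phase rotation in the textbook QFT circuit.
smallest_rotation = math.pi / 2 ** (n - 1)
print(f"{n}-qubit QFT, smallest rotation angle: {smallest_rotation:.3e} radians")

# Decompose into elementary gates to see how many rotations are involved.
print(qft.decompose().count_ops())
```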

The problem

We start with the baseline assertion:

  • Quantum Fourier transform (QFT) is absolutely essential to realize many or most of the more intensive promises made on behalf of quantum computing in terms of achieving a dramatic quantum advantage in performance over classical computing.

My conjecture

I offer it as a conjecture, to be considered and then reasoned or proven true or false:

  • Lack of fine granularity of phase and probability amplitude is the fatal Achilles heel which dooms quantum computing to severely limited utility.
  • Limited granularity of phase and probability amplitude limits the utility of quantum algorithms to 24 to 36 qubits.

Implications of my conjecture

If my conjecture is true, then there are implications:

  1. A quantum algorithm can evaluate or consider no more than a few billion possibilities, at most.
  2. Maybe a lot less — maybe only a million or even a thousand or a hundred.
  3. And once shot count (circuit repetitions) is taken into account, any apparent advantage may get divided down to a much smaller advantage or even no advantage at all.
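
As a toy illustration of that last point (all numbers here are purely hypothetical), the effective advantage is roughly the raw computational advantage divided by the number of shots needed to extract a statistically meaningful result:

```python
raw_advantage = 2 ** 20   # hypothetical 20-qubit quantum parallelism, roughly 1,000,000X
shots = 10_000            # hypothetical number of circuit repetitions needed for a stable result

effective_advantage = raw_advantage / shots
print(f"Raw: {raw_advantage:,}X; after {shots:,} shots: roughly {effective_advantage:,.0f}X")
```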

I can’t prove my conjecture, but technical specifications don’t disprove it either

No current vendor documents the granularity of phase or probability amplitude.

I’m at least half-convinced that my conjecture is true

Although I can’t prove my conjecture to 100% absolute certainty, I do feel relatively confident that it is likely true.

To be clear, my conjecture is still only tentative and may yet be proven wrong

I am indeed reasonably convinced that my conjecture is true. Although I concede that there is some non-zero chance that I’m wrong, so I don’t consider my conjecture to be absolutely final or definitive, yet.

Utility: quantum parallelism and quantum advantage

What do we mean by utility? Beyond the simple function of a program, which can be implemented even on a classical computer, by the utility of a quantum computer, quantum algorithm, or application we mean that a significant performance advantage is delivered. It has some significant usefulness that is well beyond what can be achieved with a classical computer.

How severely limited will utility be?

If lack of fine granularity of phase and probability amplitude does severely limit the utility of quantum computers, algorithms, and applications, how severe might those limits be?

Recall the two rough thresholds for quantum advantage:

  1. Minimal quantum advantage. A 1,000X advantage over a classical computer.
  2. Substantial or significant quantum advantage. A 1,000,000X advantage over a classical solution.

And a fuller spectrum of possible advantage (where an advantage of roughly 2^n corresponds to n qubits):

  1. 2X. Barely twice the performance of a classical computer.
  2. 4X-10X. Maybe ten times the performance of a classical computer.
  3. 20X-50X.
  4. 100X.
  5. 1,000X = 10 qubits.
  6. 10,000X.
  7. 50,000X.
  8. 64K X = 16 qubits.
  9. 256K X = 18 qubits.
  10. 1,000,000X = 20 qubits. May be a near-term practical limit (2–3 years).
  11. 16M X = 24 qubits. May be a near-term practical limit (2–3 years).
  12. One billion X = 30 qubits. Probably beyond the near term.
  13. Four billion X = 32 qubits.
  14. 32–40 qubits. Maybe theoretically possible, eventually.
  15. 48 qubits. Definitely cannot be expected.

People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem

The reason that people don’t currently see a problem with algorithms dependent on fine granularity of phase or probability amplitude is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem.

Beware of Quantum Algorithms Dependent on Fine Granularity of Phase

This paper builds on and deepens technical support for a related paper that I wrote almost two years earlier, Beware of Quantum Algorithms Dependent on Fine Granularity of Phase.

Quantum Fourier transform is critically dependent on fine granularity of phase

Just to highlight that quantum Fourier transform will have significantly limited quantum advantage due to its reliance on fine granularity of phase.

Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase

Just to highlight that quantum phase estimation (QPE) and quantum amplitude estimation (QAE) will also have significantly limited quantum advantage due to their reliance on fine granularity of phase.

Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage

Just to highlight that any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage as quantum Fourier transform, quantum phase estimation, and quantum amplitude estimation.

What is a DAC (Digital to Analog Converter)?

As already mentioned briefly, the crux of the problem is the conversion from classical discrete digital values to classical continuous analog values and vice versa. These conversions are known as digital to analog conversion, performed by a DAC (digital to analog converter), and analog to digital conversion, performed by an ADC (analog to digital converter).

Typical DAC resolutions and the number of discrete values they support:

  1. 8-bit. 256 discrete values.
  2. 10-bit. 1024 discrete values.
  3. 12-bit. 4096 discrete values.
  4. 14-bit. 16K discrete values.
  5. 16-bit. 64K discrete values.
  6. 18-bit. 256K discrete values.
  7. 20-bit. One million discrete values.
  8. 24-bit. Sixteen million discrete values.
  9. 32-bit. Four billion discrete values.
  10. Beyond 32 bits. To the best of my current knowledge, at present there are no DACs beyond 32 bits.

Some of the major DAC vendors:

  1. Analog Devices.
  2. Cirrus Logic.
  3. Texas Instruments (TI).

Where is the DAC?

Not so much where it is physically (in the control rack, the boxes of electronics outside the fridge, the cryogenic dilution refrigerator or cryostat, which send all of the coax cables into the fridge), but where it sits logically in terms of code and data movement.

The DAC effectively limits the granularity of phase and probability amplitude

The DAC effectively controls access to the quantum state of the qubits since it generates the pulses which control the qubits. If the DAC can only handle 2^k discrete values for a relatively small value of k (no more than 32 or so), that severely limits the granularity or gradations of phase and probability amplitude.

Analog signals likely also severely limit the granularity of phase and probability amplitude

Even if the DAC could handle a lot more bits, it’s not clear that an analog signal could handle a comparable number of discrete voltage levels. The number of discrete analog voltage levels is likely fairly limited. Maybe thousands or even millions, but likely not more than low billions, and not the trillions, quadrillions, and more that quantum computing would seem to require.

Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude

There may also be additional analog circuitry after the DAC itself which might compromise or distort the analog signal before it is sent to the qubit itself, which could also limit the granularity of phase and probability amplitude.

The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude

Generally, and depending on the specific qubit technology, the analog signal that comes out of the DAC and related analog circuitry will eventually have to be transported to the qubit, generally as an electromagnetic signal, typically either a microwave pulse or a laser pulse. It is then received by the qubit, where it once again becomes an analog signal, which is applied to the quantum device itself within the qubit.

The qubit itself limits the granularity of phase and probability amplitude

Once the analog signal comes out of the DAC, travels through additional analog circuitry and devices, has been transported to the qubit, and is actually received by the qubit itself, then the qubit state is manipulated at the quantum level. We have no transparency on how reliably the analog signal is applied to the quantum state in the qubit itself, but it seems reasonable to presume that this process might not guarantee the fidelity of the granularity of phase and probability amplitude.

Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude

Once the analog signal has been transported and received by the qubit and additional analog logic is used to apply the analog signal to the quantum state of the qubit itself, we encounter the ultimate limiting factor: the physics of the underlying quantum phenomenon used to implement the quantum state of the qubit. That physics, the quantum physics of manipulating quantum state at the quantum level, will be the ultimate limit on the granularity of phase and probability amplitude.

Some ultimate limit at the Planck level as well as quantum uncertainty itself

There is likely some ultimate limit down at the Planck level which absolutely limits granularity of phase and probability amplitude, and hence the maximum number of gradations.

General uncertainties around all factors

Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity, so that a measured value might have a coarser granularity than can be initialized or controlled.

It’s not clear which of the factors is the more limiting factor

In truth, I’m not sure which is the more limiting factor:

  1. Discrete values handled by the DAC. Resolution of k means 2^k discrete values.
  2. Limits of discrete voltage levels for an analog signal. Could it really be in the millions, billions, trillions, and quadrillions and much higher?
  3. Limits of additional analog circuitry after the DAC. Necessary analog circuitry after the DAC but before the analog signal is sent to the qubit itself may also impose limitations.
  4. Transmission of the analog signal to the qubit. Conversion of analog to electromagnetic signal (microwave or laser), and reception by the qubit and conversion back to analog.
  5. Application of the received analog signal to the quantum state of the qubit. How reliably can the phase or probability amplitude be applied to the quantum state in the qubit.
  6. The physics of the underlying quantum phenomenon used to implement the quantum state by the qubit. No matter how good our design and the quality of our science and engineering prowess, the underlying physics is what it is. If only we knew what that is.
  7. Some ultimate limit at the Planck level as well as quantum uncertainty itself.
  8. General uncertainties around all factors.

I personally don’t know enough about classical digital and analog electronics to figure all of this out

This is an informal paper. As such, it does not include all of the fine detail of classical digital and analog electronics which would be necessary to fully evaluate all of the questions and issues that this paper raises.

Gate execution and qubit measurement

The hardware needed to execute quantum logic gates will undoubtedly have some interaction with the granularity of phase and probability amplitude. I don’t have any specific knowledge here, but there is likely to be some impact on the accuracy of the results of a quantum computation. That impact could be:

  1. Relatively minor or insignificant.
  2. Modest to moderate.
  3. Significant to extreme.
  4. Prohibitive. Results are unusable.

Fidelity of SWAP gates

SWAP gates are really no different from any other two-qubit quantum logic gate; in fact, a SWAP gate is typically just a macro operation which expands to three CNOT gates. But SWAP has a special role in simulating connectivity between qubits which do not have native connectivity: when two qubits are not adjacent, one or both of them must be repeatedly swapped with adjacent qubits until the quantum state of the original two qubits has been moved to two physically adjacent qubits where the desired two-qubit gate can be executed. Any issues with the granularity of phase and probability amplitude can be amplified dramatically if numerous SWAP gates — a so-called SWAP network or qubit routing — are required.
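
A minimal sketch (assuming Qiskit is installed) of the standard three-CNOT decomposition of a SWAP gate, which is why long SWAP networks multiply any per-gate granularity and fidelity issues:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2)
qc.swap(0, 1)

# Decompose the SWAP into its standard three-CNOT implementation.
decomposed = qc.decompose()
print(decomposed.count_ops())  # expected: an op count showing three 'cx' gates
print(decomposed)
```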

Do the same limits apply to both phase and probability amplitude? Likely but not certain

Since phase and probability amplitude are combined into the same complex number, I presume that they have the same limitations, but that’s at the physics modeling level, so it doesn’t necessarily tell us what limits apply down at the level of the actual physical phenomenon.

How did this all happen? Too many silos with gaps between them

It’s common for engineers and other technical staff to work in their own narrow and relatively isolated silos with only formally-negotiated interfaces between them. The net result is gaps in knowledge between silos, especially between silos which are not adjacent. Consider how many silos are involved:

  1. Theoretical mathematicians.
  2. Applied mathematicians.
  3. Computer science researchers.
  4. Applied computer scientists.
  5. Application developers.
  6. Algorithm designers.
  7. Algorithm researchers.
  8. Software interface designers.
  9. Software interface developers.
  10. Quality assurance engineers.
  11. Quality assurance technicians.
  12. Digital electrical engineers.
  13. Analog electronics engineers.
  14. Analog component designers.
  15. Analog integrated circuit designers.
  16. Digital integrated circuit design engineers.
  17. Integrated circuit fabrication engineers.
  18. Applied physicists.
  19. Experimental physicists.
  20. Theoretical physicists.

People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem

The reason that people don’t currently see a problem is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. Current hardware precludes implementation of quantum Fourier transform with more than a few qubits, so people aren’t even trying to use quantum Fourier transform.

When will we hit the wall and start seeing the problem? Maybe two to three years

It’s hard to say for sure when we will hit the wall and people will start seeing the problem. Not in the next year to 18 months, but likely within two to three years.

Quantum information — discrete binary and continuous values

Quantum information, which is represented as quantum state in qubits, is a curious combination of discrete binary values, 0 and 1, and continuous values — phase and probability amplitude, which range from 0.0 to 1.0 as well as negative values for phase. It is those continuous values which are of interest in this paper.

Some alternatives

Even if my conjecture is correct, there might be some viable alternatives:

  1. Maybe there is an alternative approach to implementing quantum Fourier transform.
  2. Maybe there is an alternative to using quantum Fourier transform. An alternative algorithmic building block.
  3. Maybe there are alternatives that have as much or more power than quantum Fourier transform. An approach that is even better than quantum Fourier transform.
  4. Maybe placing the classical electronics in the cryostat might reduce interference to enable a few more bits of precision.

Are there any reasonable alternatives to quantum Fourier transform?

This is a great question, whether there are any reasonable alternatives to quantum Fourier transform, but nobody has a good answer at this stage. So far, quantum Fourier transform has been the best that the best people could come up with.

What might be more powerful than quantum Fourier transform?

This is really just a variation of the previous question. At present, quantum Fourier transform is simply the most powerful algorithmic building block that the best people have been able to come up with.

  1. Variational methods. They work, but don’t really offer a lot of computational power or any opportunity for truly dramatic quantum advantage. They might work extremely well for some niche applications, but nobody has discovered any yet.
  2. Order finding. Extremely powerful, the basis for Shor’s factoring algorithm, but it relies on quantum Fourier transform for much of its power anyway.
  3. Grover search. Only quadratic speedup. Not a viable alternative, even if it’s popular.
  4. That’s about it. Really. This list of known possibilities is that short.
  5. Fund more research! This is really important. Maybe some blue-sky research could uncover some possibilities.

No, variational methods don’t show any promise of delivering any dramatic quantum advantage

If quantum Fourier transform cannot be used, one category of alternatives is variational methods. Unfortunately, they are not anywhere near as powerful as quantum Fourier transform.

Are there any reasonable alternative approaches to implementation of quantum Fourier transform?

Once again, this is a great question, whether there are any reasonable alternative approaches to implementation of quantum Fourier transform, but nobody has a good answer at this stage. So far, the existing implementations of quantum Fourier transform have been the best that the best people could come up with.

Might an alternative quantum computing architecture or programming model avoid my concern?

Another great question: whether my concern is simply an artifact of the existing quantum computing architecture and programming model, so that some alternative quantum computing architecture or programming model might avoid my issue. But nobody has a good answer at this stage. So far, the existing quantum computing architecture and programming model have been the best that the best people could come up with. And there has been no hint of alternatives which avoid my issue.

Is a radically different technology and architecture, and maybe even programming model required to get past my concern?

Restating the point of the preceding section a bit more strongly, maybe we can only get past my concern with an approach to quantum computing that is not just moderately different but radically different.

  1. Qubit technology.
  2. Qubit control.
  3. Quantum processor architecture.
  4. Programming model.

Might placing the classical electronics in the cryostat reduce interference to enable a few more bits of precision?

Who knows, maybe it is possible that placing the classical electronics in the cryostat might reduce interference and noise to enable a few more bits of precision for granularity of phase and probability amplitude. That wouldn’t completely eliminate the issue, but give us a little breathing room.

  1. 32 bits? Since it might not otherwise be practical.
  2. 36 bits? Possibly.
  3. 40 bits? Maybe.
  4. 44 bits? This might be the practical limit.
  5. 48 bits? I kind of doubt it.

Bits of precision vs. qubits of precision

There is some ambiguity over whether we should always refer to bits of precision, as for classical digital circuitry such as a DAC, or qubits of precision, as when referring to a quantum Fourier transform.

Currently this limitation is a non-issue since current limits on qubit fidelity and connectivity prevent running into these limits

Although my concern is very real and looming over the horizon, it is not a near-term issue since the current limits on qubit fidelity and qubit connectivity prevent quantum circuits from running into this limitation of granularity of phase and probability amplitude.

The exponential math issue

The heart of the math for a quantum Fourier transform is an exponential term of the following form in the entries of the unitary transformation matrices:

  • EXP( two pi i j k / (2^n) )

Where the divisor of 2^n grows exponentially with n (a small numerical sketch follows the list below):

  1. n = 3, 2³ = 8.
  2. n = 8, 2⁸ = 256.
  3. n = 10, 2¹⁰ = 1024.
  4. n = 12, 2¹² = 4096.
  5. n = 16, 2¹⁶ = 64K.
  6. n = 20, 2²⁰ = one million.
  7. n = 24, 2²⁴ = sixteen million.
  8. n = 30, 2³⁰ = one billion.
  9. n = 32, 2³² = four billion.
  10. n = 40, 2⁴⁰ = one trillion.
  11. n = 50, 2⁵⁰ = one quadrillion.
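
To make the exponential term concrete, here is a minimal sketch in plain Python that evaluates EXP( two pi i j k / (2^n) ), the form of the entries in the QFT’s unitary transformation matrix (up to normalization), and prints the smallest nonzero angle involved for a 20-qubit transform:

```python
import cmath
import math

def qft_matrix_entry(j, k, n):
    """Entry (j, k) of the n-qubit QFT unitary, up to the 1/sqrt(2^n) normalization factor."""
    angle = 2 * math.pi * j * k / (2 ** n)
    return cmath.exp(1j * angle)

n = 20
print(qft_matrix_entry(1, 1, n))   # a rotation by 2*pi / 2^20 radians, very close to 1+0j
print(2 * math.pi / 2 ** n)        # the smallest nonzero angle: roughly 6e-06 radians
```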

Inverse quantum Fourier transform

While we’re on the exponential terms, it is worth noting that for each quantum Fourier transform there is also an inverse quantum Fourier transform which, as one might expect, transforms in the opposite direction.

  • Forward: EXP( two pi i j k / (2^n) )
  • Inverse: EXP( minus two pi i j k / (2^n) )

Euler’s formula

Just as an interesting aside, which might help guide intuition, there is a concept called Euler’s formula which says that the following two expressions are exactly equivalent:

  • EXP(i x)
  • COS(x) + i SIN(x)
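
A quick numerical check of Euler’s formula in plain Python:

```python
import cmath
import math

x = 0.123
lhs = cmath.exp(1j * x)               # EXP(i x)
rhs = math.cos(x) + 1j * math.sin(x)  # COS(x) + i SIN(x)
print(abs(lhs - rhs) < 1e-15)         # True, up to floating-point rounding
```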

Two pi conversion factor from normalized phase to phase angle in radians

If you recall the exponential term from three sections ago:

  • EXP( two pi i j k / (2^n) )

The factor of two pi converts a normalized phase in the range 0.0 to 1.0 to a phase angle in radians:

  1. 1.0 to two pi. 360 degrees, a full circle or cycle.
  2. 0.5 to pi. 180 degrees, a half circle or cycle.
  3. 0.25 to pi over two. 90 degrees, a quarter circle or cycle.

Phase angle of two superposed basis states

Just to make clear what is really going on, the quantum Fourier transform is referring to the phase angle for the two basis states, representing 0 and 1. Two aspects of that phase angle matter here:

  1. The gross magnitude of the phase angle.
  2. How small the differences can be between two phase angles.

Mapping phase angle to the input range for the DAC

As mentioned earlier, we start with phase in normalized form, 0.0 to 1.0 (as well as negative values), then a two pi conversion factor is thrown in to convert the normalized phase to a phase angle in radians, which is what appears in the unitary transformation matrix, in the exponential terms.
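
Here is a minimal sketch of that mapping in plain Python. The 16-bit DAC resolution and the simple linear mapping are assumptions for illustration only, not a description of any vendor’s actual electronics; the point is that rounding to one of the 2^k available DAC codes introduces a quantization error that can swallow a sufficiently small rotation entirely:

```python
import math

DAC_BITS = 16            # assumed DAC resolution, for illustration only
LEVELS = 2 ** DAC_BITS   # number of discrete DAC codes

def quantize_phase(normalized_phase):
    """Map a normalized phase in [0.0, 1.0] to the nearest representable DAC level.

    Returns (intended angle in radians, reconstructed angle, absolute error).
    """
    angle = 2 * math.pi * normalized_phase         # two pi conversion factor
    code = round(normalized_phase * (LEVELS - 1))  # nearest DAC code (assumed linear mapping)
    reconstructed = (code / (LEVELS - 1)) * 2 * math.pi
    return angle, reconstructed, abs(angle - reconstructed)

# The finest rotation a 30-qubit QFT nominally calls for: 1 / 2^30 of a full circle.
# The intended rotation rounds to DAC code 0 and is lost entirely.
print(quantize_phase(1 / 2 ** 30))
```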

Limited precision of entries in a unitary transform matrix

There’s no clarity as to how much precision is supported by the entries of unitary transform matrices (UTM) for various quantum computers and whether floating point exponents can be small enough to handle the very small exponents of exponentials which have a divisor of 2^n where n could be very large.
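
On the narrow question of floating-point exponent range, a quick sketch in plain Python (assuming the classical software uses IEEE double precision) shows that a double can represent 2 pi / 2^n for n well into the hundreds; it only underflows to exactly zero around n of 1075 or so. So the classical floating-point exponent is probably not the first bottleneck, although the precision of the matrix entries and of everything downstream of them remains unclear:

```python
import math

for n in (20, 50, 1000, 1074, 1080):
    # ldexp(x, -n) computes x * 2**(-n) without building the huge integer 2**n.
    angle = math.ldexp(2 * math.pi, -n)
    print(f"n={n:5d}: 2*pi / 2^n = {angle:.3e}")
# For n much beyond roughly 1075 the result underflows to exactly 0.0,
# but a real DAC runs out of resolution far, far earlier than that.
```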

Total qubits vs. quantum Fourier transform precision

This paper is not speaking to the total number of qubits used in a quantum computation or quantum circuit, or the total number of qubits in a quantum computer, but simply to the qubits dedicated to a single quantum Fourier transform.

Pulses to control qubits

Ultimately the execution of the quantum logic gates which comprise a quantum circuit involves translating the math of the entries of the unitary transformation matrix of each gate into a sequence or graph of one or more electromagnetic pulses which then directly operate on the quantum state of the specified qubits.

Pulse control

Pulses have a starting time, a duration, a shape, and an amplitude and phase. These parameters permit the hardware to generate the waveform for the pulse. And they are performed on a specified channel or device. Even a single qubit can have multiple channels.
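
As a rough illustration of that programming model, here is a sketch using the Qiskit Pulse builder interface discussed in the notes below (exact APIs have shifted across Qiskit versions, and Qiskit Pulse has been deprecated in recent releases, so treat this purely as a sketch). The amplitude is just a Python float in software; nothing in the interface reveals how finely the underlying DAC and analog chain can actually resolve it:

```python
from qiskit import pulse

# Build a simple schedule: play one Gaussian pulse on the drive channel of qubit 0.
with pulse.build(name="single_drive_pulse") as schedule:
    pulse.play(
        pulse.Gaussian(
            duration=160,  # in units of the backend sample time (dt)
            amp=0.05,      # a float in software; the hardware's effective resolution is opaque
            sigma=40,
        ),
        pulse.DriveChannel(0),
    )

print(schedule)
```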

No hints about pulse precision in IBM Qiskit Pulse documentation and tutorials

I skimmed through a fair amount of information about IBM Qiskit Pulse, but was unable to find anything that hinted at the precision or resolution of pulses.

Notes on IBM pulse control

You can mostly ignore this section since it’s merely notes that I made to myself when reviewing presentations on IBM Qiskit Pulse control. I preserved my notes here lest I lose track of where they were.

  1. https://qiskit.org/textbook/ch-quantum-hardware/calibrating-qubits-pulse.html
  2. https://qiskit.org/documentation/apidoc/pulse.html
  1. OpenPulse: Software framework for quantum computing with pulses
  2. https://www.youtube.com/watch?v=uBw2fo1rwr8
  3. Now called Qiskit Pulse
  4. Presenter: Lauren Capelluto, Quantum Software Engineer, IBM Research
  5. The quantum computing industry provides public access to superconducting qubit systems through open-source quantum computing frameworks such as Qiskit. Compilation techniques play a critical role in leveraging these small scale, noisy devices by driving down error rates in program execution. The compiler backend decomposes quantum operations into microwave pulses which aim to realize the desired quantum operations with the highest fidelity possible. We introduce OpenPulse, a pulse-level programming component of Qiskit, that hands over analog control of quantum computing systems to the user. Using OpenPulse, the user can specify the exact time dynamics of a program by scheduling arbitrary waveforms on control system resources, and can recover the time dynamics of the measured output. This is sufficient to allow the user to freely characterize, verify and validate the quantum system, and to explore gate optimization and error mitigation techniques to enhance system performance. OpenPulse enables the community to collectively push the field onwards towards practical computation.
  6. April 3, 2020
  7. Low-level pulse control
  8. Control electronics
  9. Default frequency settings
  10. Power user
  11. Quantum circuits translate to pulse schedules
  12. Pulse programming model
  13. Pulses — complex-valued amplitudes — phase and frequency
  14. Instructions and channels
  15. Control rack electronics
  16. DAC (Digital to Analog Converter)
  17. Diagram showing DAC and other major system components
    https://youtu.be/uBw2fo1rwr8?t=277
  18. Arbitrary waveform generator
  19. Play — passes user sample data to device
  20. Delay
  21. Sine wave generators
  22. Set frequency
  23. Shift phase
  24. Drive channel
  25. Signal channel
  26. Measure channel
  27. Qubit
  28. Readout resonator
  29. ADC
  30. Acquire — instruction for readout, to perform measurement
  31. Acquire channel
  32. Virtual channels
  33. Matching labels to drive lines
  34. Abstract away multiplexed readout
  35. Readout chain
  36. Three levels
  37. RAW: Level 0
  38. Raw data — direct output from ADC
  39. KERNELED: Level 1
  40. Kernel integrates raw data to create IQ points (IQ Plane — In-phase and Quadrature)
  41. DISCRIMINATED: Level 2
  42. Discriminator
  43. Classify — converts level 1 to qubit 1/0 state
  44. Ground state
  45. Excited state
  46. IQ scatter plots
  1. Qiskit Pulse: Programming Quantum Computers Through the Cloud with Pulses
  2. https://www.youtube.com/watch?v=V_as5PufUiU
  3. Centre for Quantum Technologies
  4. CQT Online Talks — Series: Quantum computation and simulation
  5. Speaker: Nicholas Bronn, IBM
  6. July 23, 2020
  7. Abstract: Qiskit is IBM’s open source framework for quantum computing, and Qiskit Pulse provides pulse-level control of deployed quantum computers over the cloud. In this talk, I will overview the control and measurement of superconducting transmon qubits with microwave pulses, programming such pulses with Qiskit Pulse, and the calibration and benchmarking of them. Finally, we’ll summarize some recent applications of Qiskit Pulse, including error mitigation, qubit measurement discrimination, and reduction of leakage outside the computational subspace.
  8. Pulse-level control
  9. Control of physical systems
  10. Hardware backend
  11. Characterize and mitigate errors
  12. Pulse is part of Terra. Based on OpenPulse
  13. Now Qiskit Pulse
  14. Qiskit pulse arxiv
    Qiskit Pulse: Programming Quantum Computers Through the Cloud with Pulses
    https://arxiv.org/abs/2004.06755
  15. Optimal control
  16. Error mitigation
  17. Reducing errors through dynamical decoupling
  18. Anatomy of a superconducting qubit
  19. Anharmonic oscillator
  20. Microwave pulses
  21. Cross Resonance: ZX Operation
  22. Physical channel in the lab. Output of an AWG that goes into a frig that is coupled to a qubit
  23. Acquire channel. Tells a digitizer to acquire a signal that’s coming back from a measurement signal
  24. Modulated waveform
  25. Circuit vs. pulse model
  26. Mixer skew
  27. Nonlinearity
  28. Leakage (from |1> to |2>)
  29. Mixed state
  30. Echo
  31. Rotary echo — reduce spectator errors
  32. Calibrating and benchmarking cross resonance (CR)
  33. Gaussian square pulse
  34. Process tomography
  35. Hamiltonian tomography
  36. Parametric pulses
  37. Pulse builder
  38. Error mitigation workflow. Especially for VQE. Perform Richardson Extrapolation.
    Error mitigation extends the computational reach of a noisy quantum processor
    https://www.nature.com/articles/s41586-019-1040-7
    https://arxiv.org/abs/1805.04492
  39. IQ plane. IQ is an abbreviation for In-phase and Quadrature phase components of a signal.
  40. Accessing higher orders of the transmon — qutrits

How many qubits is reasonable to expect for a quantum Fourier transform?

Some possibilities:

  1. 10–12 qubits. A slam dunk. Maybe not today, but one to two or three years out.
  2. 14–16 qubits. Likely also a slam dunk.
  3. 18 qubits. Should be a slam dunk as well.
  4. 20–24 qubits. May be the practical limit.
  5. 28 qubits. May or may not be feasible.
  6. 30 qubits. Hopefully feasible.
  7. 32 qubits. Likely limit of feasibility.
  8. 36 qubits. Not currently considered feasible, but maybe someday.
  9. 40 qubits. Really beyond the realm of practicality, but never say never.
  10. 44 qubits. Nope. Don’t even think about it.
  11. 48 qubits. Nope. Ditto.

10–12 qubits seems like a slam dunk, maybe 14–16 qubits

A quantum Fourier transform for ten to twelve qubits is certainly a slam dunk for two to three years from now, even if near-term quantum hardware still struggles with even a few qubits.

16 to 18 qubits is likely a slam dunk for two to three years from now

Sixteen to eighteen qubits is likely a slam dunk for what a quantum Fourier transform should be able to handle two to three years from now.

20 or 24 qubits as a reasonable expectation for what we might have in two to three years

Twenty to twenty-four qubits is likely a reasonable expectation for what a quantum Fourier transform should be able to handle two to three years from now, but not an absolute slam dunk.

28 to 32 qubits seems out of reach, but who really knows

Based on current DAC hardware, it seems unlikely that a quantum Fourier transform for twenty-eight to thirty-two qubits would be feasible, even two to three years from now. But predicting that far in the future is always a somewhat risky proposition.

36 to 40 qubits is clearly beyond the realm of what we can expect, but it’s worth contemplating as a theoretical matter

There are no current or expected DACs that could handle more than 32 bits, but it’s always worth speculating about how hardware could be extended as hardware technology evolves.

Nobody should be expecting quantum Fourier transform for more than 40 qubits

If even 32 qubits is too much to expect for a quantum Fourier transform and DAC, then clearly 34, 36, 38, and 40 qubits and beyond need to be treated as beyond the realm of what’s reasonable to expect.

44 to 48 qubits are well beyond the realm of expectation for what a quantum Fourier transform could handle

If forty qubits is well beyond the realm of what’s reasonable to expect for a quantum Fourier transform, then clearly forty-four to forty-eight qubits and beyond are flat-out beyond consideration.
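
To make the arithmetic behind these feasibility tiers concrete, here is a minimal sketch in Python, assuming (as this paper conjectures) that every distinct phase gradation in an n-qubit quantum Fourier transform must map to a distinct DAC output level. The function names are my own, purely for illustration.

```python
import math

def min_qft_rotation(n_qubits: int) -> float:
    """Smallest controlled-phase rotation angle in an n-qubit QFT: 2*pi / 2**n."""
    return 2 * math.pi / (2 ** n_qubits)

def dac_bits_required(n_qubits: int) -> int:
    """Under this paper's conjecture, an n-qubit QFT needs roughly 2**n distinct
    phase gradations, i.e. about n bits of DAC resolution."""
    return n_qubits

for n in (12, 20, 24, 32, 48):
    print(f"{n:2d} qubits: smallest rotation ~{min_qft_rotation(n):.2e} rad, "
          f"needs a ~{dac_bits_required(n)}-bit DAC")
```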

No, banding and approximate quantum Fourier transform (AQFT) won’t facilitate precision of quantum Fourier transforms

A large quantum Fourier transform can quickly result in very small values of phase, even zero or very close to zero. Banding (also known as approximate quantum Fourier transform (AQFT)) is a technique to skip over all of the zero values of phase and most of the very small values of phase which are so close to zero as to have no significant effect on the computation of the quantum Fourier transform. Omitting a large number of zero or near-zero values is unlikely to change the net result significantly, so banding simply skips over them — any matrix entries outside of a relatively narrow band. But banding only reduces the gate count; it does nothing to make the remaining, still very small phase values any easier for limited-granularity hardware to realize.
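
For the curious, here is a minimal sketch of what banding does, assuming the textbook QFT construction in which each target qubit receives controlled-R_k rotations of angle 2*pi/2^k for k from 2 up to the remaining register width. The band parameter m and the function name are my own illustration.

```python
def qft_rotation_counts(n_qubits: int, band_m: int):
    """Count controlled-phase rotations in a textbook n-qubit QFT, and how many
    survive banding (AQFT) that keeps only angles 2*pi/2**k with k <= band_m."""
    total = kept = 0
    for target in range(n_qubits):                  # each target qubit gets a cascade
        for k in range(2, n_qubits - target + 1):   # controlled-R_k, angle 2*pi/2**k
            total += 1
            if k <= band_m:
                kept += 1
    return total, kept

total, kept = qft_rotation_counts(32, band_m=10)
print(f"32-qubit QFT: {total} controlled rotations, {kept} kept, {total - kept} dropped")
```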

Demand transparency — transparency is mandatory

We need to push vendors to be fully transparent about what granularity they support for phase and probability amplitude. And how many qubits of precision they support for quantum Fourier transform.

  1. Expect transparency. Transparency is a reasonable default expectation.
  2. Assume transparency. We shouldn’t have to lift a finger to get transparency.
  3. Wait for transparency. Maybe hold off on products and services which lack sufficient transparency.
  4. Request transparency. We shouldn’t have to ask for transparency, but… Don’t be shy.
  5. Insist on transparency. We shouldn’t have to go beyond asking politely and nicely, but… Don’t be afraid to be assertive.
  6. Demand transparency. Hold their feet to the fire. Transparency shouldn’t be optional. And lack of transparency shouldn’t be tolerated.
  7. Transparency is mandatory. No ifs, ands, or buts. No excuses. Just do it!
  8. Without transparency, you have nothing!

The two key questions to ask:

  1. What is the granularity of phase and probability amplitude of your qubits? How many gradations?
  2. How many qubits of precision do you support for quantum Fourier transform?

Insist or demand that all limitations be clearly documented

Any and all limitations on the capabilities of quantum computers, algorithms, and applications should be clearly documented. We must insist on this. Even demand it.

Transparency is needed everywhere

Transparency is needed everywhere. Anywhere that a product or technology is mentioned. This includes:

  1. Documentation.
  2. Specifications.
  3. Source code, including comments.
  4. Press releases.
  5. White papers.
  6. Roadmaps.
  7. Marketing literature.
  8. Advertisements.
  9. Formal papers.
  10. Training materials.
  11. Tutorials.
  12. Educational materials.
  13. Lecture notes.
  14. General discussion.
  15. Media coverage.

No transparency on internal electronics for digital and analog circuitry

One of the most egregious areas of lack of transparency is the internal design of the classical digital and analog electronics. That would include any digital to analog converters (DACs) or analog to digital converters (ADCs).

  1. What DAC chip is used?
  2. What is the precision of the DAC chip? How many bits of precision and how many discrete digital values and analog voltage levels are supported?
  3. How many discrete analog voltage levels are supported? How many discrete values can be applied directly to the quantum state of a qubit?

Transparency needed for number of gradations for phase and probability amplitude

Just to emphasize the point from the preceding section: the precision of the DAC chip, meaning the number of discrete voltage levels it can output, determines how many gradations of phase and probability amplitude are supported.

The number of gradations can’t be greater than the DAC resolution, by definition

Just to highlight this very key point from the preceding section:

  • The number of gradations can’t be greater than the DAC resolution, by definition.

Related circuitry and noise may reduce the absolute resolution of the DAC

As alluded to in the preceding sections, there may be other factors which prevent the full rated resolution of the DAC from being usable, so that the number of gradations for phase and probability amplitude may be somewhat less than the full DAC resolution. I don’t have any specific knowledge or hard details, but some possibilities are (a small quantization sketch follows this list):

  1. Additional analog logic circuitry which might introduce noise or distort the signal.
  2. Oddities of the conversion from continuous real values in the range of 0.0 to 1.0 (and negative values for phase) to a discrete digital value from 0 to 2^n minus one. There might not be exactly 2^n discrete values going into the DAC.
  3. Unknown software factors. Oddities of how unitary transformation matrices and complex numbers are processed.
  4. Precision of floating point numbers and arithmetic.
  5. Environmental interference and noise. Digital signals might be relatively immune to noise, but analog signals can be especially vulnerable to environmental interference, especially if a very high number of gradations are supported.
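
As a rough illustration of how effective resolution (rather than rated resolution) is what matters, here is a crude quantization model in Python. The “effective bits” knob and the example numbers are my own assumptions, not measured data.

```python
import math

def quantize_phase(phase: float, effective_bits: int) -> float:
    """Crude model: snap a requested phase in [0, 2*pi) to the nearest of
    2**effective_bits levels, where noise and related circuitry may make the
    effective resolution lower than the DAC's rated bits."""
    step = 2 * math.pi / (2 ** effective_bits)
    return round(phase / step) * step

requested = 2 * math.pi / (2 ** 24)     # smallest rotation in a 24-qubit QFT
for bits in (16, 20, 24):
    delivered = quantize_phase(requested, bits)
    print(f"{bits} effective bits: requested {requested:.2e} rad, delivered {delivered:.2e} rad")
```

With 16 or 20 effective bits the requested rotation simply rounds to zero, which is the essence of the concern.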

An open source design for a quantum computer would be helpful

I don’t expect proprietary hardware vendors to respect transparency. But what would be helpful would be to have an open source design for a quantum computer.

Quantum advantage must be fully disclosed and adequately discussed

Quantum advantage is another area where everybody pays it lip service, but where there is very little if any actual transparency or disclosure.

What’s your quantum advantage?

The starting point would be the theoretical advantage. There must be a formula for the algorithmic complexity of the quantum algorithm or application. There must also be a comparison to the formula for algorithmic complexity of the best classical approach.
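
As a purely illustrative example of the kind of comparison being asked for, with made-up complexity formulas standing in for a real algorithm’s analysis:

```python
import math

def advantage(classical_ops: float, quantum_ops: float) -> float:
    """Quantum advantage expressed as the ratio of operation counts."""
    return classical_ops / quantum_ops

n = 30                                   # example problem size of 2**30 items
print("Grover vs. brute force:", advantage(2 ** n, math.sqrt(2 ** n)))
print("Exponential vs. polynomial:", advantage(2 ** n, n ** 3))
```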

What other approaches are there if quantum Fourier transform isn’t available?

What can we do if limited granularity of phase and probability amplitude precludes the effective use of quantum Fourier transform? Not a lot. We have:

  1. Grover’s search algorithm. But it offers only a quadratic speedup.
  2. Variational methods. Generally mediocre advantage, at best. Finicky and problematic in general, such as the barren plateau problem. No clear and guaranteed path to dramatic quantum advantage for all use cases.
  3. Quantum phase estimation (QPE). Oops… it relies on quantum Fourier transform.
  4. Quantum amplitude estimation (QAE). Oops… it relies on quantum Fourier transform.
  5. Quantum amplitude amplification. Oops… it relies on fine granularity of probability amplitude.
  6. Order finding. Oops… it relies on quantum Fourier transform.
  7. That’s about it. Besides ad hoc algorithms for specific scenarios.

Are there ANY known opportunities to address the root issues? Nope

Sorry, but we’re really in a bind here.

Isn’t there ANY pathway to fine granularity of phase and probability amplitude? Nope.

Sorry, but this is the ugly truth. There isn’t even a known theoretical path to fine granularity of phase and probability amplitude.

  • There isn’t ANY pathway to beyond maybe a few million or possibly a billion gradations of phase and probability amplitude.

Isn’t there ANY pathway to quantum Fourier transform for over 50 qubits? Nope.

Sorry, but this is the ugly truth. There isn’t even a known theoretical path to quantum Fourier transform for over 50 qubits — or even close to 50 qubits.

  • There isn’t ANY pathway to quantum Fourier transform for more than about 20 to 24 qubits or maybe 32 qubits at most. Transforming 50 or more qubits is out of the question.

What about Shor’s factoring algorithm? Sorry, but it will only work for small numbers, not very large numbers

Shor’s factoring algorithm can factor semiprime numbers, but only up to the limits of quantum Fourier transform discussed previously in this paper. And given the nature of how the algorithm works, the semiprime to be factored can have only half as many bits as the quantum Fourier transform can handle — because it requires squaring the input number, so factoring a 4096-bit public encryption key requires input and output registers each with 8192 qubits (see the sketch below).

  • Without very fine granularity of phase, Shor’s factoring algorithm will NOT work for very large semiprimes such as public encryption keys with 2048 or 4096 bits.
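
Here is the register-size arithmetic as a minimal sketch, using this paper’s working assumption that the quantum Fourier transform register is roughly twice the width of the semiprime being factored:

```python
def qft_width_for_semiprime(key_bits: int) -> int:
    """Per the discussion above, the QFT register is roughly twice the width
    of the semiprime being factored."""
    return 2 * key_bits

for key_bits in (16, 2048, 4096):
    width = qft_width_for_semiprime(key_bits)
    print(f"{key_bits}-bit semiprime -> ~{width}-qubit QFT -> "
          f"2^{width} phase gradations (roughly a {width}-bit DAC)")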

Won’t banding or approximate quantum Fourier transform (AQFT) resolve the issue for Shor’s factoring algorithm? No, not really

Banding (also known as approximate quantum Fourier transform (AQFT)) is a technique to eliminate the smallest terms in a quantum Fourier transform. Its purpose is to simplify the calculation of the quantum Fourier transform, to reduce the number of gates needed to perform the calculation. It does this by giving up some of the precision of the result of the transform. In short, it trades precision for performance.

  • Luckily, Coppersmith [1994] has shown that one can define an approximate Fourier transform that ignores these tiny phase factors, but which approximates the Fourier transform closely enough that it can also be used for factoring.
  1. Coppersmith informs us: “This leads us to define an Approximate Fourier Transform (AFFT), parameterized by an integer m”, where m is essentially the number of bits to preserve, discarding or skipping the remainder, but Shor doesn’t speak to how his algorithm would set m.
  2. Shor devotes only a single paragraph to this entire issue of tiny phase factors.
  3. It seems as if this was more of an afterthought for Shor than a central part of the algorithm. In fact, Coppersmith’s paper actually cites an earlier paper by Shor on his algorithm — which I don’t have access to — which presumably couldn’t have referred to Coppersmith’s paper, suggesting that this paragraph was indeed added later than Shor’s original algorithm.
  4. Shor doesn’t in fact actually incorporate Coppersmith’s approach into his own algorithm. All the reader has is this one paragraph of text, and Coppersmith’s algorithm standing by itself in a separate paper. And Coppersmith’s algorithm was based on Shor’s earlier paper, so it’s difficult to say how cleanly Coppersmith’s algorithm could be implemented in the context of Shor’s algorithm in his newer paper.
  5. Neither Coppersmith nor Shor actually gets around to dealing with exactly how small the terms can be. Both approaches are incomplete in that sense.
  6. Neither Coppersmith nor Shor addresses the issue of how many bits of precision are available for the quantum circuit designer when specifying universal transformation matrices. How small the matrix entries can actually be, and how many bits wide a single matrix entry for a gate can be. For example, is even double-precision floating point sufficient? Vendors of current quantum computing hardware don’t clearly document these details.
  7. At a minimum, Shor should have specified how Coppersmith’s input parameter m should be calculated. What math or formula should be used? Should it be based on the size (in bits) of the input semiprime, or the width of the quantum Fourier transform, which is twice as wide as the input semiprime (8192 bits for a 4096-bit input semiprime public key)? Both Coppersmith and Shor are silent on this critical factor.
  8. What mathematical proof, theorem, or principle can be used to validate the formula needed by the previous issue — calculation of m based on the size of the input semiprime?
  9. Shor doesn’t even mention Coppersmith’s m input parameter.
  10. Further, Shor failed to offer a proof, theorem, or principle that any bits could be dropped and still return a valid result, let alone how many bits could be dropped before the result would be invalid.
  11. Coppersmith’s approach is fine for transforms that are producing values intended to represent physical quantities where only a handful of digits of precision are needed — three to five digits plus a magnitude, but the result of the transform in Shor’s factoring algorithm is a ratio, not a physical quantity. It’s not clear if Shor’s algorithm would work if any of the bits of precision were lost or discarded, let alone a lot of the bits, which is what happens with banding or approximate quantum Fourier transform (AQFT).
  12. Shor simply doesn’t adequately address the question of how approximate the transform can be and still produce a valid result.
  13. Neither Coppersmith nor Shor provides a proof, theorem, or principle to justify what error would be acceptable for such an approximate quantum Fourier transform in order to return a result which would be valid for the ratio needed to calculate the order of the modular exponentiation sufficiently accurate to continue the process of deducing the factors of the input prime. No hand-waving, please. In fact, is any error acceptable in order to get the accuracy of the ratio needed to calculate the order? Show your math, as they say. Coppersmith has some math for calculating or estimating the error, but is silent on what the criteria are for an acceptable error.
  14. And finally, even if some bits can be dropped, it remains an open question whether real quantum hardware could properly execute gates with the remaining phase shifts.
  15. Neither Shor nor Coppersmith has parameterized their algorithms with the phase granularity that is actually supported by the hardware. This speaks to how many bits would have to be dropped before the hardware can actually correctly perform the requested computation, separate from whether that result is a valid result for the factoring algorithm itself.
  16. If the input were 4096 bits, the quantum Fourier transform would nominally be for 8192 bits, so even if m, the count of bits to preserve, were small enough to be feasible on real hardware, say 50, a huge number of bits would be discarded in the transform (see the gate-count sketch after this list). It’s just not credible that this would produce a valid result for Shor’s algorithm.
  17. Even if Coppersmith and Shor did manage to have a solution which mathematically works for deriving a valid transform result which theoretically can successfully be used to derive the factors of a semiprime, there is still the issue that the unitary transformation matrix and hardware (digital, DAC, analog, and quantum) might still be unable to implement that mathematics. Banding and Coppersmith’s AQFT approach can reduce the magnitude of the problem, but they are still unlikely to overcome the precision limitations for DAC hardware.
  18. Both Shor and Coppersmith seem more focused on reducing the amount of computation rather than the precision or phase granularity issue per se. In fact, Shor’s final words on this issue are: “In fact, this technique reduces the number of quantum gates needed for the (approximate) Fourier transform considerably, as it leaves out most of the gates Sj,k.” Leaves out most of the gates? How could that possibly produce a valid result for Shor’s algorithm? It’s just not credible.
  19. Overall, this issue is one of the weakest parts of Shor’s algorithm. Ultimately, it could be its fatal Achilles’ heel.
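
To put item 16 in perspective, here is a minimal gate-count sketch, assuming the textbook QFT construction and Coppersmith-style banding with parameter m; the construction details are my own assumptions for illustration.

```python
def aqft_gate_counts(n_qubits: int, band_m: int):
    """Controlled rotations in a full n-qubit QFT vs. a banded AQFT that keeps
    only rotations of angle 2*pi/2**k with k <= band_m."""
    full = n_qubits * (n_qubits - 1) // 2
    kept = sum(min(n_qubits - t - 1, band_m - 1) for t in range(n_qubits))
    return full, kept

full, kept = aqft_gate_counts(8192, band_m=50)
print(f"Full 8192-qubit QFT: {full:,} rotations; banded with m=50: {kept:,}; "
      f"dropped: {100 * (1 - kept / full):.1f}%")
```

Even on these generous assumptions, well over 98% of the rotations are discarded, which is the crux of the credibility concern raised above.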

What about derivative algorithms of Shor’s factoring algorithm? Not if they still rely on quantum Fourier transform

There are indeed a number of derivative algorithms or implementations of Shor’s factoring algorithm — many listed in my reference paper cited in the preceding section. I haven’t closely examined all of them, but to the extent that they still rely on quantum Fourier transform or on fine granularity of phase or probability amplitude, they will be in the same boat: limited to factoring semiprime integers no larger than about ten to sixteen bits, far short of the 2048 and 4096-bit public encryption keys that people have expected them to be able to factor.

  • Without very fine granularity of phase, any semiprime factoring algorithm which relies on quantum Fourier transform (QFT) will NOT work for very large semiprimes such as public encryption keys with 2048 or 4096 bits.

Large-scale physics simulations may still be feasible even without support for quantum Fourier transform

All may not be totally lost without fine granularity of phase and probability amplitude. Some large-scale physics simulations may actually be possible even without the need for quantum Fourier transform or quantum phase estimation or fine granularity of phase or probability amplitude.

Computer science experiments can also achieve significant quantum advantage

A variety of computer science experiments can still manage to achieve a significant quantum advantage — provided that they don’t rely on fine granularity of phase or probability amplitude, and don’t rely on quantum Fourier transform (QFT) or quantum phase estimation (QPE).

Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) can achieve dramatic quantum advantage

Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) is an example of such a computer science experiment: it achieved dramatic quantum advantage, but it is not representative of practical real-world commercial, business, or industrial applications.

What about quantum computational chemistry using quantum phase estimation (QPE)? Maybe, sometimes, it depends

Early on in quantum computing, quantum computational chemistry was quickly identified as a great candidate application. Quantum phase estimation (QPE) was quickly identified as an ideal computational approach, built on the foundation of quantum Fourier transform (QFT). This is still a widely held belief, but it is not considered practical on near-term quantum computers. Roughly the same feasibility tiers apply to the bits of precision for quantum phase estimation (a small precision sketch follows the list):

  1. 10 to 12 bits. A slam dunk.
  2. 13 to 14 bits. Should also be a slam dunk.
  3. 15 to 17 bits. Less of a slam dunk.
  4. 18 to 20 bits. Should be fine.
  5. 21 to 24 bits. May be fine as well, but certainly not a slam dunk.
  6. 25 to 31 bits. Quickly becoming problematic, but the breakpoint is not clear.
  7. 32 bits. Likely to be problematic.
  8. Beyond 32 bits. Likely out of reach.
  9. 48 bits and beyond. Clearly out of reach.
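
As a rough sketch of the precision arithmetic for quantum phase estimation, with purely illustrative numbers of my own choosing (roughly a 1 Hartree spectral range and chemical accuracy of about 1.6 milliHartree):

```python
import math

def qpe_bits_needed(energy_range: float, target_accuracy: float) -> int:
    """k bits of QPE phase divide the energy range into 2**k bins, so the bin
    size must be no larger than the target accuracy."""
    return math.ceil(math.log2(energy_range / target_accuracy))

# Illustrative numbers only: ~1 Hartree spectral range, chemical accuracy ~1.6 mHa.
print(qpe_bits_needed(1.0, 1.6e-3), "bits, before any extra bits for confidence")
```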

Still open questions as to the quality of the final quantum Fourier transform result even if a certain number of bits is supported

There are still open questions as to exactly what quality the final quantum Fourier transform result will have even if a certain number of bits is supported. 66%? 70%? 75%? 80%? 90%? 95%? Any better than 95%?

Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude

Quantum algorithms need to be dynamically and automatically scalable so that they can deal with varying amounts of input and input parameters without requiring a complete redesign every time the input or an input parameter changes. But this requires clarity and predictability about the behavior of the quantum computer as input data and input parameters are varied.

How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? Unknown!

I’m not yet an expert in quantum error correction (QEC), but I haven’t seen any discussion of error detection or correction for phase or probability amplitude, which are continuous values rather than discrete binary 0 and 1. It’s not clear that QEC schemes deal with this at all.

Quantum amplitude amplification has a similar issue to phase granularity

This paper takes pains to keep clear that phase and probability amplitude are roughly in the same boat with regard to the issue of fine granularity. This issue shows up in quantum amplitude estimation (QAE) the same as it does in quantum phase estimation (QPE). It also shows up in quantum amplitude amplification, which ultimately depends to some extent on fine granularity of probability amplitude, even if not to the same extreme extent as it does with quantum Fourier transform.

Simulators need to be configured to reflect granularity of phase and probability amplitude

Classical quantum simulators need to be configured to match target real quantum computers as closely as possible so that simulation results are a roughly accurate predictor of the results on the target real quantum computer. This includes the granularity of phase and probability amplitude.
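
One way a simulator might model this, sketched with a hypothetical helper of my own (not an API of any existing simulator):

```python
import numpy as np

def quantized_phase_gate(theta: float, dac_bits: int) -> np.ndarray:
    """Hypothetical simulator helper: snap the requested rotation angle to a
    2**dac_bits grid before building the phase gate, to mimic limited phase
    granularity on the target real quantum computer."""
    step = 2 * np.pi / (2 ** dac_bits)
    snapped = round(theta / step) * step
    return np.array([[1.0, 0.0],
                     [0.0, np.exp(1j * snapped)]])

# A 30-qubit QFT asks for 2*pi/2**30; a 16-bit "DAC" model rounds it to zero (identity gate).
print(quantized_phase_gate(2 * np.pi / 2 ** 30, dac_bits=16))
```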

No hands-on testing or empirical data

My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.

Need for benchmarking

Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude.

Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications

I recently wrote a proposal for a label which summarizes the capabilities or requirements for a quantum computer, algorithm, and application. Granularity for phase and probability amplitude is indicated in that label as follows (a hypothetical example of such a label follows the bullets):

  • Fine granularity of phase and probability amplitude. Number of gradations. Generally approximated as a power of ten or a power of two — 100, 1,000, one million, a billion, or 2¹⁰, 2²⁰, 2³⁰. Generally needed for quantum Fourier transform (QFT) precision.
  • Quantum Fourier transform (QFT) precision. How many qubits can be transformed and achieve high quality results.
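
For illustration only, such label entries might look something like the following; the field names are my own invention, not an established standard:

```python
# Field names here are my own illustration of such a label, not an established standard.
quantum_capability_label = {
    "phase_gradations": 2 ** 20,                   # fine granularity of phase
    "probability_amplitude_gradations": 2 ** 20,   # and of probability amplitude
    "qft_precision_qubits": 20,                    # widest QFT expected to give high-quality results
}
print(quantum_capability_label)
```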

Clarion call: All silos on deck!

Time’s up! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z, and to ensure that the issue either gets resolved, fully or partially, or gets properly documented as a limitation if it can’t be fully resolved.

Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away

To be clear, limitations on the granularity of phase and probability amplitude are not simply a near-term limitation which will be with us for just a couple of years and then magically vanish as technology advances over the next five to ten years. This is a limitation based on the physical limits of building physical devices.

48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing

Too many people and quantum computer vendors are focusing on and obsessing over pursuing hundreds, thousands, millions, and even billions of qubits, but in my view that is a silly waste of effort and resources, and a distraction if qubit quality and connectivity are not sufficient to effectively utilize those qubits in practical quantum algorithms.

  1. Priority is quality over quantity.
  2. Full connectivity. But that’s a big leap from where we are today.
  3. 3.25 to 4 nines of qubit fidelity.
  4. Likely will still require some degree of error mitigation.
  5. 20 bits of granularity for phase. One million gradations. A 20-bit DAC (see the sketch after this list).
  6. Support a 20-bit quantum Fourier transform.
  7. Computational leverage of one million to one — 2²⁰.
  8. Of course we want to get way beyond this ASAP, but we’re not even close to this yet.
  9. A solid goal for two to three years.
  10. A credible and palpable goal to aim at.
  11. Doesn’t rely on the distant fantasy of full quantum error correction.
  12. Should be able to stretch classical simulation to 48 qubits as well, which will enable debugging of 48-qubit quantum algorithms.
  13. Maybe a few more bits of granularity could be achieved in subsequent years, or not. But focus on a goal that seems achievable and then build on it as opportunities arise in future years.
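
The arithmetic behind items 5 through 7, spelled out as a trivial sketch:

```python
dac_bits = 20                          # proposed 20-bit DAC
gradations = 2 ** dac_bits             # 1,048,576 gradations of phase
qft_qubits = dac_bits                  # under this paper's conjecture, a 20-qubit QFT
leverage = 2 ** qft_qubits             # computational leverage of roughly a million to one
print(f"{gradations:,} gradations -> {qft_qubits}-qubit QFT -> {leverage:,}:1 leverage")
```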

But start with an upgraded 27-qubit quantum computer

A quantum computer with 48 fully-connected near-perfect qubits is the clear goal to have, but even it may be a bit too ambitious. It may be better to start with an upgrade to current 27-qubit designs.

Maybe even a 36-qubit stepping stone

Even once a 27-qubit design has been upgraded and proved to support a ten to twelve-qubit quantum Fourier transform, 48 qubits may still seem too ambitious, and the underlying technology just not quite ready. Maybe 36 fully-connected near-perfect qubits would be a reasonable additional stepping-stone on the way to 48 qubits.

We’re stuck in quantum toyland with no prospect of escape

I hate to have to say this, but the prognosis is rather gloomy, at least relative to past extravagant promises for quantum computing. Smaller, toy-like quantum algorithms and applications are within the realm of possibility, but much beyond toy-like is beyond the realm of reasonable possibility.

But maybe what seems toy-like to some may actually be great for others

Toy-like is relative.

Once again, my conjecture is still only tentative and may yet be proven wrong, but I’m not holding my breath

Just to reiterate: I am indeed reasonably convinced that my conjecture is true. I concede that there is some non-zero chance that I’m wrong, so I don’t consider my conjecture to be absolutely final or definitive yet. But I’m not holding my breath. Okay, maybe I am.

Never say never

Even if my conjecture is true for existing approaches to quantum computing, I don’t want to give up hope that some alternative approach to quantum computing or some more powerful algorithmic building block can be found.

Now what? What’s next?

We may be screwed. We’ll just have to wait and see.

Focus on more research to potentially discover alternative approaches

There’s no guarantee that research will discover some magical solution, but… you never know.

A golden opportunity for some fresh academic research

I wouldn’t ask commercial vendors of quantum computers to invest significant capital in research in this area, but this is a great opportunity for academic researchers to do some high-value, deeper, fundamental research. Especially in terms of alternative architectures, approaches to algorithms, and programming models — and doing a better job of characterizing and benchmarking quantum computers.

My original proposal for this topic

For reference, here is the original proposal I had for this topic. It may have some value for some people wanting a more concise summary of this paper.

  • Is lack of fine granularity of phase and probability amplitude the fatal Achilles Heel which dooms quantum computing to severely limited utility? Continuous values, but with unclear precision or sense of determinism. An intense need for clarity — too vague and undocumented at the moment. It would seem to limit quantum Fourier transform (QFT), which would limit quantum phase estimation (QPE), which would limit quantum computational chemistry and Shor’s factoring algorithm. And limit quantum amplitude estimation as well. And limit order finding (which is needed for Shor’s factoring algorithm) as well.

Summary and conclusions

  1. The focus of this paper is solely on gate-based quantum computers. No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
  2. The focus of this paper is solely on two-level quantum systems (qubits). Whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar is unknown.
  3. Granularity and gradations of continuous values. Any range of continuous values, such as a phase angle or probability amplitude, theoretically contains an infinity of discrete values, but in practice there will only be a finite number of discrete values or gradations in any range. The range has a granularity — the minimum numeric spacing between any two gradations or discrete values. A range might be fine-grained or coarse-grained — fine granularity with many gradations, or coarse granularity with few gradations.
  4. My conjecture: Lack of fine granularity of phase and probability amplitude severely limits the utility of quantum computing. Severely limits quantum advantage.
  5. I can’t absolutely prove this, but I am highly confident.
  6. I offer it as a conjecture, but I also assert it as a fact.
  7. I sincerely wish it were false, but I am very afraid that it is true.
  8. Dramatic quantum advantage — delivering on the full promise of quantum computing. The goal, but in many or even most cases we won’t achieve it.
  9. Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing. In many or most cases it will be the best we can do.
  10. Exponential speedup — the great promise of quantum computing, but can it really be delivered? Only to a limited degree, not as limitless as promised.
  11. The essential root of the problem is that quantum information is a hybrid of discrete digital 0 and 1 which can be superimposed and entangled, plus phase and probability amplitude which are continuous values, at least in theory, but an infinity of continuous values cannot be practically implemented on a real physical device.
  12. The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2^n, which is a very large number for even n of moderate size (40 to 50 qubits).
  13. The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.
  14. The bits of resolution of the DAC (digital to analog converter) hardware is likely the key limiting factor.
  15. The DAC limits the number of discrete analog voltage levels that can be applied to rotate the quantum state of a qubit.
  16. And the number of discrete analog voltage levels is likely also fairly limited. Not billions, trillions, quadrillions, and a lot more as quantum computing would seem to require.
  17. Analog signals likely also severely limit the granularity of phase and probability amplitude.
  18. Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude.
  19. The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude.
  20. The qubit itself limits the granularity of phase and probability amplitude.
  21. Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude.
  22. Some ultimate limit at the Planck level as well as quantum uncertainty itself.
  23. General uncertainties around all factors. Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity.
  24. It’s not clear which of the factors is the more limiting factor.
  25. I personally don’t know enough about classical digital and analog electronics to figure all of this out.
  26. Do the same limits apply to both phase and probability amplitude? Likely but not certain.
  27. The number of gradations can’t be greater than the DAC resolution, by definition.
  28. Any technique which relies on fine granularity of phase or probability amplitude will be severely limited in its quantum advantage.
  29. Algorithms and applications can still function, but are unlikely to achieve dramatic quantum advantage. Only a limited degree of exponential speedup, not as limitless as promised.
  30. A more modest advantage is more likely, up to 1,000X or maybe even 1,000,000X. But not the kind of dramatic quantum advantage that has been talked about and promised for quantum computing. Not one quadrillion X or beyond.
  31. How did this all happen? Too many silos with gaps between them.
  32. The reason that people don’t currently see a problem is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass, the problem will become manifest.
  33. When will we hit the wall and start seeing the problem? Hard to say for sure. Not in the next year to 18 months, but likely within two to three years.
  34. Quantum Fourier transform is critically dependent on fine granularity of phase. This dramatically limits its quantum advantage.
  35. The root of the problem is that there is a divisor of 2^n in the exponential term of the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotations required for quantum Fourier transform, which only works for smaller values of n, not for n greater than 32, or maybe even 20 to 24.
  36. This issue is not specific to any particular qubit technology. There may be differences for each qubit technology, particularly on the analog signal front, but the DAC issue and general limit on granularity will be common to all qubit technologies.
  37. Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase. Will encounter the same limitations on quantum advantage.
  38. Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage.
  39. Quantum Fourier transform (QFT) is the most powerful algorithmic building block available.
  40. Many applications rely on quantum Fourier transform to achieve dramatic quantum advantage.
  41. Nothing more powerful than quantum Fourier transform.
  42. Maybe there are alternatives, but…
  43. No obvious alternative to quantum Fourier transform.
  44. No obvious alternative implementations of quantum Fourier transform.
  45. Some physics simulations may still be possible. If they rely on nearest neighbor connectivity but not fine granularity of phase.
  46. Some quantum computational chemistry computations using quantum phase estimation (QPE) may still be possible. It will vary based on how many bits of precision are needed.
  47. Some computer science experiments may still be possible. Again if they don’t rely on much more than nearest neighbor connectivity or on fine granularity of phase.
  48. No obvious alternative architectures for quantum computing that address the concern of this paper. Maybe there could be some, but it seems unlikely.
  49. Shor’s factoring algorithm to crack large public encryption keys? No way. Period. Not now. Not ever. Can work for relatively small numbers, but not for hundreds or thousands of bits. Put simply, an exponential divisor of 2⁴⁰⁹⁶ absolutely won’t work for a real DAC and real analog signals. This extreme view relies on the conjecture of this paper being true — and I believe it is.
  50. Maybe someday someone will invent a new factoring algorithm which can actually work on a real quantum computer for 4096-bit public encryption keys, but it won’t be able to use quantum Fourier transform or rely on fine granularity of phase or probability amplitude.
  51. Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude.
  52. How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? It’s not clear that they do deal with it at all.
  53. Quantum amplitude amplification has a similar issue to phase granularity.
  54. Simulators need to be configured to reflect granularity of phase and probability amplitude.
  55. No hands-on testing or empirical data. My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.
  56. Need for benchmarking. Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude, as well as benchmarking of quantum Fourier transform (QFT), quantum phase estimation (QPE), and quantum amplitude estimation (QAE).
  57. Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications. The number of gradations supported. As well as the qubits supported for quantum Fourier transform (QFT).
  58. Clarion call: All silos on deck! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z and assure that either the issue gets resolved, fully or partially, and properly documented as a limitation if it can’t be fully resolved.
  59. Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away.
  60. 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing. With the potential for a 20-qubit quantum Fourier transform which could provide a million to one computational advantage.
  61. But start with an upgraded 27-qubit quantum computer. Get the qubit fidelity and connectivity up to the level where fine granularity of phase is the gating factor for a ten to twelve-qubit quantum Fourier transform.
  62. Maybe even a 36-qubit stepping stone. If 48 qubits is still too ambitious. Support 15 to 16-qubit quantum Fourier transform.
  63. It does indeed appear that we are stuck in quantum toyland, with no prospect of escape.
  64. But maybe what seems toy-like to some may actually be great for others.
  65. Once again, my conjecture is still only tentative and may yet be proven wrong. But I’m not holding my breath. Okay, maybe I am.
  66. I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve the issue with something approaching absolute 100% definitive certainty.
  67. Now what? What’s next? We may be screwed. We’ll just have to wait and see, and lower our expectations for many applications, to fractional quantum advantage. And focus on more research to discover alternative approaches.
  68. Focus on more research to potentially discover alternative approaches. There’s no guarantee that research will discover some magical solution, but… you never know. And generally research is a good thing, no matter what.
