Is Lack of Fine Granularity of Phase and Probability Amplitude the Fatal Achilles Heel Which Dooms Quantum Computing to Severely Limited Utility?
I phrase it as a question, but it does seem unavoidable — quantum computing requires quantum Fourier transform (QFT) or comparable techniques which rely on a fine granularity of phase angles that does not currently exist in near-term quantum computers and is unlikely to ever exist in any real quantum computer. Limited granularity of phase will limit the precision of results, hence limiting the utility of quantum computers. It won’t mean that quantum computers can’t be used or won’t be useful or beneficial or advantageous, but it will mean that a large portion of the quantum advantage over classical computers will be much more limited than what has been promised. Generally we will be able to achieve only fractional quantum advantage, not full dramatic quantum advantage — maybe 2X, 4X, 10X, 100X, 1,000X, or even 1,000,000X over a classical computer, but not one quadrillion X and beyond. This informal paper will explore this issue.
I am unable to state with absolute 100% definitive certainty that there is an unsolvable problem here, but I am very much afraid that my concerns are warranted. I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve it with something approaching absolute 100% definitive certainty.
Caveat:
- The focus of this paper is solely on gate-based quantum computers.
- No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
- The focus of this paper is solely on two-level quantum systems (qubits).
- No claim is made as to whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar.
Topics discussed in this paper:
- In a nutshell
- The focus of this paper is solely on gate-based quantum computers
- The focus of this paper is solely on two-level quantum systems (qubits)
- Granularity and gradations of continuous values
- The essence of the problem
- The essence of the problem, more simply
- Dramatic quantum advantage — delivering on the full promise of quantum computing
- Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing
- Exponential speedup — the great promise of quantum computing, but can it really be delivered?
- Background
- What is a quantum Fourier transform (QFT)?
- The problem
- My conjecture
- Implications of my conjecture
- I can’t prove my conjecture, but technical specifications don’t disprove it either
- I’m at least half-convinced that my conjecture is true
- To be clear, my conjecture is still only tentative and may yet be proven wrong
- Utility: quantum parallelism and quantum advantage
- How severely limited will utility be?
- People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
- Beware of Quantum Algorithms Dependent on Fine Granularity of Phase
- Quantum Fourier transform is critically dependent on fine granularity of phase
- Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase
- Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage
- What is a DAC (Digital to Analog Converter)?
- Where is the DAC?
- The DAC effectively limits the granularity of phase and probability amplitude
- Analog signals likely also severely limit the granularity of phase and probability amplitude
- Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude
- The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude
- The qubit itself limits the granularity of phase and probability amplitude
- Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude
- Some ultimate limit at the Planck level as well as quantum uncertainty itself
- General uncertainties around all factors
- It’s not clear which of the factors is the more limiting factor
- I personally don’t know enough about classical digital and analog electronics to figure all of this out
- Gate execution and qubit measurement
- Fidelity of SWAP gates
- Do the same limits apply to both phase and probability amplitude? Likely but not certain
- How did this all happen? Too many silos with gaps between them
- People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
- When will we hit the wall and start seeing the problem? Maybe two to three years
- Quantum information — discrete binary and continuous values
- Some alternatives
- Are there any reasonable alternatives to quantum Fourier transform?
- What might be more powerful than quantum Fourier transform?
- No, variational methods don’t show any promise of delivering any dramatic quantum advantage
- Are there any reasonable alternative approaches to implementation of quantum Fourier transform?
- Might an alternative quantum computing architecture or programming model avoid my concern?
- Is a radically different technology and architecture, and maybe even programming model required to get past my concern?
- Might placing the classical electronics in the cryostat reduce interference to enable a few more bits of precision?
- Bits of precision vs. qubits of precision
- Currently this limitation is a non-issue since current limits on qubit fidelity and connectivity prevent running into these limits
- The exponential math issue
- Inverse quantum Fourier transform
- Euler’s formula
- Two pi conversion factor from normalized phase to phase angle in radians
- Phase angle of two superposed basis states
- Mapping phase angle to the input range for the DAC
- Limited precision of entries in a unitary transform matrix
- Total qubits vs. quantum Fourier transform precision
- Pulses to control qubits
- Pulse control
- No hints about pulse precision in IBM Qiskit Pulse documentation and tutorials
- Notes on IBM pulse control
- How many qubits is reasonable to expect for a quantum Fourier transform?
- 10–12 qubits seems like a slam dunk, maybe 14–16 qubits
- 16 to 18 qubits is likely a slam dunk for two to three years from now
- 20 or 24 qubits as a reasonable expectation for what we might have in two to three years
- 28 to 32 qubits seems out of reach, but who really knows
- 36 to 40 qubits is clearly beyond the realm of what we can expect, but it’s worth contemplating as a theoretical matter
- Nobody should be expecting quantum Fourier transform for more than 40 qubits
- 44 to 48 qubits are well beyond the realm of expectation for what a quantum Fourier transform could handle
- Demand transparency — transparency is mandatory
- Insist or demand that all limitations be clearly documented
- Transparency is needed everywhere
- No transparency on internal electronics for digital and analog circuitry
- Transparency needed for number of gradations for phase and probability amplitude
- The number of gradations can’t be greater than the DAC resolution, by definition
- Related circuitry and noise may reduce the absolute resolution of the DAC
- An open source design for a quantum computer would be helpful
- Quantum advantage must be fully disclosed and adequately discussed
- What’s your quantum advantage?
- What other approaches are there if quantum Fourier transform isn’t available?
- Are there ANY known opportunities to address the root issues? Nope
- Isn’t there ANY pathway to fine granularity of phase and probability amplitude? Nope.
- Isn’t there ANY pathway to quantum Fourier transform for over 50 qubits? Nope.
- What about Shor’s factoring algorithm? Sorry, but it will only work for small numbers, not very large numbers
- Won’t banding or approximate quantum Fourier transform (AQFT) resolve the issue for Shor’s factoring algorithm? No, not really
- What about derivative algorithms of Shor’s factoring algorithm? Not if they still rely on quantum Fourier transform
- Large-scale physics simulations may still be feasible even without support for quantum Fourier transform
- Computer science experiments can also achieve significant quantum advantage
- Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) can achieve dramatic quantum advantage
- What about quantum computational chemistry using quantum phase estimation (QPE)? Maybe, sometimes, it depends
- Still open questions as to the quality of the final quantum Fourier transform result even if a certain number of bits is supported
- Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude
- How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? Unknown!
- Quantum amplitude amplification has similar issue to phase granularity
- Simulators need to be configured to reflect granularity of phase and probability amplitude
- No hands-on testing or empirical data
- Need for benchmarking
- Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications
- Clarion call: All silos on deck!
- Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away
- 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing
- But start with an upgraded 27-qubit quantum computer
- Maybe even a 36-qubit stepping stone
- We’re stuck in quantum toyland with no prospect of escape
- But maybe what seems toy-like to some may actually be great for others
- Once again, my conjecture is still only tentative and may yet be proven wrong, but I’m not holding my breath
- Never say never
- Now what? What’s next?
- Focus on more research to potentially discover alternative approaches
- A golden opportunity for some fresh academic research
- My original proposal for this topic
- Summary and conclusions
In a nutshell
- The focus of this paper is solely on gate-based quantum computers. No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
- The focus of this paper is solely on two-level quantum systems (qubits). Whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar is unknown.
- Granularity and gradations of continuous values. Any range of continuous values, such as a phase angle or probability amplitude, theoretically contains an infinity of discrete values, but in practice there will only be a finite number of discrete values or gradations in any range. The range has a granularity — the minimum numeric spacing between any two gradations or discrete values. A range might be fine-grained or coarse-grained — fine granularity with many gradations, or coarse granularity with few gradations.
- My conjecture: Lack of fine granularity of phase and probability amplitude severely limits the utility of quantum computing. Severely limits quantum advantage.
- I can’t absolutely prove this, but I am highly confident.
- I offer it as a conjecture, but I also assert it as a fact.
- I sincerely wish it were false, but I am very afraid that it is true.
- Dramatic quantum advantage — delivering on the full promise of quantum computing. The goal, but in many or even most cases we won’t achieve it.
- Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing. In many or most cases it will be the best we can do.
- Exponential speedup — the great promise of quantum computing, but can it really be delivered? Only to a limited degree, not as limitless as promised.
- The essential root of the problem is that quantum information is a hybrid of discrete digital 0 and 1, which can be superposed and entangled, plus phase and probability amplitude, which are continuous values, at least in theory, but an infinity of continuous values cannot be practically implemented on a real physical device.
- The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2^n, which is a very large number even for n of moderate size (40 to 50 qubits).
- The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.
- The bit resolution of the DAC (digital to analog converter) hardware is likely the key limiting factor.
- The DAC limits the number of discrete analog voltage levels that can be applied to rotate the quantum state of a qubit.
- And the number of discrete analog voltage levels is likely also fairly limited. Not the billions, trillions, quadrillions, and more that quantum computing would seem to require.
- Analog signals likely also severely limit the granularity of phase and probability amplitude.
- Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude.
- The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude.
- The qubit itself limits the granularity of phase and probability amplitude.
- Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude.
- Some ultimate limit at the Planck level as well as quantum uncertainty itself.
- General uncertainties around all factors. Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity.
- It’s not clear which of the factors is the more limiting factor.
- I personally don’t know enough about classical digital and analog electronics to figure all of this out.
- Do the same limits apply to both phase and probability amplitude? Likely but not certain.
- The number of gradations can’t be greater than the DAC resolution, by definition.
- Any technique which relies on fine granularity of phase or probability amplitude will be severely limited in its quantum advantage.
- Algorithms and applications can still function, but are unlikely to achieve dramatic quantum advantage. Only a limited degree of exponential speedup, not as limitless as promised.
- A more modest advantage is more likely, up to 1,000X or maybe even 1,000,000X. But not the kind of dramatic quantum advantage that has been talked about and promised for quantum computing. Not one quadrillion X or beyond.
- How did this all happen? Too many silos with gaps between them.
- The reason that people don’t currently see a problem is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass, the problem will become manifest.
- When will we hit the wall and start seeing the problem? Hard to say for sure. Not in the next year to 18 months, but likely within two to three years.
- Quantum Fourier transform is critically dependent on fine granularity of phase. This dramatically limits its quantum advantage.
- The root of the problem is that there is a divisor of 2^n in the exponential term in the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotation required for quantum Fourier transform. This only works for smaller values of n, not for n greater than 32, or maybe even 20 to 24.
- This issue is not specific to any particular qubit technology. There may be differences for each qubit technology, particularly on the analog signal front, but the DAC issue and general limit on granularity will be common to all qubit technologies.
- Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase. Will encounter the same limitations on quantum advantage.
- Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage.
- Quantum Fourier transform (QFT) is the most powerful algorithmic building block available.
- Many applications rely on quantum Fourier transform to achieve dramatic quantum advantage.
- Nothing more powerful than quantum Fourier transform.
- Maybe there are alternatives, but…
- No obvious alternative to quantum Fourier transform.
- No obvious alternative implementations of quantum Fourier transform.
- Some physics simulations may still be possible. If they rely on nearest neighbor connectivity but not fine granularity of phase.
- Some quantum computational chemistry computations using quantum phase estimation (QPE) may still be possible. It will vary based on how many bits of precision are needed.
- Some computer science experiments may still be possible. Again if they don’t rely on much more than nearest neighbor connectivity or on fine granularity of phase.
- No obvious alternative architectures for quantum computing that address the concern of this paper. Maybe there could be some, but it seems unlikely.
- Banding for quantum Fourier transforms or Coppersmith’s approach of approximate quantum Fourier transform (AQFT) won’t overcome the problem. Those two approaches are more effective at minimizing computation (gate count and circuit size) than at achieving greater precision. Even if they do reduce the magnitude of the problem, they are still unlikely to overcome the precision limitations of DAC hardware.
- Shor’s factoring algorithm to crack large public encryption keys? No way. Period. Not now. Not ever. Can work for relatively small numbers, but not for hundreds or thousands of bits. Put simply, an exponential divisor of 2⁴⁰⁹⁶ absolutely won’t work for a real DAC and real analog signals. This extreme view relies on the conjecture of this paper being true — and I believe it is.
- Maybe someday someone will invent a new factoring algorithm which can actually work on a real quantum computer for 4096-bit public encryption keys, but it won’t be able to use quantum Fourier transform or rely on fine granularity of phase or probability amplitude.
- Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude.
- How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? It’s not clear that it deals with it at all.
- Quantum amplitude amplification has similar issue to phase granularity.
- Simulators need to be configured to reflect granularity of phase and probability amplitude.
- No hands-on testing or empirical data. My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.
- Need for benchmarking. Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude, as well as benchmarking of quantum Fourier transform (QFT), quantum phase estimation (QPE), and quantum amplitude estimation (QAE).
- Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications. The number of gradations supported. As well as the qubits supported for quantum Fourier transform (QFT).
- Clarion call: All silos on deck! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z, and to ensure that the issue either gets resolved, fully or partially, or gets properly documented as a limitation if it can’t be fully resolved.
- Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away.
- 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing. With the potential for a 20-qubit quantum Fourier transform which could provide a million to one computational advantage.
- But start with an upgraded 27-qubit quantum computer. Get the qubit fidelity and connectivity up to the level where fine granularity of phase is the gating factor for a ten to twelve-qubit quantum Fourier transform.
- Maybe even a 36-qubit stepping stone. If 48 qubits is still too ambitious. Supporting a 15 to 16-qubit quantum Fourier transform.
- It does indeed appear that we are stuck in quantum toyland, with no prospect of escape.
- But maybe what seems toy-like to some may actually be great for others.
- Once again, my conjecture is still only tentative and may yet be proven wrong. But I’m not holding my breath. Okay, maybe I am.
- I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve the issue with something approaching absolute 100% definitive certainty.
- Now what? What’s next? We may be screwed. We’ll just have to wait and see, and lower our expectations for many applications, to fractional quantum advantage. And focus on more research to discover alternative approaches.
- Focus on more research to potentially discover alternative approaches. There’s no guarantee that research will discover some magical solution, but… you never know. And generally research is a good thing, no matter what.
The focus of this paper is solely on gate-based quantum computers
The focus of this paper is solely on gate-based quantum computers.
No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
It could well be that alternative approaches to quantum computing such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems don’t have the issue detailed in this paper or are not impacted as severely.
But… it could also be the case that alternative approaches might be in even worse shape, being even more limited.
The bottom line is that the conjecture and discussion of this paper should be construed as applying solely or specifically to gate-based quantum computers.
The focus of this paper is solely on two-level quantum systems (qubits)
Although the main focus of this paper is on two-level quantum systems (qubits), it is possible that quantum systems with more than two levels (e.g., qutrits or qudits) might be impacted somewhat differently than the impact on two-level systems described in this paper. I don’t know of or even suspect differences, but I simply do not know either way and I’m not making any representations either way.
Granularity and gradations of continuous values
Although the binary 0 and 1 values of a bit or qubit are discrete, other aspects of the quantum state of a qubit are continuous values, namely phase and probability amplitude, which are nominally real numbers between 0.0 and 1.0 (when normalized). Since they are continuous, they are not discrete — in theory there is an infinity of possible values in the range.
Any range of continuous values theoretically contains an infinity of discrete values, but in practice, with real hardware, there will only be a finite number of discrete values or gradations in any range.
The range has a granularity — the minimum numeric spacing between any two gradations or discrete values.
A range might be fine-grained or coarse-grained:
- Fine-grained. Fine granularity with many gradations, many discrete values.
- Coarse-grained. Coarse granularity with few gradations, few discrete values.
The essence of the problem
Oversimplifying, the problem comes from a sequence of processing steps:
- Quantum Fourier transform is essential for many of the key applications of quantum computing.
- Quantum Fourier transform requires very fine-grained rotations of qubit state.
- Fine-grained rotation of qubit state requires a fine-grained analog signal.
- A digital to analog converter (DAC) is used to convert a discrete digital level to an analog voltage level.
- A DAC has a very limited resolution. From 8 to 32 bits or so.
- This limits how fine-grained the qubit rotation can be. The analog signal also limits granularity — the number of gradations or discrete levels.
- The digital qubit rotation is derived from an exponential term in the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotation.
- The exponential term has a divisor which is 2^n for an n-bit quantum Fourier transform.
- That’s fine and easy for small values of n, but becomes harder for moderate sizes of n, and impossible for large values of n. Values of n below 14 or so are easy. Values of n up to 18 are harder but doable. Values of n from 20 to 24 get harder. Values of n above 24, 32, 40, 50, and higher are even harder and eventually impossible. Values of n in the hundreds, thousands, or millions are outright impossible (see the sketch after this list).
- In short, a quantum Fourier transform cannot handle data wider than about 20 to 32 qubits.
- Other techniques must be used for larger data, but can’t achieve the dramatic quantum advantage of quantum Fourier transform for larger data.
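To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not based on any vendor’s specifications) comparing the finest rotation angle that an n-qubit quantum Fourier transform needs against the smallest step that a hypothetical k-bit DAC could represent, under the simplifying assumption that the DAC’s full-scale range maps linearly onto one full cycle of phase.

```python
# Minimal sketch (illustration only, not vendor data): compare the smallest
# controlled-phase rotation an n-qubit QFT needs (2*pi / 2**n) against the
# smallest step a k-bit DAC can represent, assuming (hypothetically) that the
# DAC's full-scale range maps linearly onto one full cycle of phase (2*pi).
import math

def smallest_qft_rotation(n_qubits: int) -> float:
    """Finest rotation angle (radians) used by an n-qubit QFT."""
    return 2 * math.pi / (2 ** n_qubits)

def dac_phase_step(dac_bits: int) -> float:
    """Coarsest distinguishable phase step (radians) for a k-bit DAC."""
    return 2 * math.pi / (2 ** dac_bits)

for n in (10, 16, 20, 24, 32, 40, 50):
    needed = smallest_qft_rotation(n)
    ok16 = needed >= dac_phase_step(16)   # hypothetical 16-bit DAC
    ok32 = needed >= dac_phase_step(32)   # hypothetical 32-bit DAC
    print(f"n={n:2d}  finest QFT rotation={needed:.3e} rad  "
          f"16-bit DAC ok: {ok16}  32-bit DAC ok: {ok32}")
```

Once n exceeds the DAC’s bit resolution, the required rotation falls below a single DAC step, which is the heart of the problem sketched in the list above.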
The essence of the problem, more simply
- The essential root of the problem is that quantum information is a hybrid of discrete digital 0 and 1, which can be superposed and entangled, plus phase and probability amplitude, which are continuous values, at least in theory, but an infinity of continuous values cannot be practically implemented on a real physical device.
- The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2^n, which is a very large number even for n of moderate size (40 to 50 qubits).
- The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.
Dramatic quantum advantage — delivering on the full promise of quantum computing
The whole point of using a quantum computer is to gain a quantum advantage over a solution which is implemented by a classical computer. A dramatic performance advantage.
The real promise of quantum computing is not just some speedup in performance, but an exponential speedup, a dramatic quantum advantage over classical computers.
If a quantum solution is not delivering an exponential speedup, a dramatic quantum advantage, then it is not delivering on the full promise of quantum computing.
I personally consider the basic promise for quantum advantage to be dramatic quantum advantage, which I estimate at the computational leverage you can achieve with 50 fully-entangled qubits, which gives you 2⁵⁰ or one quadrillion quantum states — effectively a one quadrillion X performance advantage.
And that’s just the starting point — the quantum advantage would rise dramatically from there as you get to 64, 72, 80, 96, 128, and 256 qubits and more.
For more on my conception of dramatic quantum advantage, see my paper:
- What Is Dramatic Quantum Advantage?
- https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c
Such dramatic quantum advantage would definitely be far beyond the reach of a quantum computer lacking fine granularity of phase and probability amplitude.
Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing
If a quantum solution is only delivering some performance advantage but far short of a dramatic quantum advantage, I call that a fractional quantum advantage. That may be fine for some or even many applications but it simply isn’t delivering on the full promise of quantum computing.
I actually conceptualize more limited quantum advantage, particularly minimal and substantial quantum advantage:
- Minimal quantum advantage. A 1,000X advantage over a classical computer.
- Substantial or significant quantum advantage. A 1,000,000X advantage over a classical solution.
For more on those conceptions, what I call fractional quantum advantage, see my paper:
- Fractional Quantum Advantage — Stepping Stones to Dramatic Quantum Advantage
- https://jackkrupansky.medium.com/fractional-quantum-advantage-stepping-stones-to-dramatic-quantum-advantage-6c8014700c61
Exponential speedup — the great promise of quantum computing, but can it really be delivered?
Even the most basic introduction to quantum computing hammers home the essential promise of quantum computing — exponential speedup, that with only n qubits you can effectively perform 2^n computations in parallel.
But can this promise really be delivered?
Only to a limited degree, not as limitless as promised.
Maybe, in some cases, niche cases, it still can. There are exceptions to most rules.
And for smallish cases, yes, but the exponential benefit is relatively small as well for small n, such as n less than 12.
But in general, for most cases, especially those at production-scale, for most people, for most of the time, the overall thesis of this paper is that:
- No, exponential speedup cannot be delivered as a general proposition for most cases, especially those at production scale, for most people most of the time.
Background
Many of the advanced programming techniques of quantum computing are expected to achieve their quantum advantage by exploiting quantum Fourier transform (QFT), or more advanced techniques such as quantum phase estimation (QPE) or quantum amplitude estimation (QAE) which in turn rely on quantum Fourier transform. But this key and very powerful algorithmic building block relies heavily on fine granularity of phase and probability amplitude, and in particular on a degree of fineness which may not even exist in reality.
A quantum computer uses a combination of classical digital and analog circuitry to control the quantum state of qubits.
The quantum state of qubits can be rotated by arbitrary amounts about all three axes — X, Y, and Z.
Programming of quantum computers, via quantum circuits, is a combination of these rotations with controlled interactions between pairs of qubits.
The fundamental basis states of qubits are simple binary 0 and 1, but those basis states can be superposed within a qubit and entangled between qubits.
The difficulty comes with phase and probability amplitude which are continuous values rather than those two binary basis states.
In theory, a continuous value would be an infinity of possible values, but practical considerations dramatically limit the number of possible values in that otherwise continuous range.
The crux of the problem is the conversion from classical discrete digital values to classical continuous analog values and vice versa.
These conversions are known as digital to analog conversion, otherwise known as DAC, and analog to digital conversion, otherwise known as ADC.
There are a lot of heavy and consequential engineering tradeoffs with these conversions.
The net result is that there is only a relatively limited set of resolutions for converted values. Most commonly:
- 8-bit. 256 discrete values.
- 10-bit. 1024 discrete values.
- 12-bit. 4096 discrete values.
- 14-bit. 16K discrete values.
- 16-bit. 64K discrete values.
- 20-bit. One million discrete values.
- 24-bit. Sixteen million discrete values.
- 32-bit. Four billion discrete values.
- Beyond 32 bits. To the best of my current knowledge, at present there are no DACs beyond 32 bits.
The net result is that no quantum Fourier transform can utilize more than 32 qubits or so.
And it’s likely that even 24 and possibly 20 qubits cannot be exploited, at least at present and in the near-term or even next two years.
In fact, no quantum computer vendor even documents what precision they support for phase and probability amplitude, so even a 16-qubit quantum Fourier transform might be problematic.
Who knows, maybe advances in classical digital and analog technology might someday support 36 or 40 or even 44 bit digital values. But I seriously doubt — and nobody is talking about — support for 48-bit digital to analog conversion, let alone anything beyond 48 bits.
So, it is unlikely that quantum Fourier transform will ever be able to support quantum computations involving:
- 50 qubits.
- 64 qubits.
- 72 qubits.
- 80 qubits.
- 96 qubits.
- 128 qubits.
- 256 qubits.
- Hundreds of qubits.
- Thousands of qubits.
- 2048 qubits. To crack a 1024-bit encryption key.
- 4096 qubits. To crack a 2048-bit encryption key.
- 8192 qubits. To crack a 4096-bit encryption key.
- Millions of qubits.
For now, when we are projecting out two to five or seven or even ten years, we should not presume that quantum Fourier transforms — and quantum computations in general — will be able to support more than:
- 20 qubits.
- 24 qubits.
- 32 qubits.
And again, even 16 qubits might be problematic.
What is a quantum Fourier transform (QFT)?
In all honesty, quantum Fourier transform (QFT) is a very complex topic and no short summary could do it justice, so I won’t try, other than to make the following brief statement:
- The purpose of a quantum Fourier transform (QFT) is to map or compress (transform) a very large but periodic solution space into a very small integer value.
- Generally, a 2^n solution space is compressed or mapped (transformed) to n qubits.
In classical electronics we commonly use a Fourier transform to transform from the time domain to the frequency domain — the time domain being a very long but periodic waveform and the frequency domain being a very concise frequency distribution histogram. This is a common step in speech and audio processing, for example.
The use cases in quantum computing are very different, but the principle is the same.
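For readers who want to see where those fine rotations come from, here is a minimal textbook-style construction of the QFT circuit using Qiskit’s standard QuantumCircuit API (a sketch for illustration, not any vendor’s internal implementation). Note how the controlled-phase angles shrink geometrically, from pi/2 and pi/4 all the way down to pi/2^(n-1); that is precisely the fine granularity of phase at issue in this paper.

```python
# Textbook-style QFT construction (illustration only), showing where the very
# small controlled-phase rotation angles come from.
from math import pi
from qiskit import QuantumCircuit

def qft_circuit(n: int) -> QuantumCircuit:
    qc = QuantumCircuit(n, name=f"QFT_{n}")
    for target in range(n - 1, -1, -1):
        qc.h(target)
        for control in range(target - 1, -1, -1):
            # Controlled-phase rotation by pi / 2**(target - control)
            qc.cp(pi / 2 ** (target - control), control, target)
    for i in range(n // 2):          # standard bit-reversal swaps at the end
        qc.swap(i, n - 1 - i)
    return qc

print(qft_circuit(4).draw())
print("Finest angle in a 32-qubit QFT:", pi / 2 ** 31, "radians")
```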
For a more thorough treatment, consult the Wikipedia:
- Wikipedia article on Quantum Fourier transform
- https://en.wikipedia.org/wiki/Quantum_Fourier_transform
Or consult the Qiskit textbook tutorial:
And if you really want a truly deep understanding, consult Shor’s original paper on factoring semiprime integers, which relies on quantum Fourier transform, detailed in Section 4:
- Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
- https://arxiv.org/abs/quant-ph/9508027
The problem
We start with the baseline assertion:
- Quantum Fourier transform (QFT) is absolutely essential to realize many or most of the more intensive promises made on behalf of quantum computing in terms of achieving a dramatic quantum advantage in performance over classical computing.
There are plenty of other programming techniques for quantum algorithms, but as far as I can tell, none of them approach the raw power offered by quantum Fourier transform. And the techniques which look more powerful in many cases actually use quantum Fourier transform to achieve that power (e.g., quantum phase estimation, order finding, and Shor’s factoring algorithm).
Implementation of quantum Fourier transform requires a large number of very fine rotations of the quantum state of a collection of qubits (a register). The purpose is to sum a large number of small terms to get the final terms of the transform, the qubits of the output register.
Each of these fine rotations is shifting the phase of each qubit by a small amount.
The phase shifts are accomplished by analog signals.
The analog signals are generated from discrete binary levels.
The discrete binary levels are converted from digital to analog form using a digital to analog converter (DAC).
The discrete binary levels are derived from normalized real values in the range of 0.0 to 1.0.
The normalized real values come from the exponents of imaginary (complex) exponentials.
Each exponent is the product of two integer terms divided by 2^n, where n is the number of qubits in the input register of the quantum Fourier transform. For relatively small values of n, 2^n is still reasonably small (2⁸ = 256, 2¹⁰ = 1024), but for larger values of n, 2^n is very large (2²⁰ = one million, 2³⁰ = one billion, 2⁴⁰ = one trillion, and 2³⁰⁰ is larger than the number of atoms in the entire universe).
If 2^n is very large, then dividing by 2^n produces a very tiny exponent for the exponential, resulting in a very tiny rotation, which may round to zero or whatever the average error rate is for the gate being executed.
If the DAC and analog signal processing are unable to accurately handle such small rotations, the result is an invalid value for the quantum Fourier transform.
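As a rough numerical illustration of that chain (my own hypothetical numbers, not measured data), the snippet below quantizes the ideal QFT phase angles 2π·j·k/2^n onto a hypothetical 16-bit grid and counts how many distinct ideal angles survive as distinct quantized values.

```python
# Illustration with hypothetical numbers: quantize the ideal QFT phase angles
# 2*pi*j*k / 2**n onto a 16-bit grid and count how many distinct ideal angles
# remain distinguishable after quantization.
import math

def quantize(angle: float, bits: int) -> int:
    """Map an angle in [0, 2*pi) to the nearest code on a 2**bits grid."""
    levels = 2 ** bits
    return round((angle % (2 * math.pi)) / (2 * math.pi) * levels) % levels

n = 24        # QFT register width (hypothetical)
bits = 16     # granularity budget (hypothetical)
k = 1         # fix one index, vary the other
codes = {quantize(2 * math.pi * j * k / 2 ** n, bits) for j in range(2 ** 16)}
print(f"{2 ** 16} distinct ideal angles -> {len(codes)} distinct quantized codes")
```

With only 16 bits of granularity, the overwhelming majority of the fine rotations that a 24-qubit QFT calls for become indistinguishable, and the very finest of them round all the way to zero.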
My conjecture
I offer it as a conjecture to be considered and reasoned or proven true or false:
- Lack of fine granularity of phase and probability amplitude is the fatal Achilles heel which dooms quantum computing to severely limited utility.
Or phrased a little differently:
- Limited granularity of phase and probability amplitude limits the utility of quantum algorithms to 24 to 36 qubits.
Implications of my conjecture
If my conjecture is true, then there are implications:
- A quantum algorithm can evaluate or consider no more than a few billion possibilities, at most.
- Maybe a lot less — maybe only a million or even a thousand or a hundred.
- And once shot count (circuit repetitions) is taken into account, any apparent advantage may get divided down to a much smaller advantage or even no advantage at all.
I can’t prove my conjecture, but technical specifications don’t disprove it either
No current vendor documents the granularity of phase or probability amplitude.
Digital data is converted to an analog signal using a digital to analog converter (DAC). The granularity of such signal conversion can be fairly decent, but even the best converters available can’t handle more than about 20 to 32 bits. Maybe that can be extended a little, by a few bits, but supporting more than 48 or 50 bits or 64 or 72 or 80 or 96 or 128 or 256 or 1K or 2K or 4K bits is clearly out of the question.
In short, maybe 20 or possibly 32 or 36 or 40 bits could be supported, but that dramatically limits the power of a quantum computer relative to promises of supporting hundreds, thousands, or millions of qubits.
I’m at least half-convinced that my conjecture is true
Although I can’t prove my conjecture to 100% absolute certainty, I do feel relatively confident that it is likely true.
I’m at least half-convinced that my conjecture is true. And that’s definitely above half, whether it’s 60% or 65% or even 70%.
I may be as much as 75% convinced that my conjecture is true. Maybe just a one in four chance that I’m mistaken.
You know, thinking about it some more, maybe I’m really at 80% confidence, with only a one in five possibility that I might be wrong.
And sometimes I actually do feel at least 90% certain, but always with at least some small chance that I might be mistaken.
To be clear, my conjecture is still only tentative and may yet be proven wrong
I am indeed reasonably convinced that my conjecture is true. Although I concede that there is some non-zero chance that I’m wrong, so I don’t consider my conjecture to be absolutely final or definitive, yet.
Nonetheless, I’m more afraid that my conjecture is true than that it is false.
Utility: quantum parallelism and quantum advantage
What do we mean by utility? Beyond the simple function of a program, which can be implemented even on a simple classical computer, by utility of a quantum computer or quantum algorithm or application we mean that a significant performance advantage is delivered. It has some significant usefulness that is well beyond what can be achieved with a classical computer.
A quantum computer or algorithm delivers a distinct performance advantage through quantum parallelism — evaluating a very large number of alternatives in parallel, which simply isn’t feasible with a classical computer.
Sure, you can use n classical computers to evaluate n alternatives in parallel, which works for relatively small n such as ten or fifty or even a thousand, but not for millions, billions, trillions or more.
Monte Carlo simulation (MCS) can also deliver interesting results using a classical computer, but once again only for a smaller number of possibilities, not for the very large range of possibilities which quantum computing promises.
A simple Hadamard transform can transform a simple computation into a massively parallel computation in a quantum algorithm. This is quantum parallelism.
The effect of this quantum parallelism is to deliver results much quicker than is possible using a classical computer.
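As a minimal sketch of that quantum parallelism (illustration only, using Qiskit’s statevector simulation), Hadamard gates on n qubits produce an equal superposition over all 2^n basis states, so a subsequent computation effectively acts on all of them at once.

```python
# Minimal sketch: Hadamard gates on n qubits create an equal superposition over
# all 2**n basis states (Qiskit's Statevector is used purely for illustration).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
qc = QuantumCircuit(n)
qc.h(range(n))                                # Hadamard on every qubit
state = Statevector.from_instruction(qc)
print(f"{2 ** n} basis states, each with amplitude {abs(state.data[0]):.4f}")
```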
Delivering results so quickly is referred to as quantum advantage.
Quantum advantage is a qualitative benefit, but it can be quantified as well, as in how much computational leverage is achieved — how many times faster is the quantum solution than the best classical solution. 1,000X, 1,000,000X, or even one quadrillion X compared to a classical solution.
Delivering a quantum advantage of one quadrillion X or more is dramatic quantum advantage and is the kind of utility which quantum computing has been promising.
Anything less than dramatic quantum advantage is… limited utility.
How severely limited will utility be?
If lack of fine granularity of phase and probability amplitude does severely limit the utility of quantum computers, algorithms, and applications, how severe might those limits be?
I personally consider the basic promise for quantum advantage to be dramatic quantum advantage, which I estimate at the computational leverage you can achieve with 50 fully-entangled qubits, which gives you 2⁵⁰ or one quadrillion quantum states or simultaneous possibilities which can be evaluated in parallel by a single quantum computation.
And that’s just the starting point — the quantum advantage would rise dramatically from there as you get to 64, 72, 80, 96, 128, and 256 qubits and more.
For more on my conception of dramatic quantum advantage, see my paper:
- What Is Dramatic Quantum Advantage?
- https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c
Such dramatic quantum advantage would definitely be far beyond the reach of a quantum computer lacking fine granularity of phase and probability amplitude.
I actually conceptualize more limited quantum advantage, particularly minimal and substantial quantum advantage:
- Minimal quantum advantage. A 1,000X advantage over a classical computer.
- Substantial or significant quantum advantage. A 1,000,000X advantage over a classical solution.
For more on those conceptions, what I call fractional quantum advantage, see my paper:
- Fractional Quantum Advantage — Stepping Stones to Dramatic Quantum Advantage
- https://jackkrupansky.medium.com/fractional-quantum-advantage-stepping-stones-to-dramatic-quantum-advantage-6c8014700c61
So, what exactly might we expect from quantum computers without very fine granularity of phase and probability amplitude? It would vary a lot, but some possibilities:
- 2X. Barely twice the performance of a classical computer.
- 4X-10X. Maybe ten times the performance of a classical computer.
- 20X-50X.
- 100X.
- 1,000X = 10 qubits.
- 10,000X.
- 50,000X.
- 64K X = 16 qubits.
- 256K X = 18 qubits.
- 1,000,000X = 20 qubits. May be a near-term practical limit (2–3 years).
- 16M X = 24 qubits. May be a near-term practical limit (2–3 years).
- One billion X = 30 qubits. Probably beyond the near term.
- Four billion X = 32 qubits.
- 32–40 qubits. Maybe theoretically possible, eventually.
- 48 qubits. Definitely cannot be expected.
And don’t forget that whatever the limit is, it would need to be further discounted by shot count (circuit repetitions). For example, even an advantage of one million would be reduced to a thousand if a shot count of 1,000 is required.
In fact, the shot count could easily wipe out all of the quantum advantage.
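The arithmetic of that discount is simple enough to sketch (hypothetical numbers, following the example above):

```python
# Back-of-the-envelope arithmetic for the shot-count discount described above
# (hypothetical numbers).
raw_advantage = 1_000_000   # e.g. roughly what a 20-qubit QFT might offer (2**20)
shots = 1_000               # circuit repetitions needed for a stable result
net_advantage = raw_advantage / shots
print(f"Net advantage after {shots:,} shots: {net_advantage:,.0f}X")   # 1,000X
```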
For more on shot counts, see my paper:
- Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer
- https://jackkrupansky.medium.com/shots-and-circuit-repetitions-developing-the-expectation-value-for-results-from-a-quantum-computer-3d1f8eed398
People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
The reason that people don’t currently see a problem with algorithms dependent on fine granularity of phase or probability amplitude is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem.
Current quantum hardware doesn’t support a quantum Fourier transform with enough qubits to run into the problem. Qubit fidelity, qubit connectivity, and maximum circuit depth are key limiting factors for current quantum hardware.
But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass, the problem will become manifest. Whether that is at 32 qubits, 40 qubits, or 50 qubits remains to be seen, but the limit is lurking somewhere out there, for sure.
Beware of Quantum Algorithms Dependent on Fine Granularity of Phase
This paper builds on and deepens technical support for a related paper that I wrote almost two years earlier:
- Beware of Quantum Algorithms Dependent on Fine Granularity of Phase
- https://jackkrupansky.medium.com/beware-of-quantum-algorithms-dependent-on-fine-granularity-of-phase-525bde2642d8
This paper should stand on its own, but the background of that earlier paper might make things a little more clear from a user perspective while this paper focuses on the technical detail.
Quantum Fourier transform is critically dependent on fine granularity of phase
Just to highlight that quantum Fourier transform will have significantly limited quantum advantage due to its reliance on fine granularity of phase.
Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase
Just to highlight that quantum phase estimation (QPE) and quantum amplitude estimation (QAE) will also have significantly limited quantum advantage due to their reliance on fine granularity of phase.
Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage
Just to highlight that any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage as quantum Fourier transform, quantum phase estimation, and quantum amplitude estimation.
What is a DAC (Digital to Analog Converter)?
As already mentioned briefly, the crux of the problem is the conversion from classical discrete digital values to classical continuous analog values and vice versa. These conversions are known as digital to analog conversion, otherwise known as DAC, and analog to digital conversion, otherwise known as ADC.
The hardware device performing the conversion from discrete digital bits to a continuous analog voltage signal is known as a DAC. Bits flow into a DAC and a voltage comes out.
These digital values are not values derived from qubit state, but the entries from the unitary transform matrices which specify quantum logic gates in the quantum circuit being executed.
The hardware device performing the conversion from a continuous analog voltage signal to discrete digital bits is known as an ADC. A voltage flows into an ADC and bits flow out.
DACs and ADCs are both chips or integrated circuits (ICs) which are mounted on printed circuit boards inside the quantum computer.
The specs (hardware specifications) for a DAC specify the resolution of the DAC, which is the number of discrete values or levels which can be represented as an input to the DAC. A DAC with a resolution of n bits can represent 2^n levels of analog voltage signal. When we say that a DAC has a resolution of 24 bits, it means that the DAC has 2²⁴ or approximately 16 million discrete analog signal voltage values.
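As a small sketch of what that resolution means in practice (my own simplification, assuming the DAC’s full-scale output maps onto a normalized value of 0.0 to 1.0), a requested value gets rounded to the nearest of the 2^k output codes, so anything much smaller than half of one step simply vanishes:

```python
# Sketch of DAC quantization under a simplifying assumption: a requested
# normalized value in [0.0, 1.0) is rounded to the nearest of 2**k output
# codes, so the worst-case error is half of one step, 1 / 2**(k + 1).
def dac_quantize(value: float, bits: int) -> float:
    levels = 2 ** bits
    code = round(value * levels)             # nearest representable code
    return min(code, levels - 1) / levels    # back to a normalized value

requested = 1 / 2 ** 30                      # the scale a 30-qubit QFT rotation needs
for bits in (16, 24, 32):
    delivered = dac_quantize(requested, bits)
    print(f"{bits}-bit DAC: requested {requested:.3e}, delivered {delivered:.3e}")
```

Under this simplification, the 16-bit and 24-bit cases deliver exactly zero, and only the 32-bit case preserves the requested value at all.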
For some general background on DACs, see the Wikipedia:
- Wikipedia article on Digital-to-analog converter
- https://en.wikipedia.org/wiki/Digital-to-analog_converter
The common DAC resolutions:
- 8-bit. 256 discrete values.
- 10-bit. 1024 discrete values.
- 12-bit. 4096 discrete values.
- 14-bit. 16K discrete values.
- 16-bit. 64K discrete values.
- 18-bit. 256K discrete values.
- 20-bit. One million discrete values.
- 24-bit. Sixteen million discrete values.
- 32-bit. Four billion discrete values.
- Beyond 32 bits. To the best of my current knowledge, at present there are no DACs beyond 32 bits.
Companies that manufacture DACs include, among others:
Where is the DAC?
Not so much where it is physically — which is in the control rack, the boxes of electronics outside the fridge (the cryogenic dilution refrigerator or cryostat) which send all of the coax cables into the fridge, but where it sits logically, in terms of code and data movement.
The DAC is completely invisible at the quantum circuit and gate level where unitary transform matrices dominate.
Essentially, the DAC is down at the pulse level. It’s the hardware controlled by the software and firmware (FPGA) which translates the entries of a unitary transform matrix into pulse operations which actually control the qubits. The digital form of a pulse goes into the DAC and comes out as the analog signal or waveform that actually drives the qubit, although there may be additional analog circuitry beyond the DAC before the actual qubit is reached.
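To visualize that flow (a rough sketch of my own, not any vendor’s actual pulse pipeline; the Gaussian envelope and sample count are arbitrary illustrative choices), imagine a gate’s drive envelope being sampled and each sample then snapped to one of the DAC’s 2^k discrete codes before anything reaches the qubit:

```python
# Rough sketch (not any vendor's actual pulse pipeline): a Gaussian-like drive
# envelope is sampled, then each sample is snapped to one of a k-bit DAC's
# discrete codes, introducing a worst-case error of about half a code step.
import numpy as np

def gaussian_envelope(amplitude: float, samples: int = 160, sigma: float = 40.0):
    t = np.arange(samples)
    return amplitude * np.exp(-0.5 * ((t - samples / 2) / sigma) ** 2)

def through_dac(waveform: np.ndarray, bits: int) -> np.ndarray:
    levels = 2 ** bits
    return np.round(waveform * (levels - 1)) / (levels - 1)   # snap to DAC codes

ideal = gaussian_envelope(amplitude=0.1)      # a typical-looking drive envelope
for bits in (8, 12, 16):
    worst = np.abs(through_dac(ideal, bits) - ideal).max()
    print(f"{bits}-bit DAC: worst-case sample error {worst:.2e}")
```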
The DAC effectively limits the granularity of phase and probability amplitude
The DAC controls access to the quantum state of qubits since it controls the pulses which control qubits. If the DAC can only handle 2^k discrete values for a relatively small value of k (no more than 32 or so), that severely limits the granularity or gradations of phase and probability amplitude.
Whether the DAC is the absolute limiting factor is not clear, but at a minimum it is one of the key limiting factors, and it is likely to be the key or most limiting factor. It is at least the most prominent and obvious factor.
Analog signals likely also severely limit the granularity of phase and probability amplitude
Even if the DAC could handle a lot more bits, it’s not clear that an analog signal could handle a comparable number of discrete voltage levels. The number of discrete analog voltage levels is likely fairly limited. Maybe thousands and even millions, but likely not more than low billions, and not the trillions, quadrillions, and a lot more that quantum computing would seem to require.
Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude
There may also be additional analog circuitry after the DAC itself which might compromise or distort the analog signal before it is sent to the qubit itself, which could also limit the granularity of phase and probability amplitude.
The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude
Generally, and dependent on the specific qubit technology, the analog signal that comes out of the DAC and related analog circuitry will eventually have to be transported to the qubit, generally as an electromagnetic signal, typically either a microwave pulse or a laser pulse, and then received by the qubit where it once again becomes an analog signal, which will then be applied to the quantum device itself within the qubit.
The qubit itself limits the granularity of phase and probability amplitude
Once the analog signal comes out of the DAC and travels through additional analog circuitry and devices and has been transported to the qubit and is actually received by the qubit itself, then the qubit state is manipulated at the quantum level. We have no transparency on how reliably the analog signal is applied to the quantum state in the qubit itself, but it seems reasonable to presume that this process might not guarantee the fidelity of the granularity of phase and probability amplitude.
Ultimately the physics of the underlying quantum phenomenon used to implement the quantum state by the qubits limits the granularity of phase and probability amplitude
Once the analog signal has been transported and received by the qubit and additional analog logic is used to apply the analog signal to the quantum state of the qubit itself, we encounter the ultimate limiting factor: the physics of the underlying quantum phenomenon used to implement the quantum state by the qubit. That physics, the quantum physics of manipulating quantum state at the quantum level, will be the ultimate limit on the granularity of phase and probability amplitude.
Unfortunately, we lack full transparency into the actual physics of the quantum phenomenon at work deep inside a qubit.
Some ultimate limit at the Planck level as well as quantum uncertainty itself
There is likely some ultimate limit down at the Planck level which absolutely limits granularity of phase and probability amplitude, and hence the maximum number of gradations.
I haven’t seen any hints of what the Planck units would be for phase and probability amplitude, but even if they were roughly ten to the minus fifty, that would mean that a quantum Fourier transform of 200 qubits, requiring an exponential divisor of 2²⁰⁰, which is roughly 10⁶⁰, would require finer gradations than a Planck unit could supply.
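A quick check of that arithmetic (2^200 versus a hypothetical granularity limit of one part in 10^50):

```python
# Quick arithmetic check of the figures cited above: the gradations a 200-qubit
# QFT would need (2**200) versus a hypothetical limit of one part in 10**50.
required_gradations = 2 ** 200
hypothetical_limit = 10 ** 50
print(f"2**200 = {required_gradations:.3e}")                        # roughly 10**60
print(f"Shortfall factor: {required_gradations / hypothetical_limit:.1e}")
```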
And there is undoubtedly some degree of quantum uncertainty which limits the granularity at the level of the quantum effect being used to represent and manipulate quantum state within the qubit.
Again, I have no knowledge and have seen no hints or documentation or specification of what that quantum uncertainty might be for phase or probability amplitude, although I’m fairly confident that it wouldn’t be as fine as one part in a billion or a trillion or a quadrillion or more, as large quantum Fourier transforms of fifty to a hundred to two hundred qubits or beyond would require.
General uncertainties around all factors
Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity, so that a measured value might have a coarser granularity than can be initialized or controlled.
Certainly at the quantum level there is outright quantum uncertainty. Even if you can nominally control granularity to a fairly fine level, the quantum uncertainty might be much larger or maybe just a little larger than the nominal granularity.
Even above the quantum level, the various software, digital, and analog interfaces may allow values to be specified at a finer level of granularity than the physical system actually supports. Or, the underlying physical system may support the specified level of granularity but with some modest wobbling or oscillating which results in an uncertainty when the value is measured, observed, transferred, or used to control some other part of the system.
Every factor listed here and every level of the quantum computer system may have its own nominal level of granularity and nominal uncertainty.
It’s not clear which of the factors is the more limiting factor
In truth, I’m not sure which is the more limiting factor:
- Discrete values handled by the DAC. Resolution of k means 2^k discrete values.
- Limits of discrete voltage levels for an analog signal. Could it really be in the millions, billions, trillions, and quadrillions and much higher?
- Limits of additional analog circuitry after the DAC. Necessary analog circuitry after the DAC but before the analog signal is sent to the qubit itself may also impose limitations.
- Transmission of the analog signal to the qubit. Conversion of analog to electromagnetic signal (microwave or laser), and reception by the qubit and conversion back to analog.
- Application of the received analog signal to the quantum state of the qubit. How reliably can the phase or probability amplitude be applied to the quantum state in the qubit.
- The physics of the underlying quantum phenomenon used to implement the quantum state by the qubit. No matter how good our design and the quality of our science and engineering prowess, the underlying physics is what it is. If only we knew what that is.
- Some ultimate limit at the Planck level as well as quantum uncertainty itself.
- General uncertainties around all factors.
Ultimately, it doesn’t really matter which is the ultimate limit, but it would be nice to know which could be improved through better science and better engineering and have some net beneficial impact.
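As a purely illustrative sketch of the point above, the usable granularity is bounded by the weakest stage in the chain, whichever stage that turns out to be. All of the stage values in this Python snippet are hypothetical placeholders, not measurements of any real system.

```python
# Toy model: the usable number of gradations is the minimum across all stages.
# Every number here is a hypothetical placeholder for illustration only.
stages = {
    "DAC (20-bit)": 2 ** 20,
    "analog circuitry after the DAC": 2 ** 18,
    "transmission of the analog signal to the qubit": 2 ** 17,
    "application of the signal to the qubit state": 2 ** 16,
    "underlying quantum physics of the qubit": 2 ** 16,
}
usable = min(stages.values())
limiting = min(stages, key=stages.get)
print(f"usable gradations: {usable} (limited by: {limiting})")
```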
I personally don’t know enough about classical digital and analog electronics to figure all of this out
This is an informal paper. As such, it does not include all of the fine detail of classical digital and analog electronics which would be necessary to fully evaluate all of the questions and issues that this paper raises.
That said, I do think that I have reached a sufficient depth of detail to at least phrase the questions and issues in enough detail for others to finish the task.
And I do apologize in advance for any errors in math or electrical engineering contained herein since I am neither a card-carrying mathematician nor a card-carrying electrical engineer, although I have taken college-level and grad school-level courses in both.
Gate execution and qubit measurement
The hardware needed to execute quantum logic gates will undoubtedly have some interaction with the granularity of phase and probability amplitude. I don’t have any specific knowledge here, but there is likely to be some impact on the accuracy of the results of a quantum computation.
Similarly, the hardware needed to perform a measurement of a qubit will undoubtedly have some interaction with the granularity of phase and probability amplitude. I don’t have any specific knowledge here, but there is likely to be some impact on the accuracy of the results of a quantum computation.
The precise impacts on the fidelity of quantum algorithms are unknown and uncertain. Depending on the specific case they may be:
- Relatively minor or insignificant.
- Modest to moderate.
- Significant to extreme.
- Prohibitive. Results are unusable.
Fidelity of SWAP gates
SWAP gates are really no different from any other two-qubit quantum logic gate; in fact, a SWAP gate is typically just a macro operation which expands to three CNOT gates. But SWAP has a special role in simulating connectivity between qubits which do not have native connectivity: when two qubits are not adjacent, one or both of them must be repeatedly swapped with adjacent qubits until the quantum state of the original two qubits has been moved to two physically adjacent qubits where the desired two-qubit gate can be executed. Any issues with the granularity of phase and probability amplitude can be amplified dramatically if numerous SWAP gates are required, a so-called SWAP network or qubit routing.
The concern here is that the fidelity of the granularity of phase and probability amplitude can be significantly reduced due to a reliance on SWAP networks.
Quantum algorithms which do not rely heavily on fine granularity of phase or probability amplitude will be impacted to a lesser degree, while quantum algorithms which rely on very fine granularity of phase and probability amplitude could be impacted greatly or even to the extent that the results are unusable.
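To make the cost concrete, here is a minimal Qiskit sketch of the standard identity that a SWAP gate expands to three CNOT gates; every additional hop in a SWAP network repeats this expansion, compounding whatever per-gate imprecision exists in phase and probability amplitude.

```python
from qiskit import QuantumCircuit

# The standard decomposition: SWAP(q0, q1) = CNOT(0,1) CNOT(1,0) CNOT(0,1).
swap_as_cnots = QuantumCircuit(2)
swap_as_cnots.cx(0, 1)
swap_as_cnots.cx(1, 0)
swap_as_cnots.cx(0, 1)

# Routing two distant qubits across d intermediate hops costs roughly 3*d
# CNOTs before the intended two-qubit gate even executes.
print(swap_as_cnots.count_ops())   # three CNOTs
```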
Do the same limits apply to both phase and probability amplitude? Likely but not certain
Since phase and probability amplitude are combined into the same complex number, I presume that they have the same limitations, but that’s at the physics modeling level, so it doesn’t necessarily tell us what limits apply down at the level of the actual physical phenomenon.
To the extent that the math for both goes through the same software, the same DAC, and the same analog signal processing and transport process, I wouldn’t expect any difference.
But what happens within the qubit and down at the level of the quantum physical level of the quantum effects which implement quantum state is another matter.
I suspect that phase might be limited differently at the quantum level, but it’s not clear whether the quantum level is the ultimate limit on granularity of phase and probability amplitude.
If the DAC is indeed the critical limiting factor, then both phase and probability amplitude likely have the same overall limitations.
But the bottom line is that I really don’t know for sure, although it is likely that the same limits do apply, at least in a general and rough sense.
How did this all happen? Too many silos with gaps between them
It’s common for engineers and other technical staff to work in their own narrow and relatively isolated silos with only formally-negotiated interfaces between them. The net result is gaps in knowledge between silos, especially between silos which are not adjacent.
Some of the silos involved with quantum computing:
- Theoretical mathematicians.
- Applied mathematicians.
- Computer science researchers.
- Applied computer scientists.
- Application developers.
- Algorithm designers.
- Algorithm researchers.
- Software interface designers.
- Software interface developers.
- Quality assurance engineers.
- Quality assurance technicians.
- Digital electrical engineers.
- Analog electronics engineers.
- Analog component designers.
- Analog integrated circuit designers.
- Digital integrated circuit design engineers.
- Integrated circuit fabrication engineers.
- Applied physicists.
- Experimental physicists.
- Theoretical physicists.
The problem here is that the technical issue spans a very wide range of technical silos, everything from application developers to physicists designing qubits and even physicists theorizing the quantum phenomena upon which the qubits are based.
That’s too wide a range for everybody to understand all of the silos.
But even worse, and fatal here, is that apparently nobody understands all of the silos well enough to assure that the granularity of phase and probability amplitude functions as everybody at all levels, in all silos, expects it to.
The net result is the technical issue which this informal paper focuses on — lack of fine granularity of phase and probability amplitude.
The other key factor on this particular issue is that nobody will run into it until qubit fidelity, qubit connectivity, and circuit depth reach enough of a critical mass so that quantum circuits will actually run into situations where the lack of fine granularity of phase and probability amplitude actually makes a difference, in a negative way.
In other words, there’s currently no incentive to address the gaps between the isolated silos.
People don’t currently see a problem because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem
The reason that people don’t currently see a problem is because limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. Current hardware precludes implementation of quantum Fourier transform with more than a few qubits, so people aren’t even trying to use quantum Fourier transform.
But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass and people start trying to use quantum Fourier transform for 16, 20, 24, and more qubits, then the problem will become manifest.
When will we hit the wall and start seeing the problem? Maybe two to three years
It’s hard to say for sure when we will hit the wall and people will start seeing the problem. Not in the next year to 18 months, but likely within two to three years.
The gating factors are the need for greater qubit fidelity, full qubit connectivity, and much deeper circuit depth sufficient to support quantum Fourier transform on 20 or more qubits.
On some systems we may see the problem with as few as 16 qubits, while on other systems it may not be until users attempt quantum Fourier transforms with 24 or more qubits.
Who knows, if progress is slow enough on the hardware front we may not see the problem for three or four years.
Two to three years seems like a good bet, for now.
Quantum information — discrete binary and continuous values
Quantum information, which is represented as quantum state in qubits, is a curious combination of discrete binary values, 0 and 1, and continuous values — phase and probability amplitude, which range from 0.0 to 1.0 as well as negative values for phase. It is those continuous values which are of interest in this paper.
Even though phase and probability amplitude are continuous, real values — technically an infinity of values between 0.0 and 1.0 (and negative values for phase), at some level in the software and firmware stack those continuous real values are converted to discrete digital values to be used as input to a DAC, which then converts them to a continuous voltage level.
But to be clear, the DAC never sees values from qubits, only the values from the entries in the unitary transformation matrices for the quantum logic gates which comprise the quantum circuit to be executed.
On the other hand, the ADC or analog to digital converter does see analog values read from qubits and converts them to discrete digital values, but this is completely separate from the processing which uses the DAC.
The quantum algorithm designer or quantum application developer will see only these continuous real values from the unitary transformation matrix entries and not have any sense of the discrete digital values or the conversion from those discrete digital values to a continuous analog voltage level by the DAC.
The number of bits of precision or resolution of the DAC and conversion process will be similarly hidden from view of the quantum algorithm designer and quantum application developer.
For more on quantum information, see my paper:
- What Is Quantum Information?
- https://jackkrupansky.medium.com/what-is-quantum-information-ae006642625
Some alternatives
Even if my conjecture is correct, there might be some viable alternatives:
- Maybe there is an alternative approach to implementing quantum Fourier transform.
- Maybe there is an alternative to using quantum Fourier transform. An alternative algorithmic building block.
- Maybe there are alternatives that have as much or more power than quantum Fourier transform. An approach that is even better than quantum Fourier transform.
- Maybe placing the classical electronics in the cryostat could reduce interference enough to enable a few more bits of precision.
Whether any such alternatives exist is itself a conjecture, but it can’t be ruled out.
I don’t believe that such alternatives actually exist, but I mention them for completeness and for the remote possibility that they might actually exist.
And also to raise the opportunity for funding of research to seek out such alternatives.
Are there any reasonable alternatives to quantum Fourier transform?
This is a great question, whether there are any reasonable alternatives to quantum Fourier transform, but nobody has a good answer at this stage. So far, quantum Fourier transform has been the best that the best people could come up with.
Still, it’s a great research topic to fund.
What might be more powerful than quantum Fourier transform?
This is really just a variation of the previous question. At present, quantum Fourier transform is simply the most powerful algorithmic building block that the best people have been able to come up with.
Some alternatives that people might suggest:
- Variational methods. They work, but don’t really offer a lot of computational power or any opportunity for truly dramatic quantum advantage. Maybe they might work extremely well for some niche applications, but nobody has discovered any yet.
- Order finding. Extremely powerful, the basis for Shor’s factoring algorithm, but it relies on quantum Fourier transform for much of its power anyway.
- Grover search. Only quadratic speedup. Not a viable alternative, even if it’s popular.
- That’s about it. Really. This list of known possibilities is that short.
- Fund more research! This is really important. Maybe some blue-sky research could uncover some possibilities.
No, variational methods don’t show any promise of delivering any dramatic quantum advantage
If quantum Fourier transform cannot be used, one category of alternatives are variational methods. Unfortunately, they are not anywhere near as powerful as quantum Fourier transform.
They work, in a fashion, but don’t really offer a lot of computational power or opportunity for truly dramatic quantum advantage. Maybe they might work extremely well for some niche applications, but nobody has discovered any yet. So far, only mediocre results, at best.
And they have difficulties such as so-called barren plateaus which make them difficult to work with and problematic as well.
Mostly such an approach simply confirms that a solution can be implemented on a quantum computer, not that such a solution has any great advantage over classical solutions.
The incremental and iterative nature of a variational method eliminates the potential for any dramatic quantum advantage, even if some more modest fractional quantum advantage might still be possible.
While a quantum Fourier transform might evaluate 2^n possible solutions all at once, a variational method will only evaluate 2^k possible solutions at a time, where k is much smaller than n, and a sequence of attempts for different ranges of 2^k solutions must be attempted iteratively using classical code. So, there is far less than a 2^n computational advantage over classical methods. In fact, the advantage isn’t even 2^k since a sequence of attempts must be made, with a classical optimization step between each of them.
In short, reliance on variational methods will not deliver the full promise of quantum computing, no dramatic quantum advantage. Any quantum advantage of a variational method will be modest at best, a fractional quantum advantage.
For more on variational methods, see the Google tutorial:
Are there any reasonable alternative approaches to implementation of quantum Fourier transform?
Once again, this is a great question, whether there are any reasonable alternative approaches to implementation of quantum Fourier transform, but nobody has a good answer at this stage. So far, the existing implementations of quantum Fourier transform have been the best that the best people could come up with.
Still, it’s a great research topic to fund.
Might an alternative quantum computing architecture or programming model avoid my concern?
Another great question, whether my concern is simply an artifact of the existing quantum computing architecture and programming model, so that some alternative quantum computing architecture or programming model might avoid my issue, but nobody has a good answer at this stage. So far, the existing quantum computing architecture and programming model have been the best that the best people could come up with. And there has been no hint of alternatives which avoid my issue.
Still, it’s a great research topic to fund.
Is a radically different technology and architecture, and maybe even programming model required to get past my concern?
Restating the point of the preceding section a bit more strongly, maybe we can only get past my concern with an approach to quantum computing that is not just moderately different but radically different.
Again, much research is needed. Particularly in:
- Qubit technology.
- Qubit control.
- Quantum processor architecture.
- Programming model.
And until then maybe current technology is stuck or limited in the doomed manner described by this paper until such radically different technology should become available.
Might placing the classical electronics in the cryostat reduce interference to enable a few more bits of precision?
Who knows, maybe it is possible that placing the classical electronics in the cryostat might reduce interference and noise to enable a few more bits of precision for granularity of phase and probability amplitude. That wouldn’t completely eliminate the issue, but give us a little breathing room.
What level of precision might be achieved:
- 32 bits? Since it might not otherwise be practical.
- 36 bits? Possibly.
- 40 bits? Maybe.
- 44 bits? This might be the practical limit.
- 48 bits? I kind of doubt it.
Achieving 32 to 44 qubits of precision for quantum Fourier transform could be a fairly big deal. It’s certainly not up at the 50-bit one quadrillion level of dramatic quantum advantage, but a trillion would be a very interesting level of advantage over classical computing.
In any case, it sure seems like an interesting research topic.
Bits of precision vs. qubits of precision
There is some ambiguity over whether we should always refer to bits of precision, such as for classical digital circuitry such as a DAC, or qubits of precision when referring to a quantum Fourier transform.
Part of the confusion is that when measured, the qubits of the output register of a quantum Fourier transform will in fact be classical bits as the wavefunction is collapsed, but the output register itself, before measurement, is clearly qubits.
As a practical matter, both terms can be used as synonyms.
As a theoretical, technical matter, it’s qubits when they have quantum state and bits when only the classical state matters.
On the other hand, the whole point of a quantum Fourier transform is to derive a collection of classical bits from the aggregate quantum state of a collection of qubits, so it may simply be a question of when the bits transition from being primarily quantum to primarily classical.
The flip side of that is that the input data is also classical bits, n classical bits, which are transformed into 2^n quantum states in n qubits before ultimately being transformed back into n classical bits upon completion and measurement of the results of the quantum Fourier transform.
Currently this limitation is a non-issue since current limits on qubit fidelity and connectivity prevent running into these limits
Although my concern is very real and looming over the horizon, it is not a near-term issue since the current limits on qubit fidelity and qubit connectivity prevent quantum circuits from running into this limitation of granularity of phase and probability amplitude.
And generally speaking, anything beyond a fairly trivial quantum Fourier transform isn’t currently practical anyway.
The exponential math issue
The heart of the math for a quantum Fourier transform is an exponential term of the following form in the entries of the unitary transformation matrices:
- EXP( two pi i j k / (2^n) )
Where n is the number of qubits being transformed, and j and k each iterate from 0 to 2^n minus one, indexing the rows and columns of the matrix. i is not a variable and simply indicates that the entire term of the exponent is the imaginary part of a complex number.
This exponential appears as an entry in the unitary transformation matrix which is calculating one term of the quantum Fourier transform.
The essential problem is that divisor, 2^n, which can become extremely large even for relatively modest values of n. And the consequential problem is that a real DAC can’t handle a very large number of discrete values.
Some values of n and 2^n:
- n = 3, 2³ = 8.
- n = 8, 2⁸ = 256.
- n = 10, 2¹⁰ = 1024.
- n = 12, 2¹² = 4096.
- n = 16, 2¹⁶ = 64K.
- n = 20, 2²⁰ = one million.
- n = 24, 2²⁴ = sixteen million.
- n = 30, 2³⁰ = one billion.
- n = 32, 2³² = four billion.
- n = 40, 2⁴⁰ = one trillion.
- n = 50, 2⁵⁰ = one quadrillion.
And as indicated earlier, even a 20-bit DAC might be problematic.
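To illustrate the divisor problem numerically, here is a small Python sketch comparing the finest phase increment an n-qubit quantum Fourier transform assumes (2 pi / 2^n) against the finest increment a hypothetical 20-bit DAC could distinguish; the 20-bit figure is an assumption for illustration, not a vendor specification.

```python
import math

DAC_BITS = 20                                # hypothetical DAC resolution
dac_step = 2 * math.pi / 2 ** DAC_BITS       # finest angle a 20-bit DAC resolves

for n in (10, 16, 20, 24, 32, 40, 50):
    qft_step = math.ldexp(2 * math.pi, -n)   # 2*pi / 2**n, the finest QFT phase
    verdict = "resolvable" if qft_step >= dac_step else "finer than the DAC step"
    print(f"n = {n:2d}: finest QFT phase {qft_step:.3e} rad -> {verdict}")
```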
Inverse quantum Fourier transform
While we’re on the exponential terms, it is worth noting that for each quantum Fourier transform there is also an inverse quantum Fourier transform which, as one might expect, transforms in the opposite direction.
This is comparable to classical electronics where a Fourier transform maps from the time domain to the frequency domain, while the inverse Fourier transform maps from the frequency domain back to the time domain.
The overall math for an inverse quantum Fourier transform has the same overall format, but with a negative exponent rather than a positive exponent.
So, if the quantum Fourier transform uses:
- EXP( two pi i j k / (2^n) )
Then the inverse quantum Fourier transform uses:
- EXP( minus two pi i j k / (2^n) )
The issue discussed by this paper remains the same regardless of whether a quantum Fourier transform or an inverse quantum Fourier transform is being used. The only distinction is the sign of the exponent in the exponential term. The magnitude of the value which will be sent through the DAC will be the same.
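As a quick numerical sanity check of the sign relationship just described, the following sketch builds the forward and inverse transform matrices directly from the EXP( two pi i j k / (2^n) ) form for n = 3 and confirms that their product is the identity.

```python
import numpy as np

n = 3
N = 2 ** n
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
qft = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)     # forward: positive exponent
iqft = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)   # inverse: negative exponent
print(np.allclose(qft @ iqft, np.eye(N)))             # True
```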
Euler’s formula
Just as an interesting aside, which might help guide intuition, there is a concept called Euler’s formula which says that the following two expressions are exactly equivalent:
- EXP(i x)
- COS(x) + i SIN(x)
Again, i is not a variable, but simply indicates that the term is the imaginary part of a complex number.
In short, the exponential is really sine and cosine, which highlights the fact that rotation is involved.
And, by definition, the values of the SIN and COS functions range from minus 1.0 to plus 1.0. So they are effectively normalized.
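For intuition, a one-line numerical check of the identity, sampled at a few angles:

```python
import numpy as np

# Euler's formula: EXP(i x) equals COS(x) + i SIN(x), checked numerically.
x = np.array([0.0, np.pi / 4, np.pi / 2, np.pi, 2 * np.pi])
print(np.allclose(np.exp(1j * x), np.cos(x) + 1j * np.sin(x)))   # True
```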
For more on Euler’s formula see the Wikipedia:
Two pi conversion factor from normalized phase to phase angle in radians
If you recall the exponential term from three sections ago:
- EXP( two pi i j k / (2^n) )
The two pi may seem a little odd. It is there as a conversion factor to map phase from a normalized value in the range 0.0 to 1.0 to a phase angle in radians in the range 0.0 to two pi.
Some common mappings:
- 1.0 to two pi. 360 degrees, a full circle or cycle.
- 0.5 to pi. 180 degrees, a half circle or cycle.
- 0.25 to pi over two. 90 degrees, a quarter circle or cycle.
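The same mappings, expressed as a trivial Python sketch:

```python
import math

# Normalized phase -> radians (times two pi) -> degrees.
for normalized in (1.0, 0.5, 0.25):
    radians = 2 * math.pi * normalized
    print(f"{normalized} -> {radians:.6f} rad -> {math.degrees(radians):.0f} degrees")
```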
Phase angle of two superposed basis states
Just to make clear what is really going on, the quantum Fourier transform is referring to the phase angle for the two basis states, representing 0 and 1.
I personally don’t know enough about the underlying physics, but I am reluctant to be too presumptuous about how fine-grained the phase angle for the two basis states can actually be.
There are two separate matters:
- The gross magnitude of the phase angle.
- How small the differences can be between two phase angles.
I’d like to see more discussion of this by the physicists or the quantum hardware engineers who actually do understand the physics.
Most importantly, I want to know what they actually know and can measure and detect and otherwise prove. I’m not interested in any hand-waving arguments that might be made.
And I also don’t want to be involved in any trial and error testing to see what it happens to be. We really need to know what it should be and what the vendor and the specifications are committed to. We need full transparency. Guesswork and speculation are not appropriate.
Mapping phase angle to the input range for the DAC
As mentioned earlier, we start with phase in normalized form, 0.0 to 1.0 (as well as negative values), then a two pi conversion factor is thrown in to convert the normalized phase to a phase angle in radians, which is what appears in the unitary transformation matrix, in the exponential terms.
But then, down deep in the software near the hardware when we wish to feed the phase into the DAC hardware to convert it to an analog voltage, we need to normalize it again from the range of 0.0 to two pi, to a range of 0 to 2^k minus one where k is the precision or resolution of the DAC.
I’m not sure what happens with the negative values, whether they simply wrap around and switch the sign, or whether radians from negative two pi to positive two pi get normalized to 0.0 to four pi, so that the number of gradations is effectively cut in half. It would be great to have some transparency on this.
Errors and approximations can creep in during all of these conversions.
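As a sketch of what that final normalization might look like, assuming one plausible convention in which negative angles simply wrap around modulo two pi, and assuming a purely hypothetical 16-bit DAC:

```python
import math

def phase_to_dac_code(phase_radians: float, dac_bits: int = 16) -> int:
    # Wrap into [0, 2*pi), then scale onto 2**dac_bits discrete codes.
    levels = 2 ** dac_bits
    wrapped = phase_radians % (2 * math.pi)      # negative angles wrap around
    return round(wrapped / (2 * math.pi) * levels) % levels

print(phase_to_dac_code(math.pi / 2))            # a quarter turn
print(phase_to_dac_code(-math.pi / 2))           # wraps to three quarters of a turn
print(phase_to_dac_code(2 * math.pi / 2 ** 24))  # too fine: rounds down to code 0
```

If the hardware instead folds negative angles some other way, the number of usable gradations could effectively change, which is exactly the kind of detail that needs to be documented.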
Limited precision of entries in a unitary transform matrix
There’s no clarity as to how much precision is supported by the entries of unitary transform matrices (UTM) for various quantum computers and whether floating point exponents can be small enough to handle the very small exponents of exponentials which have a divisor of 2^n where n could be very large.
Precision of the entries in the matrices will depend on a combination of the programming language used to implement the interface to the low-level software which interfaces to the quantum hardware as well as the implementation of the low-level software used to implement the hardware interface, including the code which actually maps the transformation matrix entries to hardware operations that manipulate qubits.
Double precision floating point can handle a binary exponent as large as 1023, for values up to roughly 2¹⁰²³, and as small as minus 1022, for normalized values down to roughly 2⁻¹⁰²² (about 2⁻¹⁰⁷⁴ if subnormal values are counted).
That could still be fine for quantum algorithms and applications using only hundreds of qubits, but wouldn’t be sufficient for Shor’s factoring algorithm for factoring numbers with thousands of bits.
And this is just the math at the software level, not to mention how many bits the DAC or analog voltage levels can handle.
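A quick way to see the floating point limit, independent of any DAC, is to compute the finest QFT phase increment 2 pi / 2^n in double precision and watch it underflow to zero once n gets into the thousands:

```python
import math
import sys

print(sys.float_info.min)                    # smallest normalized double, about 2.2e-308
for n in (20, 50, 200, 2048, 4096):
    step = math.ldexp(2 * math.pi, -n)       # 2*pi / 2**n without computing 2**n
    print(f"n = {n:4d}: finest phase increment = {step}")
# The 2048- and 4096-qubit cases print 0.0: the increment underflows entirely.
```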
Total qubits vs. quantum Fourier transform precision
This paper is not speaking to the total number of qubits used in a quantum computation or quantum circuit, or the total number of qubits in a quantum computer, but simply to the qubits dedicated to a single quantum Fourier transform.
A given quantum circuit could also have any number of additional or ancillary qubits.
And it’s possible that a single quantum circuit might execute two or more quantum Fourier transforms.
And there may be any number of additional computational steps using any number of qubits besides those dedicated to the quantum Fourier transform.
And, finally, as a general proposition, a quantum Fourier transform will transform quantum states from an input register to an output register, which doubles the number of qubits used by the quantum Fourier transform.
In short, the total number of qubits used by a quantum algorithm could be substantially greater than the maximum number that can be used in a single quantum Fourier transform.
For example, a quantum algorithm performing a 24-qubit quantum Fourier transform might use 60 or 80 or 96 or even 115 or more qubits.
To be clear, a 24-qubit quantum Fourier transform would require 48 qubits, double the nominal 24-qubit precision, to account for having both an input and an output register.
But to be clear, if you had a 48-qubit quantum computer, you could only implement a maximum of a 20-qubit quantum Fourier transform since a typical quantum algorithm requires at least a few ancillary qubits. You might be able to squeeze in a 22-qubit quantum Fourier transform if your algorithm required no more than four ancillary qubits.
Pulses to control qubits
Ultimately the execution of the quantum logic gates which comprise a quantum circuit involves translating the math of the entries of the unitary transformation matrix of each gate into a sequence or graph of one or more electromagnetic pulses which then directly operate on the quantum state of the specified qubits.
For some qubit technologies those pulses are lasers (such as for trapped-ion qubits) while for other qubit technologies those pulses are microwaves (such as for superconducting transmon qubits.)
There are various layers of software, firmware, and hardware — both digital and analog — required to complete this complete process, including components such as FPGAs (Field Programmable Gate Arrays) and DACs and ADCs (Digital to Analog Converters and Analog to Digital Converters.)
The pulse level is one level (actually several) of this process.
Pulse control
Pulses have a starting time, a duration, a shape, and an amplitude and phase. These parameters permit the hardware to generate the waveform for the pulse. And they are performed on a specified channel or device. Even a single qubit can have multiple channels.
Toolkits such as IBM Qiskit Pulse allow the quantum circuit designer to fully control all aspects of pulses operating on the qubits of a quantum computer.
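As a rough sketch of what that looks like in practice, here is a small example using the Qiskit Pulse builder interface (as documented for recent Qiskit versions) to play a Gaussian drive pulse and then request a very small phase shift; whether the control electronics can faithfully realize a shift that small is precisely the transparency question this paper keeps raising. The specific pulse parameters are arbitrary placeholders.

```python
import math
from qiskit import pulse
from qiskit.pulse.library import Gaussian

with pulse.build(name="tiny_phase_shift") as sched:
    d0 = pulse.DriveChannel(0)
    # Arbitrary placeholder pulse parameters (sample count, amplitude, width).
    pulse.play(Gaussian(duration=160, amp=0.1, sigma=40), d0)
    # Request a phase shift of 2*pi / 2**24 radians, the kind of tiny rotation
    # a 24-qubit QFT would need. Nothing in the API documents whether the
    # hardware can actually resolve an angle this small.
    pulse.shift_phase(2 * math.pi / 2 ** 24, d0)

print(sched)
```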
No hints about pulse precision in IBM Qiskit Pulse documentation and tutorials
I skimmed through a fair amount of information about IBM Qiskit Pulse, but was unable to find anything that hinted at the precision or resolution of pulses.
No hint of how many gradations of phase or probability might be supported.
In short, pulse control is interesting, but wasn’t any help for addressing the concerns of this paper.
More transparency in the documentation and tutorials for IBM Qiskit Pulse would be helpful.
Notes on IBM pulse control
You can mostly ignore this section since it’s merely notes that I made to myself when reviewing presentations on IBM Qiskit Pulse control. I preserved my notes here lest I lose track of where they were.
A couple of IBM pages I reviewed on Qiskit Pulse control:
- https://qiskit.org/textbook/ch-quantum-hardware/calibrating-qubits-pulse.html
- https://qiskit.org/documentation/apidoc/pulse.html
I found no hint of precision for the Pulse API in the doc.
Introductory video on Pulse:
- OpenPulse: Software framework for quantum computing with pulses
- https://www.youtube.com/watch?v=uBw2fo1rwr8
- Now called Qiskit Pulse
- Presenter: Lauren Capelluto, Quantum Software Engineer, IBM Research
- The quantum computing industry provides public access to superconducting qubit systems through open-source quantum computing frameworks such as Qiskit. Compilation techniques play a critical role in leveraging these small scale, noisy devices by driving down error rates in program execution. The compiler backend decomposes quantum operations into microwave pulses which aim to realize the desired quantum operations with the highest fidelity possible. We introduce OpenPulse, a pulse-level programming component of Qiskit, that hands over analog control of quantum computing systems to the user. Using OpenPulse, the user can specify the exact time dynamics of a program by scheduling arbitrary waveforms on control system resources, and can recover the time dynamics of the measured output. This is sufficient to allow the user to freely characterize, verify and validate the quantum system, and to explore gate optimization and error mitigation techniques to enhance system performance. OpenPulse enables the community to collectively push the field onwards towards practical computation.
- April 3, 2020
- Low-level pulse control
- Control electronics
- Default frequency settings
- Power user
- Quantum circuits translate to pulse schedules
- Pulse programming model
- Pulses — complex-valued amplitudes — phase and frequency
- Instructions and channels
- Control rack electronics
- DAC (Digital to Analog Converter)
- Diagram showing DAC and other major system components
- Arbitrary waveform generator (https://youtu.be/uBw2fo1rwr8?t=277)
- Play — passes user sample data to device
- Delay
- Sine wave generators
- Set frequency
- Shift phase
- Drive channel
- Signal channel
- Measure channel
- Qubit
- Readout resonator
- ADC
- Acquire — instruction for readout, to perform measurement
- Acquire channel
- Virtual channels
- Matching labels to drive lines
- Abstract away multiplexed readout
- Readout chain
- Three levels
- RAW: Level 0
- Raw data — direct output from ADC
- KERNELED: Level 1
- Kernel integrates raw data to create IQ points (IQ Plane — In-phase and Quadrature)
- DISCRIMINATED: Level 2
- Discriminator
- Classify — converts level 1 to qubit 1/0 state
- Ground state
- Excited state
- IQ scatter plots
Overall, no hint or mention of granularity of phase or probability amplitude or frequency. Or minimum signal, maximum signal, minimum discrimination of signal, or uncertainty of measurement of a signal.
More detailed talk on Qiskit Pulse:
- Qiskit Pulse: Programming Quantum Computers Through the Cloud with Pulses
- https://www.youtube.com/watch?v=V_as5PufUiU
- Centre for Quantum Technologies
- CQT Online Talks — Series: Quantum computation and simulation
- Speaker: Nicholas Bronn, IBM
- July 23, 2020
- Abstract: Qiskit is IBM’s open source framework for quantum computing, and Qiskit Pulse provides pulse-level control of deployed quantum computers over the cloud. In this talk, I will overview the control and measurement of superconducting transmon qubits with microwave pulses, programming such pulses with Qiskit Pulse, and the calibration and benchmarking of them. Finally, we’ll summarize some recent applications of Qiskit Pulse, including error mitigation, qubit measurement discrimination, and reduction of leakage outside the computational subspace.
- Pulse-level control
- Control of physical systems
- Hardware backend
- Characterize and mitigate errors
- Pulse is part of Terra. Based on OpenPulse
- Now Qiskit Pulse
- Qiskit Pulse arXiv paper: Qiskit Pulse: Programming Quantum Computers Through the Cloud with Pulses, https://arxiv.org/abs/2004.06755
- Optimal control
- Error mitigation
- Reducing errors through dynamical decoupling
- Anatomy of a superconducting qubit
- Anharmonic oscillator
- Microwave pulses
- Cross Resonance: ZX Operation
- Physical channel in the lab. Output of an AWG that goes into a fridge and is coupled to a qubit
- Acquire channel. Tells a digitizer to acquire a signal that’s coming back from a measurement signal
- Modulated waveform
- Circuit vs. pulse model
- Mixer skew
- Nonlinearity
- Leakage (from |1> to |2>)
- Mixed state
- Echo
- Rotary echo — reduce spectator errors
- Calibrating and benchmarking cross resonance (CR)
- Gaussian square pulse
- Process tomography
- Hamiltonian tomography
- Parametric pulses
- Pulse builder
- Error mitigation workflow. Especially for VQE. Perform Richardson Extrapolation.
- Error mitigation extends the computational reach of a noisy quantum processor: https://www.nature.com/articles/s41586-019-1040-7 and https://arxiv.org/abs/1805.04492
- IQ plane. IQ is an abbreviation for In-phase and Quadrature phase components of a signal.
- Accessing higher orders of the transmon — qutrits
How many qubits is reasonable to expect for a quantum Fourier transform?
Some possibilities:
- 10–12 qubits. A slam dunk. Maybe not today, but one to two or three years out.
- 14–16 qubits. Likely also a slam dunk.
- 18 qubits. Should be a slam dunk as well.
- 20–24 qubits. May be the practical limit.
- 28 qubits. May or may not be feasible.
- 30 qubits. Hopefully feasible.
- 32 qubits. Likely limit of feasibility.
- 36 qubits. Not currently considered feasible, but maybe someday.
- 40 qubits. Really beyond the realm of practicality, but never say never.
- 44 qubits. Nope. Don’t even think about it.
- 48 qubits. Nope. Ditto.
It is the conjecture of this paper that targeting more than 40 or so bits is well beyond the realm of practicality, a key limitation being the number of bits that the DAC (Digital to Analog Converter) hardware can handle.
10–12 qubits seems like a slam dunk, maybe 14–16 qubits
A quantum Fourier transform for ten to twelve qubits is certainly a slam dunk for two to three years from now, even if near-term quantum hardware still struggles with even a few qubits.
Fourteen to sixteen qubits may be a slam dunk as well.
16 to 18 qubits is likely a slam dunk for two to three years from now
Sixteen to eighteen qubits is likely a slam dunk for what a quantum Fourier transform should be able to handle two to three years from now.
This presumes some solid progress on quantum computer hardware and qubit fidelity, but that is expected.
20 or 24 qubits as a reasonable expectation for what we might have in two to three years
Twenty to twenty-four qubits is likely a reasonable expectation for what a quantum Fourier transform should be able to handle two to three years from now, but not an absolute slam dunk.
This presumes fairly aggressive progress on quantum computer hardware, but is within reach, even if not a slam dunk.
28 to 32 qubits seems out of reach, but who really knows
Based on current DAC hardware, it seems unlikely that a quantum Fourier transform for twenty eight to thirty two qubits would be feasible, even two to three years from now. But predicting that far in the future is always a somewhat risky proposition.
36 to 40 qubits is clearly beyond the realm of what we can expect, but it’s worth contemplating as a theoretical matter
There are no current or expected DACs that could handle more than 32 bits, but it’s always worth speculating about how hardware could be extended as hardware technology evolves.
Nobody should be expecting quantum Fourier transform for more than 40 qubits
If even 32 qubits is too much to expect for a quantum Fourier transform and DAC, then clearly 34, 36, 38, and 40 qubits and beyond need to be treated as beyond the realm of what’s reasonable to expect.
44 to 48 qubits are well beyond the realm of expectation for what a quantum Fourier transform could handle
If forty qubits is well beyond the realm of what’s reasonable to expect for a quantum Fourier transform, then clearly forty four to forty eight qubits and beyond are flat-out beyond consideration.
But 48 works well as a guidepost for what’s clearly beyond the realm of what can be supported.
The overall quantum computer may have 96 qubits, but a quantum Fourier transform would clearly be limited to transform no more than about twenty to twenty four qubits, using two registers for a total of about forty to forty eight qubits.
No, banding and approximate quantum Fourier transform (AQFT) won’t facilitate precision of quantum Fourier transforms
A large quantum Fourier transform can quickly result in very small values of phase, even zero or very close to zero. Banding (also known as approximate quantum Fourier transform (AQFT)) is a technique to skip over all of the zero values of phase and even most of the very small values of phase which are so close to zero as to have no significant effect on the computation of the quantum Fourier transform. Skipping a significant number of zeros or very small values is unlikely to change the net result significantly, so banding simply skips over them: any matrix entries outside of a relatively narrow band.
In essence, a banded QFT is a performance optimization, which dramatically reduces the number of computation steps, but results in an approximate result.
Approximate does mean faster, but usually with a concomitant loss of precision.
Banding may be sufficient for some algorithms and applications, but a careful analysis is required to assess whether the loss of precision is significant to the particular algorithm or application.
And to be clear, banding is not intended to improve or increase the precision of the result, but simply to optimize or improve performance.
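For reference, a banded or approximate QFT is directly expressible with Qiskit’s circuit library, assuming the QFT class and its approximation_degree parameter as documented; the degree drops the smallest-angle controlled-phase rotations, reducing gate count at the expense of precision.

```python
from qiskit.circuit.library import QFT

exact_qft = QFT(num_qubits=24)                            # all controlled-phase rotations
banded_qft = QFT(num_qubits=24, approximation_degree=8)   # drop the smallest rotation angles

# Fewer gates for the banded version, but a coarser (approximate) transform.
print(exact_qft.decompose().count_ops())
print(banded_qft.decompose().count_ops())
```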
For more detail on banding of quantum Fourier transform:
- Scaling laws for Shor’s algorithm with a banded quantum Fourier transform
- https://arxiv.org/abs/1302.5844
For more detail on approximate quantum Fourier transform (AQFT):
For more on the issues around banding and approximate quantum Fourier transform, see the later section Won’t banding or approximate quantum Fourier transform (AQFT) resolve the issue for Shor’s factoring algorithm? No, not really.
Demand transparency — transparency is mandatory
We need to push vendors to be fully transparent about what granularity they support for phase and probability amplitude. And how many qubits of precision they support for quantum Fourier transform.
We deserve transparency on all capabilities and limitations of quantum computers, algorithms, and applications.
In fact, the quantum computers, algorithms, and applications themselves deserve transparency.
It’s very easy for vendors to pay lip service to transparency, but walking the talk is more difficult — and more rare.
We should:
- Expect transparency. Transparency is a reasonable default expectation.
- Assume transparency. We shouldn’t have to lift a finger to get transparency.
- Wait for transparency. Maybe hold off on products and services which lack sufficient transparency.
- Request transparency. We shouldn’t have to ask for transparency, but… Don’t be shy.
- Insist on transparency. We shouldn’t have to go beyond asking politely and nicely, but… Don’t be afraid to be assertive.
- Demand transparency. Hold their feet to the fire. Transparency shouldn’t be optional. And lack of transparency shouldn’t be tolerated.
- Transparency is mandatory. No ifs, ands, or buts. No excuses. Just do it!
- Without transparency, you have nothing!
So the essential questions for all vendors of quantum computers and designers of qubit technology are:
- What is the granularity of phase and probability amplitude of your qubits? How many gradations?
- How many qubits of precision do you support for quantum Fourier transform?
Insist or demand that all limitations be clearly documented
Any and all limitations on the capabilities of quantum computers, algorithms, and applications should be clearly documented. We must insist on this. Even demand it.
If the granularity of phase and probability amplitude really is severely limited, this needs to be clearly documented. Vendors need to be fully transparent on such limits.
Transparency is needed everywhere
Transparency is needed everywhere. Anywhere that a product or technology is mentioned. This includes:
- Documentation.
- Specifications.
- Source code, including comments.
- Press releases.
- White papers.
- Roadmaps.
- Marketing literature.
- Advertisements.
- Formal papers.
- Training materials.
- Tutorials.
- Educational materials.
- Lecture notes.
- General discussion.
- Media coverage.
No transparency on internal electronics for digital and analog circuitry
One of the most egregious areas of lack of transparency is the internal design of the classical digital and analog electronics. That would include any digital to analog converters (DACs) or analog to digital converters (ADCs).
I honestly don’t expect transparency there since a lot of this circuitry is proprietary and intellectual property (IP).
Still, one can and should ask or at least wonder:
- What DAC chip is used?
- What is the precision of the DAC chip? How many bits of precision and how many discrete digital values and analog voltage levels are supported?
- How many discrete analog voltage levels are supported? How many discrete values can be applied directly to the quantum state of a qubit?
The latter two questions cut right to the heart of the problem of how many gradations of phase and probability amplitude are supported.
Transparency needed for number of gradations for phase and probability amplitude
Just to emphasize what was just mentioned in the preceding section, that the precision of the DAC chip, the number of voltage levels supported, determines how many gradations of phase and probability amplitude are supported.
In short, if the DAC has n-bit resolution, 2^n gradations are supported. Roughly. Approximately. As a maximum. The actual number of gradations might be smaller due to other factors involved in the digital to analog conversion process. But the number of gradations can’t be greater than the DAC resolution, by definition.
The number of gradations can’t be greater than the DAC resolution, by definition
Just to highlight this very key point from the preceding section:
- The number of gradations can’t be greater than the DAC resolution, by definition.
There may be other factors which further reduce the number of gradations, but the DAC provides an upper bound.
Related circuitry and noise may reduce the absolute resolution of the DAC
As alluded to in the preceding sections, there may be other factors which prevent the full rated resolution of the DAC from being usable, so that the number of gradations for phase and probability amplitude may be somewhat less than the full DAC resolution. I don’t have any specific knowledge or hard details, but some possibilities are:
- Additional analog logic circuitry which might introduce noise or distort the signal.
- Oddities of the conversion from continuous real values in the range of 0.0 to 1.0 (and negative values for phase) to a discrete digital value from 0 to 2^n minus one. There might not be exactly 2^n discrete values going into the DAC.
- Unknown software factors. Oddities of how unitary transformation matrices and complex numbers are processed.
- Precision of floating point numbers and arithmetic.
- Environmental interference and noise. Digital signals might be relatively immune to noise, but analog signals can be especially vulnerable to environmental interference, especially if a very high number of gradations are supported.
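One standard way classical engineers quantify this kind of degradation is the effective number of bits (ENOB) of a data converter, computed from its measured signal-to-noise-and-distortion ratio (SINAD) as ENOB = (SINAD - 1.76) / 6.02. The SINAD values in this sketch are hypothetical, purely to illustrate how noise erodes nominal resolution.

```python
def enob(sinad_db: float) -> float:
    # Standard data-converter rule of thumb: ENOB = (SINAD - 1.76 dB) / 6.02 dB per bit.
    return (sinad_db - 1.76) / 6.02

for sinad_db in (74.0, 98.0, 122.0):   # hypothetical SINAD measurements
    print(f"SINAD {sinad_db:5.1f} dB -> about {enob(sinad_db):4.1f} effective bits")
```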
An open source design for a quantum computer would be helpful
I don’t expect proprietary hardware vendors to respect transparency. But what would be helpful would be to have an open source design for a quantum computer.
I don’t expect this to happen any time soon, but ultimately I do feel that it is inevitable, sooner or later, even if later.
Maybe it might happen as a side effect of my proposal for Intel to focus on producing hardware components for others to design and build quantum computers of their own. See my paper:
- Call for Intel to Focus on Components for Others to Easily Build Their Own Quantum Computers
- https://jackkrupansky.medium.com/call-for-intel-to-focus-on-components-for-others-to-easily-build-their-own-quantum-computers-f8e8d6d0500f
Quantum advantage must be fully disclosed and adequately discussed
Quantum advantage is another area where everybody pays it lip service, but where there is very little if any actual transparency or disclosure.
Quantum advantage is almost never really discussed quantitatively.
Quantum algorithm designers and quantum application developers must be more clear and very explicit about what quantum advantage they expect to achieve or deliver.
What’s your quantum advantage?
The starting point would be the theoretical advantage. There must be a formula for the algorithmic complexity of the quantum algorithm or application. There must also be a comparison to the formula for algorithmic complexity of the best classical approach.
There should be explicit disclosure of the actual numeric advantage for realistic input data.
There should be a clear discussion of the current advantage in contrast to a projection for true production-scale input data.
Full disclosure and transparency is needed. As I write in my paper:
- What Is the Quantum Advantage of Your Quantum Algorithm?
- https://jackkrupansky.medium.com/what-is-the-quantum-advantage-of-your-quantum-algorithm-8f481aefe8d0
Technically a quantum computer itself doesn’t have a quantum advantage — any advantage is for a quantum algorithm or quantum application.
But a quantum computer enables quantum algorithms and quantum applications to achieve quantum advantage.
Generally, the number of qubits in a quantum Fourier transform corresponds to the quantum advantage. For example, a 20-qubit quantum Fourier transform enables a computational leverage (quantum advantage) of 2²⁰, or one million to one.
But the raw quantum advantage must be divided by the shot count to get net or final quantum advantage. The qubit fidelity of a quantum computer will play a major role in determining what shot count is needed — lower fidelity requires more shots while higher fidelity requires fewer shots.
A 20-qubit quantum Fourier transform is great, but a shot count of 25,000 would reduce its top line quantum advantage of 1,000,000 to one to a mere 40 to one, which is not exactly a notable advantage over classical approaches.
And if your top line advantage was 10,000 but 25,000 shots were required then you would actually be two and a half times slower than a classical solution.
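The arithmetic of those two examples, as a trivial sketch:

```python
def net_advantage(qft_qubits: int, shots: int) -> float:
    # Raw (top line) advantage 2**n for an n-qubit QFT, divided by the shot count.
    return (2 ** qft_qubits) / shots

print(net_advantage(20, 1))        # about 1,048,576 to one, the idealized top line
print(net_advantage(20, 25_000))   # about 42 to one, the "mere 40 to one" case
print(10_000 / 25_000)             # 0.4, i.e. two and a half times slower than classical
```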
For more on shot counts, see my paper:
- Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer
- https://jackkrupansky.medium.com/shots-and-circuit-repetitions-developing-the-expectation-value-for-results-from-a-quantum-computer-3d1f8eed398
What other approaches are there if quantum Fourier transform isn’t available?
What can we do if limited granularity of phase and probability amplitude precludes the effective use of quantum Fourier transform? Not a lot. We have:
- Grover’s search algorithm. But it offers only a quadratic speedup.
- Variational methods. Generally mediocre advantage, at best. Finicky and problematic in general, such as the barren plateau problem. No clear and guaranteed path to dramatic quantum advantage for all use cases.
- Quantum phase estimation (QPE). Oops… it relies on quantum Fourier transform.
- Quantum amplitude estimation (QAE). Oops… it relies on quantum Fourier transform.
- Quantum amplitude amplification. Oops… it relies on fine granularity of probability amplitude.
- Order finding. Oops… it relies on quantum Fourier transform.
- That’s about it. Besides ad hoc algorithms for specific scenarios.
Are there ANY known opportunities to address the root issues? Nope
Sorry, but we’re really in a bind here.
The hardware limits of digital to analog conversion just don’t present any opportunities to dramatically boost the granularity of phase and probability amplitude.
Sure, classical digital and analog hardware will advance every year, but there just isn’t a lot of runway left for more than relatively minor advances. No quantum leaps.
Isn’t there ANY pathway to fine granularity of phase and probability amplitude? Nope.
Sorry, but this is the ugly truth. There isn’t even a known theoretical path to fine granularity of phase and probability amplitude.
To make this super clear:
- There isn’t ANY pathway to beyond maybe a few million or possibly a billion gradations of phase and probability amplitude.
Isn’t there ANY pathway to quantum Fourier transform for over 50 qubits? Nope.
Sorry, but this is the ugly truth. There isn’t even a known theoretical path to quantum Fourier transform for over 50 qubits, or even close to 50 qubits.
To make this super clear:
- There isn’t ANY pathway to quantum Fourier transform for more than about 20 to 24 bits or maybe 32 bits at most. Transforming 50 or more qubits is out of the question.
What about Shor’s factoring algorithm? Sorry, but it will only work for small numbers, not very large numbers
Shor’s factoring algorithm works for factoring semiprime numbers, but only up to the limits of quantum Fourier transform discussed previously in this paper. And given the nature of how the algorithm works, the semiprime to be factored can have only half as many bits as the quantum Fourier transform can handle, because the algorithm requires squaring the input number, so factoring a 4096-bit public encryption key requires input and output registers each with 8192 qubits.
Even if a quantum Fourier transform can handle 24 qubits, the largest semiprime that could be factored can have no more than half that number of bits, 12 bits, which means a number between 15 and 4095. The largest semiprime less than 4096 is 4087, the product of the prime numbers 61 and 67.
People have this expectation of factoring large public encryption keys with 2048 or 4096 bits. That is clearly out of the question. Not just now. Not just five years from now. Not just ten years from now. Not ever. Not with current physics of the current universe. At least for Shor’s algorithm or anything similar.
Shor’s factoring algorithm can work for relatively small numbers — fewer than 32 bits, but not for hundreds or thousands of bits. Put simply, an exponential divisor of 2⁴⁰⁹⁶ or even 2²⁰⁴⁸ absolutely won’t work for a real DAC and real analog signals and real-world physics — they can’t physically be that fine-grained.
To make this super clear:
- Without very fine granularity of phase, Shor’s factoring algorithm will NOT work for very large semiprimes such as public encryption keys with 2048 or 4096 bits.
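To put numbers on that point, here is a small sketch following this paper’s convention that the quantum Fourier transform register is twice the width of the semiprime being factored; the finest phase increment it would require, 2 pi / 2^(2n), underflows to zero even in double precision long before reaching 2048-bit or 4096-bit keys, never mind any physical DAC.

```python
import math

for key_bits in (12, 32, 2048, 4096):
    qft_qubits = 2 * key_bits                            # each register is 2n qubits wide
    finest_phase = math.ldexp(2 * math.pi, -qft_qubits)  # 2*pi / 2**(2n)
    print(f"{key_bits}-bit semiprime -> {qft_qubits}-qubit QFT, finest phase {finest_phase}")
```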
When I was viewing one of the recent videos for IBM Qiskit Pulse in preparation for this paper, somebody asked about Shor’s factoring algorithm and the IBM PhD presenter laughed and noted that 21 was the largest number that anybody had been able to factor using Shor’s algorithm. It just illustrates how big a challenge Shor’s algorithm really is and how far quantum computing has to go. And granularity of phase and support for quantum Fourier transform is a big part of that. In short, factoring small numbers is a possibility, but factoring very large numbers is out of the question.
For more background on Shor’s algorithm, consult Shor’s original paper on factoring semiprime integers, which relies on quantum Fourier transform, detailed in Section 4:
- Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
- https://arxiv.org/abs/quant-ph/9508027
For more extensive background on Shor’s algorithm, consult my paper, which in turn references other papers of mine on Shor’s algorithm as well as other resources on Shor’s algorithm:
- References for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer
- https://jackkrupansky.medium.com/references-for-shors-algorithm-for-cracking-strong-encryption-using-a-quantum-computer-6c87edec0d9e
Won’t banding or approximate quantum Fourier transform (AQFT) resolve the issue for Shor’s factoring algorithm? No, not really
Banding (also known as approximate quantum Fourier transform (AQFT)) is a technique to eliminate the smallest terms in a quantum Fourier transform. Its purpose is to simplify the calculation of the quantum Fourier transform, to reduce the number of gates needed to perform the calculation. It does this by giving up some of the precision of the result of the transform. In short, it trades precision for performance.
That’s fine for transforms producing values intended to represent physical quantities, where only a handful of digits of precision are needed (three to five digits plus a magnitude). But the result of the transform in Shor’s factoring algorithm is a ratio, not a physical quantity. It’s not clear whether Shor’s algorithm would work if any of the bits of precision were lost or discarded, let alone a lot of the bits, which is what happens with banding or approximate quantum Fourier transform (AQFT).
Shor’s paper does acknowledge the problem with very tiny terms in the transform, in the last paragraph of section 4, on page 15:
- When k − j is large in the gate Sj,k in (4.3), we are multiplying by a very small phase factor. This would be very difficult to do accurately physically, and thus it would be somewhat disturbing if this were necessary for quantum computation.
- Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
- https://arxiv.org/abs/quant-ph/9508027
In fact, I assert, based on what I’ve noted previously in this informal paper, that it would be physically impossible to do this, not merely very difficult to do accurately as Shor asserts.
Shor sidesteps this impossibility by claiming:
- Luckily, Coppersmith [1994] has shown that one can define an approximate Fourier transform that ignores these tiny phase factors, but which approximates the Fourier transform closely enough that it can also be used for factoring.
Coppersmith’s paper on approximate quantum Fourier transform (AQFT):
There are some basic issues here:
- Coppersmith informs us: “This leads us to define an Approximate Fourier Transform (AFFT), parameterized by an integer m”, where m is essentially the number of bits to preserve, discarding or skipping the remainder, but Shor doesn’t speak to how his algorithm would set m.
- Shor devotes only a single paragraph to this entire issue of tiny phase factors.
- It seems as if this was more of an afterthought for Shor than a central part of the algorithm. In fact, Coppersmith’s paper actually cites an earlier paper by Shor on his algorithm — which I don’t have access to — which presumably couldn’t have referred to Coppersmith’s paper, suggesting that this paragraph was indeed added later than Shor’s original algorithm.
- Shor doesn’t in fact actually incorporate Coppersmith’s approach into his own algorithm. All the reader has is this one paragraph of text, and Coppersmith’s algorithm standing by itself in a separate paper. And Coppersmith’s algorithm was based on Shor’s earlier paper, so it’s difficult to say how cleanly Coppersmith’s algorithm could be implemented in the context of Shor’s algorithm in his newer paper.
- Neither Coppersmith nor Shor actually gets around to dealing with exactly how small the terms can be. Both approaches are incomplete in that sense.
- Neither Coppersmith nor Shor addresses the issue of how many bits of precision are available for the quantum circuit designer when specifying unitary transformation matrices. How small the matrix entries can actually be, and how many bits wide a single matrix entry for a gate can be. For example, is even double-precision floating point sufficient? Vendors of current quantum computing hardware don’t clearly document these details.
- At a minimum, Shor should have specified how Coppersmith’s input parameter m should be calculated. What math or formula should be used? Should it be based on the size (in bits) of the input semiprime, or the width of the quantum Fourier transform, which is twice as wide as the input semiprime (8192 bits for a 4096-bit input semiprime public key)? Both Coppersmith and Shor are silent on this critical factor.
- What mathematical proof, theorem, or principle can be used to validate the formula needed by the previous issue — calculation of m based on the size of the input semiprime?
- Shor doesn’t even mention Coppersmith’s m input parameter.
- Further, Shor failed to offer a proof, theorem, or principle that any bits could be dropped and still return a valid result, let alone how many bits could be dropped before the result would be invalid.
- As noted above, Coppersmith’s approach is fine for transforms producing values that represent physical quantities, where only a handful of digits of precision are needed. But the result of the transform in Shor’s factoring algorithm is a ratio, not a physical quantity, and it’s not clear whether the algorithm would still work if any of the bits of precision were lost or discarded, let alone the large number of bits discarded by banding or approximate quantum Fourier transform (AQFT).
- Shor simply doesn’t adequately address the question of how approximate the transform can be and still produce a valid result.
- Neither Coppersmith nor Shor provides a proof, theorem, or principle to justify what error would be acceptable for such an approximate quantum Fourier transform. The transform result must still yield the ratio needed to calculate the order of the modular exponentiation accurately enough to continue the process of deducing the factors of the input semiprime. No hand-waving, please. In fact, is any error acceptable if the ratio needed to calculate the order is to be accurate enough? Show your math, as they say. Coppersmith has some math for calculating or estimating the error, but is silent on what the criteria are for an acceptable error.
- And finally, even if some bits can be dropped, it remains an open question whether real quantum hardware could properly execute gates with the remaining phase shifts.
- Neither Shor nor Coppersmith has parameterized their algorithms with the phase granularity that is actually supported by the hardware. This speaks to how many bits would have to be dropped before the hardware can actually correctly perform the requested computation, separate from whether that result is a valid result for the factoring algorithm itself.
- If the input were 4096 bits, the quantum Fourier transform would nominally be for 8192 bits, so even if m, the count of bits to preserve, were relatively small to be feasible on real hardware, like 50, a huge number of bits would be discarded in the transform. It’s just not credible that this would produce a valid result for Shor’s algorithm.
- Even if Coppersmith and Shor did manage to have a solution which mathematically works for deriving a valid transform result which theoretically can successfully be used to derive the factors of a semiprime, there is still the issue that the unitary transformation matrix and hardware (digital, DAC, analog, and quantum) might still be unable to implement that mathematics. Banding and Coppersmith’s AQFT approach can reduce the magnitude of the problem, but they are still unlikely to overcome the precision limitations for DAC hardware.
- Both Shor and Coppersmith seem more focused on reducing the amount of computation rather than the precision or phase granularity issue per se. In fact, Shor’s final words on this issue are: “In fact, this technique reduces the number of quantum gates needed for the (approximate) Fourier transform considerably, as it leaves out most of the gates Sj,k.” Leaves out most of the gates? How could that possibly produce a valid result for Shor’s algorithm? It’s just not credible.
- Overall, this issue is one of the weakest parts of Shor’s algorithm. Ultimately, it could be its fatal Achilles’ heel.
Some of these issues could be addressed, but unless all of them are addressed, Shor’s factoring algorithm is not a guaranteed success for large semiprimes. From what I currently know and can extrapolate, there’s no credible chance that all of these issues can be adequately addressed.
That’s as deep as I’ve gotten, but even so far I am left with no confidence that this issue of lack of fine granularity for phase has been adequately addressed.
In short, I am unable to definitively prove that Shor’s algorithm will fail for factoring of very large semiprimes — 1024, 2048, or 4096 bits — but I am also unable to identify either a proof that it can do so or good reason to believe that it can. A belief that it can do so is a very risky and dubious proposition.
What about derivative algorithms of Shor’s factoring algorithm? Not if they still rely on quantum Fourier transform
There are indeed a number of derivative algorithms and implementations of Shor’s factoring algorithm, many of them listed in my reference paper cited in the preceding section. I haven’t closely examined all of them, but to the extent that they still rely on quantum Fourier transform or on fine granularity of phase or probability amplitude, they will be in the same boat, limited to factoring semiprime integers no larger than about ten to sixteen bits, far short of the 2048 and 4096-bit public encryption keys that people have expected them to be able to factor.
To make this super clear:
- Without very fine granularity of phase, any semiprime factoring algorithm which relies on quantum Fourier transform (QFT) will NOT work for very large semiprimes such as public encryption keys with 2048 or 4096 bits.
Maybe someday someone will invent a new factoring algorithm which can actually work on a real quantum computer for 4096-bit public encryption keys, but it won’t be able to use quantum Fourier transform or rely on fine granularity of phase or probability amplitude.
Large-scale physics simulations may still be feasible even without support for quantum Fourier transform
All may not be totally lost without fine granularity of phase and probability amplitude. Some large-scale physics simulations may actually be possible even without the need for quantum Fourier transform or quantum phase estimation or fine granularity of phase or probability amplitude.
In some cases, physics simulations may be able to get by using only raw qubits with raw nearest neighbor connectivity. It may be possible to even achieve dramatic quantum advantage over physics simulations based on classical computing.
I have seen a few references to such simulations but I haven’t kept track of them to cite them here.
But to be clear, these are not computations where a specific numeric result is expected — approximations are the best that can be achieved. In those cases where deterministic correct results are required, techniques such as quantum phase estimation are needed and hence limited by the granularity of phase and probability amplitude.
Computer science experiments can also achieve significant quantum advantage
A variety of computer science experiments can still manage to achieve a significant quantum advantage — provided that they don’t rely on fine granularity of phase or probability amplitude, and don’t rely on quantum Fourier transform (QFT) or quantum phase estimation (QPE).
But, none of them are in any way representative of commercial, business, or industrial applications of quantum computing.
The key is exploiting nearest-neighbor connectivity (quantum entanglement).
Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) can achieve dramatic quantum advantage
Google’s quantum supremacy experiment or cross-entropy benchmarking (XEB) is an example of such a computer science experiment, both in terms of achieving dramatic quantum advantage and not being representative of practical real-world commercial, business, or industrial applications.
And achieving this result did not rely on fine granularity of phase or probability amplitude, or on quantum Fourier transform (QFT).
The key is exploiting nearest-neighbor connectivity (quantum entanglement).
For more on Google’s quantum supremacy experiment, see my paper:
- What Should We Make of Google’s Claim of Quantum Supremacy?
- https://jackkrupansky.medium.com/what-should-we-make-of-googles-claim-of-quantum-supremacy-630c76fffac7
What about quantum computational chemistry using quantum phase estimation (QPE)? Maybe, sometimes, it depends
Early on in quantum computing, quantum computational chemistry was quickly identified as a great candidate application. Quantum phase estimation (QPE) was quickly identified as an ideal computational approach, built on the foundation of quantum Fourier transform (QFT). This is still a widely held belief, but not considered practical on near-term quantum computers.
The limited capabilities of near-term quantum computers prompted a shift in emphasis from quantum phase estimation to variational methods. The net impact is a loss of precision, some problematic use cases (e.g., the barren plateau problem), and a loss of quantum advantage over classical computing, but at least variational methods do work on near-term quantum computers.
As the capabilities of quantum computers expand, the hope and expectation is that eventually quantum Fourier transform and hence quantum phase estimation will become practical.
I believe that this will indeed become the case, but limited by the granularity of phase and probability amplitude as discussed in this paper.
Some quantum computational chemistry computations using quantum phase estimation (QPE) may still be possible. It will vary based on how many bits of precision are needed (a rough numeric sketch follows this list):
- 10 to 12 bits. A slam dunk.
- 13 to 14 bits. Should also be a slam dunk.
- 15 to 17 bits. Should be fine.
- 18 to 20 bits. Less of a slam dunk.
- 21 to 24 bits. May be fine as well, but certainly not a slam dunk.
- 25 to 31 bits. Quickly becoming problematic, but the breakpoint is not clear.
- 32 bits. Likely to be problematic.
- Beyond 32 bits. Likely out of reach.
- 48 bits and beyond. Clearly out of reach.
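As a rough illustration of what these precision levels imply, the sketch below (my own, with a purely hypothetical DAC resolution as a placeholder, not a vendor specification) tabulates the number of phase gradations and the smallest rotation angle each bit count demands:

```python
import math

ASSUMED_DAC_BITS = 20   # placeholder assumption for illustration only

print(f"{'bits':>4}  {'gradations':>18}  {'smallest angle (rad)':>20}  within assumed DAC?")
for bits in (10, 12, 14, 16, 20, 24, 28, 32, 48):
    gradations = 2 ** bits                 # distinct phase values the computation relies on
    smallest = 2 * math.pi / gradations    # finest rotation that must be realized
    within = "yes" if bits <= ASSUMED_DAC_BITS else "no"
    print(f"{bits:>4}  {gradations:>18,}  {smallest:>20.3e}  {within}")
```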
I do recall somebody suggesting that they needed 110 and 125 qubits for some interesting quantum computational chemistry problems, but I don’t recall whether that was total qubits or the precision of quantum phase estimation. Either way, that would be beyond what appears to be the limit of granularity of phase and probability amplitude, which would mean such quantum computations would not be practical.
Still open questions as to the quality of the final quantum Fourier transform result even if a certain number of bits is supported
There are still open questions as to exactly what quality the final quantum Fourier transform result will have even if a certain number of bits is supported. 66%? 70%? 75%? 80%? 90%? 95%? Any better than 95%?
Ultimately the transform result is an approximation. Its accuracy is indeterminate. It will have some error rate.
Each individual qubit and gate will have its own error rate, but the overall quantum Fourier transform will have some aggregate error rate.
In theory, a smaller quantum Fourier transform will have a lower error rate, although once the transform is too small, it may not be sufficient to calculate a meaningful result.
Larger quantum Fourier transforms can deliver more precise results, but only until the transform gets too big, at which point accumulated qubit and gate errors cause the quality of the final result to fall.
The task for the quantum algorithm designer is to use enough qubits to generate a meaningful and accurate result, but not so many as to generate erroneous results due to aggregation of the qubit and gate error rates.
Larger quantum Fourier transforms can be used for computations which can accept higher error rates and less accurate results. But other computations might have more stringent requirements for accuracy and error rates, so that only smaller quantum Fourier transforms will succeed.
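As a crude way to see the shape of this tradeoff, here is a back-of-the-envelope sketch of my own that assumes independent gate errors (and ignores decoherence, crosstalk, and measurement error) and simply multiplies per-gate fidelities over the gate count of a textbook n-qubit quantum Fourier transform:

```python
def qft_gate_count(n):
    # Textbook n-qubit QFT: n Hadamards plus n*(n-1)/2 controlled-phase gates
    # (final swaps ignored here).
    return n + n * (n - 1) // 2

def estimated_qft_fidelity(n, gate_fidelity):
    # Crude estimate: independent gate errors, so fidelities simply multiply.
    return gate_fidelity ** qft_gate_count(n)

for n in (10, 16, 20, 24, 32):
    for f in (0.999, 0.9999):   # roughly "three nines" and "four nines" gate fidelity
        print(f"n={n:2d}, gate fidelity={f}: "
              f"estimated aggregate QFT fidelity ~{estimated_qft_fidelity(n, f):.3f}")
```

Under these simplistic assumptions, even four nines of gate fidelity leaves a 32-qubit transform with a noticeable aggregate error, consistent with the concern that larger transforms eventually lose more to gate errors than they gain in precision.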
Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude
Quantum algorithms need to be dynamically and automatically scalable so that they can deal with varying amounts of input and input parameters without requiring a complete redesign every time the input or an input parameter changes. But this requires clarity and predictability about the behavior of the quantum computer as input data and input parameters are varied.
Variations in the reliance on fine granularity of phase and probability amplitude can have uncertain and unpredictable impact on the results of quantum computations.
In short, quantum algorithm designers need to have clarity and predictability in the behavior of fine granularity of phase and probability amplitude.
If there are limits, quantum algorithm designers need to know them so that they can work around them, or alert the user when a quantum computation simply isn’t possible or may require an alternative approach.
For more detail on scaling of quantum algorithms, see my paper:
- Staged Model for Scaling of Quantum Algorithms
- https://jackkrupansky.medium.com/staged-model-for-scaling-of-quantum-algorithms-d1070056907f
How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? Unknown!
I’m not yet an expert in quantum error correction (QEC), but I haven’t seen any discussion of error detection or correction for phase or probability amplitude, which are continuous values, not discrete binary 0 and 1. It’s not clear that QEC schemes deal with these continuous values at all.
That doesn’t mean it’s not there, just that I haven’t personally seen it.
So I’ve definitely not seen any treatment of error detection and correction for very fine granularity of phase and probability amplitude.
Given that error correction generally relies on redundancy of some form, I’m not sure how redundancy would or could be engineered if phase or probability amplitude required millions, billions, trillions, quadrillions, or more gradations of value.
I’m definitely interested in what experts in this niche of quantum computing have to say about this aspect of the problem.
For more on my efforts to understand quantum error correction, see my paper:
- Preliminary Thoughts on Fault-Tolerant Quantum Computing, Quantum Error Correction, and Logical Qubits
- https://jackkrupansky.medium.com/preliminary-thoughts-on-fault-tolerant-quantum-computing-quantum-error-correction-and-logical-1f9e3f122e71
Quantum amplitude amplification has a similar issue to phase granularity
This paper takes pains to keep clear that phase and probability amplitude are roughly in the same boat with regard to the issue of fine granularity. This issue shows up in quantum amplitude estimation (QAE) the same as it does in quantum phase estimation (QPE). It also shows up in quantum amplitude amplification, which ultimately depends to some extent on fine granularity of probability amplitude, even if not to the same extreme extent as it does with quantum Fourier transform.
This also relates to Grover’s search algorithm, which essentially uses the same approach as quantum amplitude amplification.
In short, quantum amplitude amplification has this same granularity issue, which will be problematic for larger numbers of qubits even if it isn’t problematic for a small number of qubits. Two dozen qubits or so may be the approximate limit of practicality. Anything beyond two or three dozen qubits, let alone hundreds or thousands, is going to be out of the question.
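To illustrate why, here is a small sketch of my own showing how finely the amplitude rotation must be controlled in Grover-style amplitude amplification: each iteration rotates the state by 2·arcsin(1/√N) toward the marked item, an angle that shrinks rapidly as the number of qubits grows. Whether real control hardware must literally resolve angles this fine is exactly the granularity question this paper raises.

```python
import math

def grover_parameters(n_qubits):
    """Per-iteration rotation angle and iteration count for Grover search
    over N = 2**n_qubits items with a single marked item."""
    N = 2 ** n_qubits
    theta = 2 * math.asin(1 / math.sqrt(N))        # rotation toward the marked state per iteration
    iterations = math.floor((math.pi / 4) * math.sqrt(N))
    return theta, iterations

for n in (10, 20, 24, 30, 40):
    theta, r = grover_parameters(n)
    print(f"{n:2d} qubits: ~{theta:.3e} rad per iteration, ~{r:,} iterations")
```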
Simulators need to be configured to reflect granularity of phase and probability amplitude
Classical quantum simulators need to be configured to match target real quantum computers as closely as possible so that simulation results are a roughly accurate predictor of the results on the target real quantum computer. This includes the granularity of phase and probability amplitude.
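As a minimal sketch of what such a configuration knob might look like (my own illustration; I am not aware of existing simulators exposing exactly this), every requested rotation angle could be snapped to the nearest representable gradation before the gate is applied:

```python
import math

def quantize_phase(theta, granularity_bits):
    """Snap an angle (radians) to the nearest of 2**granularity_bits gradations
    spanning the full circle, emulating a device that cannot realize
    arbitrarily fine rotations."""
    step = 2 * math.pi / 2 ** granularity_bits
    return round(theta / step) * step

requested = 2 * math.pi / 2 ** 30   # a rotation that a 30-qubit QFT would request
for bits in (16, 20, 24, 32):
    applied = quantize_phase(requested, bits)
    print(f"{bits}-bit granularity: requested {requested:.3e} rad, applied {applied:.3e} rad")
```

With 16, 20, or even 24 bits of simulated granularity, this particular rotation snaps all the way to zero; only at 32 bits does it survive. That is precisely the failure mode a faithful simulator should expose rather than hide.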
No hands-on testing or empirical data
My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.
That said, I would not encourage anyone to approach this issue purely or primarily based on hands-on trial and error testing. We need to get this issue right from the theory, architecture, design, technical specifications, and documentation perspectives first and foremost before any actual testing.
Need for benchmarking
Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude.
And then benchmarking of quantum Fourier transform (QFT), quantum phase estimation (QPE), and quantum amplitude estimation (QAE) as well.
Benchmarking needs to be a very rigorous and formalized process, with carefully constructed and evaluated tests, thorough evaluation of the test results, and careful and formal presentation of the results.
Then these benchmarking results can be incorporated into the label of capabilities for quantum computers that I recently proposed:
- Proposal for a Quantum Capabilities Label for Quantum Computers, Algorithms, and Applications
- https://jackkrupansky.medium.com/proposal-for-a-quantum-capabilities-label-for-quantum-computers-algorithms-and-applications-430d3b94c4ca
Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications
I recently wrote a proposal for a label which summarizes the capabilities or requirements for a quantum computer, algorithm, and application. Granularity for phase and probability amplitude are indicated in that label as follows:
- Fine granularity of phase and probability amplitude. Number of gradations. Generally approximated as a power of ten or a power of two — 100, 1,000, one million, a billion, or 2¹⁰, 2²⁰, 2³⁰. Generally needed for quantum Fourier transform (QFT) precision.
Since the key motivating factor for fine granularity of phase is the need to support quantum Fourier transform (QFT), it makes sense that the count of qubits supported for quantum Fourier transform (QFT) should also be included in the label, as follows:
- Quantum Fourier transform (QFT) precision. How many qubits can be transformed and achieve high quality results.
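Purely as an illustration of how these two entries might be recorded in machine-readable form (the field names below are hypothetical, my own, and not taken from the label proposal), something like:

```python
# Hypothetical, illustrative field names; not the actual schema of the label proposal.
capabilities_label_excerpt = {
    "phase_amplitude_granularity": {
        "gradations": 2 ** 20,       # e.g. roughly one million gradations
        "stated_as": "2**20",
    },
    "qft_precision": {
        "qubits": 20,                # widest QFT expected to yield high-quality results
    },
}

print(capabilities_label_excerpt)
```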
For more details of the label, see my recent paper:
- Proposal for a Quantum Capabilities Label for Quantum Computers, Algorithms, and Applications
- https://jackkrupansky.medium.com/proposal-for-a-quantum-capabilities-label-for-quantum-computers-algorithms-and-applications-430d3b94c4ca
Clarion call: All silos on deck!
Time’s up! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z, and to ensure that the issue gets resolved, fully or partially, and is properly documented as a limitation if it can’t be fully resolved.
Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away
To be clear, limitations on the granularity of phase and probability amplitude are not simply a near-term limitation which will be with us for just a couple of years and then magically vanish as technology advances over the next five to ten years. This is a limitation based on the physical limits of building physical devices.
Sure, specific current limits can likely be expected to improve as each year goes by, but not by much and not for long.
Maybe we can get past 32 qubits or even make it to 40 or 44 qubits, but we absolutely won’t make it past 48 qubits.
No real electronic device will be able to discriminate trillions or even billions of voltage levels in a single qubit. Ditto for states (e.g. squeezed states) of a single photon.
48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing
Too many people and quantum computer vendors are focusing on and obsessing over hundreds, thousands, millions, and even billions of qubits. In my view that is a silly waste of effort and resources, and a distraction, if qubit quality and connectivity are not sufficient to effectively utilize those qubits in practical quantum algorithms.
If you accept the basic conjecture of this paper, 20 qubits is a reasonable expectation of what might be achievable over the next two to three years for a quantum Fourier transform. Two 20-qubit registers is 40 qubits; add a handful of ancillary qubits and you get to roughly 48 qubits.
Or maybe two 22-qubit registers could be achieved, boosting the computational leverage by a factor of four, from 1,000,000 to 4,000,000. But that’s not a slam dunk.
So a 48-qubit quantum computer might indeed be a much more optimal goal to pursue over the next two years. Something that is achievable. Something that is practical. And something that offers some interesting level of computational advantage over a classical computer.
The downside is that this may be at or near the limit of what can be achieved with a quantum computer. Maybe a few more bits of granularity of phase can be achieved eventually, or maybe not. But at least it would be something that actually works and has some advantage and some benefit.
Key points:
- Priority is quality over quantity.
- Full connectivity. But that’s a big leap from where we are today.
- 3.25 to 4 nines of qubit fidelity.
- Likely will still require some degree of error mitigation.
- 20 bits of granularity for phase. One million gradations. A 20-bit DAC.
- Support a 20-bit quantum Fourier transform.
- Computational leverage of one million to one — 2²⁰.
- Of course we want to get way beyond this ASAP, but we’re not even close to this yet.
- A solid goal for two to three years.
- A credible and palpable goal to aim at.
- Doesn’t rely on the distant fantasy of full quantum error correction.
- Should be able to stretch classical simulation to 48 qubits as well, which will enable debugging of 48-qubit quantum algorithms.
- Maybe a few more bits of granularity could be achieved in subsequent years, or not. But focus on a goal that seems achievable and then build on it as opportunities arise in future years.
But start with an upgraded 27-qubit quantum computer
A quantum computer with 48 fully-connected near-perfect qubits is the clear goal to have, but even it may be a bit too ambitious. It may be better to start with an upgrade to current 27-qubit designs.
Get the qubit fidelity and connectivity up to the level where fine granularity of phase is the gating factor for a ten to twelve-qubit quantum Fourier transform.
This would provide only a 1K X to 4K X quantum advantage, but would prove the concepts and be a decent stepping stone towards the 48-qubit version.
Maybe even a 36-qubit stepping stone
Even once a 27-qubit design has been upgraded and proven to support a ten to twelve-qubit quantum Fourier transform, 48 qubits may still seem too ambitious, and the underlying technology just not quite ready. Maybe 36 fully-connected near-perfect qubits would be a reasonable additional stepping stone on the way to 48 qubits.
36 qubits would enable fifteen or sixteen-qubit quantum Fourier transform, upping the quantum advantage to 32K X or 64K X, which is fairly respectable relative to current toy-like algorithms or even 1K X to 4K X for an upgraded 27-qubit quantum computer.
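To make the arithmetic behind these stepping stones explicit, here is a small sketch of my own; the QFT widths are the ones discussed above, and the ancilla count is a rough placeholder assumption rather than a worked-out requirement:

```python
ASSUMED_ANCILLAS = 8   # placeholder assumption for ancillary qubits

for qft_width in (10, 12, 15, 16, 20, 22):
    leverage = 2 ** qft_width                          # computational leverage of a QFT this wide
    total_qubits = 2 * qft_width + ASSUMED_ANCILLAS    # two registers plus ancillas, very roughly
    print(f"{qft_width:2d}-qubit QFT: leverage ~{leverage:>9,} X, "
          f"roughly {total_qubits} qubits in total")
```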
We’re stuck in quantum toyland with no prospect of escape
I hate to have to say this, but the prognosis is rather gloomy, at least relative to past extravagant promises for quantum computing. Smaller, toy-like quantum algorithms and applications are within the realm of possibility, but much beyond toy-like is beyond the realm of reasonable possibility.
It sure does look like we are indeed stuck in quantum toyland, with no prospect of escape.
But maybe what seems toy-like to some may actually be great for others
Toy-like is relative.
As a variation of the old adage — one person’s trash is another person’s treasure, one person’s toy can be another person’s treasure.
To some, a 2X, 4X, 10X, or even 100X quantum advantage may not seem worth the trouble, but others might find that both acceptable and quite advantageous.
So even if the conjecture of this paper is true and quantum advantage is severely limited to 1,000X to 1,000,000X rather than one quadrillion X and beyond, that might still be considered awesome by some even while others consider it a profound disappointment.
Once again, my conjecture is still only tentative and may yet be proven wrong, but I’m not holding my breath
Just to reiterate that I am indeed reasonably convinced that my conjecture is true. And that I concede that there is some non-zero chance that I’m wrong, so I don’t consider my conjecture to be absolutely final or definitive, yet, but I’m not holding my breath. Okay, maybe I am.
Nonetheless, I’m more afraid that my conjecture is true than that it is false.
I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve the issue with something approaching absolute 100% definitive certainty.
Never say never
Even if my conjecture is true for existing approaches to quantum computing, I don’t want to give up hope that some alternative approach to quantum computing or some more powerful algorithmic building block can be found.
Never say never!
But I do have to acknowledge where we are today and seem likely to be headed tomorrow and in the years ahead.
Now what? What’s next?
We may be screwed. We’ll just have to wait and see.
Generally, we simply need to lower our expectations for many applications, to fractional quantum advantage.
And focus on more research to potentially discover alternative approaches.
Focus on more research to potentially discover alternative approaches
There’s no guarantee that research will discover some magical solution, but… you never know.
And generally research is a good thing, no matter what.
A golden opportunity for some fresh academic research
I wouldn’t ask commercial vendors of quantum computers to invest significant capital in research in this area, but this is a great opportunity for academic researchers to do some high-value, deeper, fundamental research, especially on alternative architectures, approaches to algorithms, and programming models, and on doing a better job of characterizing and benchmarking quantum computers.
Sure, it would be nice if commercial vendors such as IBM, Intel, Microsoft, Google, and Amazon could chip in a little money as well, but the primary value they can add is simply to endorse the need for the traditional deep-pocket funders of research, including national governments, to prioritize this type of research.
My original proposal for this topic
For reference, here is the original proposal I had for this topic. It may have some value for some people wanting a more concise summary of this paper.
- Is lack of fine granularity of phase and probability amplitude the fatal Achilles heel which dooms quantum computing to severely limited utility? Continuous values, but with unclear precision or sense of determinism. An intense need for clarity; too vague and undocumented at the moment. It would seem to limit quantum Fourier transform (QFT), which would limit quantum phase estimation (QPE), which would limit quantum computational chemistry and Shor’s factoring algorithm. And limit quantum amplitude estimation as well. And limit order finding (which is needed for Shor’s factoring algorithm) as well.
Summary and conclusions
- The focus of this paper is solely on gate-based quantum computers. No claim is made as to whether the conjecture and discussion of this paper may or may not apply to non-gate-based quantum computers such as annealing systems (D-Wave), photonic quantum computers (Xanadu), or specialized simulation systems.
- The focus of this paper is solely on two-level quantum systems (qubits). Whether the impacts on non-two-level systems (e.g., qutrits and qudits) are identical or similar is unknown.
- Granularity and gradations of continuous values. Any range of continuous values, such as a phase angle or probability amplitude, theoretically contains an infinity of discrete values, but in practice there will only be a finite number of discrete values or gradations in any range. The range has a granularity — the minimum numeric spacing between any two gradations or discrete values. A range might be fine-grained or coarse-grained — fine granularity with many gradations, or coarse granularity with few gradations.
- My conjecture: Lack of fine granularity of phase and probability amplitude severely limits the utility of quantum computing. Severely limits quantum advantage.
- I can’t absolutely prove this, but I am highly confident.
- I offer it as a conjecture, but I also assert it as a fact.
- I sincerely wish it were false, but I am very afraid that it is true.
- Dramatic quantum advantage — delivering on the full promise of quantum computing. The goal, but in many or even most cases we won’t achieve it.
- Fractional quantum advantage — an advantage, but not delivering on the full promise of quantum computing. In many or most cases it will be the best we can do.
- Exponential speedup — the great promise of quantum computing, but can it really be delivered? Only to a limited degree, not as limitless as promised.
- The essential root of the problem is that quantum information is a hybrid: discrete digital 0 and 1, which can be placed in superposition and entangled, plus phase and probability amplitude, which are continuous values, at least in theory. But an infinity of continuous values cannot be practically implemented on a real physical device.
- The second root of the problem is that quantum Fourier transform has a critical dependence on dividing by 2ⁿ, which is a very large number for even moderate n (40 to 50 qubits).
- The combination of these two issues creates a real nightmare that cannot be resolved in any practical manner. Something has to give.
- The resolution in bits of the DAC (digital to analog converter) hardware is likely the key limiting factor.
- The DAC limits the number of discrete analog voltage levels that can be applied to rotate the quantum state of a qubit.
- And the number of discrete analog voltage levels is likely also fairly limited. Not billions, trillions, quadrillions, and a lot more, as quantum computing would seem to require.
- Analog signals likely also severely limit the granularity of phase and probability amplitude.
- Additional analog circuitry after the DAC may also limit the granularity of phase and probability amplitude.
- The analog signal must be transported to the qubit, typically as an electromagnetic signal, which probably limits the granularity of phase and probability amplitude.
- The qubit itself limits the granularity of phase and probability amplitude.
- Ultimately the physics of the underlying quantum phenomenon used by the qubits to implement their quantum state limits the granularity of phase and probability amplitude.
- Some ultimate limit at the Planck level as well as quantum uncertainty itself.
- General uncertainties around all factors. Even if relatively fine granularity is supported, the uncertainty might be significantly larger than the nominal granularity.
- It’s not clear which of the factors is the more limiting factor.
- I personally don’t know enough about classical digital and analog electronics to figure all of this out.
- Do the same limits apply to both phase and probability amplitude? Likely but not certain.
- The number of gradations can’t be greater than the DAC resolution, by definition.
- Any technique which relies on fine granularity of phase or probability amplitude will be severely limited in its quantum advantage.
- Algorithms and applications can still function, but are unlikely to achieve dramatic quantum advantage. Only a limited degree of exponential speedup, not as limitless as promised.
- A more modest advantage is more likely, up to 1,000X or maybe even 1,000,000X. But not the kind of dramatic quantum advantage that has been talked about and promised for quantum computing. Not one quadrillion X or beyond.
- How did this all happen? Too many silos with gaps between them.
- The reason that people don’t currently see a problem is that limited qubit fidelity, limited qubit connectivity, and limited circuit depth prevent people from actually running into the problem. But as soon as qubit fidelity, qubit connectivity, and maximum circuit depth reach a critical mass, the problem will become manifest.
- When will we hit the wall and start seeing the problem? Hard to say for sure. Not in the next year to 18 months, but likely within two to three years.
- Quantum Fourier transform is critically dependent on fine granularity of phase. This dramatically limits its quantum advantage.
- The root of the problem is that the exponential term in the unitary transformation matrix for the quantum logic gate used to effect the fine-grained qubit rotations required for quantum Fourier transform contains a divisor of 2ⁿ. This only works for smaller values of n, not for n greater than 32, or maybe even 20 to 24.
- This issue is not specific to any particular qubit technology. There may be differences for each qubit technology, particularly on the analog signal front, but the DAC issue and general limit on granularity will be common to all qubit technologies.
- Quantum phase estimation (QPE) and quantum amplitude estimation (QAE) are also dependent on quantum Fourier transform and fine granularity of phase. Will encounter the same limitations on quantum advantage.
- Any other techniques which also rely on fine granularity of phase or probability amplitude will also encounter the same limitations on quantum advantage.
- Quantum Fourier transform (QFT) is the most powerful algorithmic building block available.
- Many applications rely on quantum Fourier transform to achieve dramatic quantum advantage.
- Nothing more powerful than quantum Fourier transform.
- Maybe there are alternatives, but…
- No obvious alternative to quantum Fourier transform.
- No obvious alternative implementations of quantum Fourier transform.
- Some physics simulations may still be possible. If they rely on nearest neighbor connectivity but not fine granularity of phase.
- Some quantum computational chemistry computations using quantum phase estimation (QPE) may still be possible. It will vary based on how many bits of precision are needed.
- Some computer science experiments may still be possible. Again if they don’t rely on much more than nearest neighbor connectivity or on fine granularity of phase.
- No obvious alternative architectures for quantum computing that address the concern of this paper. Maybe there could be some, but it seems unlikely.
- Shor’s factoring algorithm to crack large public encryption keys? No way. Period. Not now. Not ever. It can work for relatively small numbers, but not for hundreds or thousands of bits. Put simply, an exponential divisor of 2⁴⁰⁹⁶ absolutely won’t work for a real DAC and real analog signals. This extreme view relies on the conjecture of this paper being true, and I believe it is.
- Maybe someday someone will invent a new factoring algorithm which can actually work on a real quantum computer for 4096-bit public encryption keys, but it won’t be able to use quantum Fourier transform or rely on fine granularity of phase or probability amplitude.
- Scalability of quantum algorithms is vital, but problematic in the absence of clarity and predictability for fine granularity of phase and probability amplitude.
- How does quantum error correction (QEC) deal with fine granularity of phase and probability amplitude? It’s not clear that QEC schemes deal with it at all.
- Quantum amplitude amplification has a similar issue to phase granularity.
- Simulators need to be configured to reflect granularity of phase and probability amplitude.
- No hands-on testing or empirical data. My own personal focus is on theory, architecture, design, technical specifications, and documentation, and examination of written materials in such areas, but without any hands-on testing or empirical data. I’ll leave it to others to focus on testing and empirical data. And eventually benchmarking.
- Need for benchmarking. Once the issue has been adequately addressed from the theory, architecture, design, technical specifications, and documentation perspectives, then (and only then!) it will be appropriate to address benchmarking of granularity of phase and probability amplitude, as well as benchmarking of quantum Fourier transform (QFT), quantum phase estimation (QPE), and quantum amplitude estimation (QAE).
- Include granularity on label of quantum capabilities for quantum computers, algorithms, and applications. The number of gradations supported. As well as the qubits supported for quantum Fourier transform (QFT).
- Clarion call: All silos on deck! It’s time for technical staff from all technical silos to understand the entire problem, from silo A to silo Z, and to ensure that the issue gets resolved, fully or partially, and is properly documented as a limitation if it can’t be fully resolved.
- Limited granularity is a long-term issue that won’t be fixed in a few years and then just go away.
- 48 fully-connected near-perfect qubits may be the sweet spot goal for near-term quantum computing. With the potential for a 20-qubit quantum Fourier transform which could provide a million to one computational advantage.
- But start with an upgraded 27-qubit quantum computer. Get the qubit fidelity and connectivity up to the level where fine granularity of phase is the gating factor for a ten to twelve-qubit quantum Fourier transform.
- Maybe even a 36-qubit stepping stone. If 48 qubits is still too ambitious. Support 15 to 16-qubit quantum Fourier transform.
- It does indeed appear that we are stuck in quantum toyland, with no prospect of escape.
- But maybe what seems toy-like to some may actually be great for others.
- Once again, my conjecture is still only tentative and may yet be proven wrong. But I’m not holding my breath. Okay, maybe I am.
- I merely raise the issue and hope that others with much greater knowledge, skills, and expertise can confirm, deny, or resolve the issue with something approaching absolute 100% definitive certainty.
- Now what? What’s next? We may be screwed. We’ll just have to wait and see, and lower our expectations for many applications, to fractional quantum advantage. And focus on more research to discover alternative approaches.
- Focus on more research to potentially discover alternative approaches. There’s no guarantee that research will discover some magical solution, but… you never know. And generally research is a good thing, no matter what.
For more of my writing: List of My Papers on Quantum Computing.