# Some Preliminary Questions About Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer

Shor’s algorithm for factorization of large integers will supposedly break even the strongest public-key encryption, but how real and imminent is this threat? I’m in the middle of trying to deeply digest the algorithm, so this informal paper catalogs some of the questions on my mind about the algorithm itself and about the trajectory of the quantum computing hardware that may be required to support it.

My personal goal is not to either prove or disprove the potential of the algorithm per se, but simply to understand it and to be able to either confirm or dispel the tremendous hype that has developed around this algorithm.

My efforts to deconstruct the algorithm are ongoing, so I will be updating this list of questions and reference sources as I make incremental progress.

Shor’s algorithm is not a pure quantum algorithm, but a hybrid of classical and quantum processing. An outer loop generates random numbers and does some preliminary checks, then invokes the inner, quantum processing which determines what is called the *period* or *order* of the random number, followed by further classical processing which evaluates whether the period leads to the two factors. If not, the outer loop continues with a new random number.
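For concreteness, the hybrid structure just described can be sketched in Python, with the quantum order-finding step replaced by a classical brute-force stand-in. This is purely illustrative; the function names (`find_order`, `shor_outer_loop`) are my own, and a real implementation would replace `find_order` with the quantum circuit:

```python
import math
import random

def find_order(a, N):
    # Classical brute-force stand-in for the quantum order-finding step:
    # the smallest r > 0 with a^r = 1 (mod N). This is the only part of
    # the algorithm that needs quantum hardware.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_outer_loop(N, max_tries=100):
    # Classical outer loop: pick a random trial number, check for a
    # lucky shared factor, find its order, then test whether the order
    # leads to a factor; otherwise continue with a new random number.
    for _ in range(max_tries):
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g            # lucky: a shares a factor with N
        r = find_order(a, N)            # the quantum step (simulated)
        if r % 2 == 1:
            continue                    # odd order: skip this a
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                    # a^(r/2) = -1 (mod N): skip this a
        p = math.gcd(y - 1, N)
        if 1 < p < N:
            return p, N // p
        q = math.gcd(y + 1, N)
        if 1 < q < N:
            return q, N // q
    return None

print(shor_outer_loop(15))
```

For a toy semiprime like 15 this loop almost always succeeds within a few iterations; the entire difficulty of the real algorithm is hidden inside `find_order`.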

My comprehension of the inner, quantum portion of Shor’s algorithm is still somewhat limited. I’ve focused this paper on the outer loop and the post-processing. I’ll be shifting my focus to the inner, quantum portion of the algorithm soon, I hope. It’s all a matter of how quickly I can nail down enough of my questions about the outer loop to feel comfortable tackling the inner, quantum guts of the algorithm.

Before I enumerate the full list, some superficial and high level questions can be considered:

- Can the algorithm break strong encryption in theory, given the right hardware? Supposedly.
- How many qubits are needed? It’s complicated. It depends.
- Can any and all current quantum computers break strong encryption? No.
- Anytime soon? Not likely.
- Within five years? Unlikely.
- Within ten years? Possibly, but very uncertain.
- Eventually? Probably. In theory.
- Do we have a roadmap for progressing to that eventuality? No.
- What makes this algorithm tick?
- What gives it its power?
- What limits it?
- What breakthroughs in hardware are needed to allow a quantum computer running Shor’s algorithm to finally crack strong encryption?
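As a taste of the qubit-count question, here is the trivial arithmetic for the estimates discussed later in this paper (roughly 4n qubits for Shor’s original construction, 2n + 3 for Beauregard, and 9n + 3 for Pavlidis and Gizopoulos); the table labels are mine:

```python
# Rough qubit-count tabulation for factoring an n-bit semiprime,
# using the figures discussed in this paper. Illustrative arithmetic only.
ESTIMATES = {
    "Shor original (~4n)":          lambda n: 4 * n,
    "Beauregard (2n + 3)":          lambda n: 2 * n + 3,
    "Pavlidis-Gizopoulos (9n + 3)": lambda n: 9 * n + 3,
}

for n in (256, 1024, 2048, 4096):
    print(f"{n}-bit key: " + ", ".join(
        f"{name} = {f(n):,} qubits" for name, f in ESTIMATES.items()))
```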

This informal paper will not endeavor to provide a complete description of, or tutorial on, Shor’s algorithm. In fact, it won’t even attempt a basic introduction, beyond noting that the algorithm can (supposedly, in theory) efficiently factor very large integers which are the product of two very large prime numbers — called a *biprime* or *semiprime* — such as those used for public-key cryptography. A subsequent paper will provide a basic, high-level summary of the algorithm. If the reader wishes more detail now, consult the citations listed in the *References* section.

# The questions

This list of questions is a work in progress and will be updated as I progress in my understanding of the algorithm.

The goal here in this paper is not to give answers to these questions, but to simply catalog and articulate the questions as clearly as possible.

In some cases I may indeed propose *tentative* answers, but I currently lack the depth and certainty of comprehension to assert that they are any more than tentative answers. I’ll update any answers as my comprehension progresses.

- How many qubits does a quantum computer need to factor a number of n bits which is the product of two prime numbers (a *semiprime*)? What are the qubit requirements for 8-bit, 16-bit, 32-bit, 64-bit, 128-bit, 256-bit, 512-bit, 768-bit, 1024-bit, 2048-bit, and 4096-bit integers? And 8192-bit and 16384-bit integers as well. My preliminary understanding is that Shor’s original algorithm would require at least four times as many qubits as the input number, so that factoring a 4096-bit semiprime would require at least 16,384 qubits. Derivative algorithms could require significantly fewer qubits. One paper, Beauregard, reports requiring only 2n + 3 qubits — 8,195 qubits for a 4096-bit semiprime. Another algorithm, by Pavlidis and Gizopoulos, requires a lot more qubits, 9n + 3, but fewer gates must be executed. [TBD: find paper for 3n qubits, which I recall seeing at some point, I think]
- What connectivity of qubits (entanglement) is required by Shor’s algorithm? Current quantum computers have very limited connectivity.
- What coherence time is needed by Shor’s algorithm? Is it constant or does it vary by length of the input?
- What circuit depth is needed by Shor’s algorithm? Is it constant or does it vary by length of the input?
- What is the total count of quantum logic gates required based on the length of the input number?
- How many of the total quantum logic gates can be executed in parallel, reducing the circuit depth?
- How tolerant or susceptible to quantum noise is Shor’s algorithm? Is quantum error correction (QEC) ultimately needed, or can the full algorithm simply be run repeatedly to statistically assure a correct result?
- What running time can be expected based on length of input?
- Should input length be measured in decimal digits or binary bits? The algorithm uses the latter (binary), but some descriptions use the former (decimal). I’ll try to stick with binary bits in my own descriptions.
- Even if a quantum computer has enough qubits, are the total quantum logic gate count and circuit depth likely to be too large to be executed completely and successfully before the limit of quantum coherence time is reached?
- What is the trajectory of hardware advances which are required to solve factoring of each of the milestone key lengths, especially 64-bit, 256-bit, 512-bit, 768-bit, 1024-bit, 2048-bit, and 4096-bit?
- What is the expected trajectory of hardware advances for quantum computers over the next two, five, seven, ten, twelve, fifteen, and twenty years?
- What other characterizations for performance are needed for factorization, or any other quantum algorithm for that matter? For classical algorithms we normally focus on running time alone, and sometimes possibly memory, but for quantum computers we need to be concerned with qubit count, circuit depth, total quantum logic gates, parallel execution of gates, and quantum coherence time. A single Big-O is insufficient to adequately characterize either performance or the total hardware needed to execute an algorithm. We need a bunch of Big-O’s.
- What is the largest number which has been factored by Shor’s algorithm to date? I’ve seen references to 15 and 21. There are references to factoring 143 and larger numbers, but they were supposedly not achieved using Shor’s algorithm. Have there been any recent advances which I am not aware of?
- What is the largest number which can be factored by Shor’s algorithm on a quantum simulator? In say a second, minute, hour, two hours, four hours, eight hours, twelve hours, day, two days, one week?
- Can simulation of Shor’s algorithm be split and distributed to multiple classical computers? Given that the outer loop is based on generating random numbers, that would seem possible — give each distributed machine a different range of random numbers. That won’t ultimately compete with a quantum computer which is fully parallel, but would scale up simulation by a healthy fraction.
- Can a mere mortal code the general implementation of Shor’s algorithm for input of arbitrary size or some large but fixed size, or is a relatively sophisticated classical computer program needed to algorithmically generate the quantum logic circuit (quantum program)? Seems like the latter, at this stage.
- Is the same quantum logic circuit used for each iteration of the outer loop, or must it be regenerated for each iteration? Other than the *preparation* logic gates, since they depend on the specific random number being evaluated.
- What are all of the derivative algorithms which are closely based on Shor’s algorithm? Some are listed in the *References* section of this paper.
- Are any of the derivative algorithms so superior that Shor’s original algorithm is now considered no longer relevant?
- Is a deep comprehension of Shor’s original algorithm essential to deeply comprehending each of the derivative algorithms? Is Shor’s still a reasonable *benchmark* for starting any discussion of quantum factorization?
- What (open source) *reference implementations* of Shor’s algorithm and its derivatives exist? Have any of them been validated, by whom, and how?
- Is there any *definitive reference implementation* of Shor’s algorithm?
- Are there (competent, industrial quality) implementations of Shor’s algorithm for any of the current 5-qubit, 8-qubit, 16-qubit, 20-qubit, 49-qubit, 50-qubit, and 72-qubit quantum computers — and their simulators?
- How well-behaved is Shor’s algorithm for pathological cases such as 0, 1, 2, even numbers, pure-prime numbers, prime powers, more than two factors, large n-bit values of 0, 1, 2, 2^n minus 1?
- Does Shor’s algorithm return both or all factors of the input number, or just one or the first factor discovered, with the expectation that the caller can either continue to factor the remainder of the number or simply divide the number by the one factor to get the other factor? The descriptions are somewhat vague in this area.
- How purely random do the random numbers used within the classical portion of the algorithm need to be? Is every classical pseudo-random number generator sufficient? Is a quantum computer needed to generate the random numbers? What techniques are recommended or required, particularly for the very large numbers of 1024-bit, 2048-bit, and 4096-bit length?
- Does Shor’s algorithm potentially suffer from the classical Turing halting problem — that the random number generation could presumably fail to hit on a number whose period leads to a factor?
- Can quantum noise cause the inner, quantum portion of the algorithm to fail in a way that causes the outer loop to continue past what should have been a solution, leading to the Turing halting problem? Might it be sufficient simply to rerun the inner, quantum logic circuit repeatedly to statistically assure a correct result? Might there be quantum noise failure modes of such complexity that even a large number of runs fails to statistically converge on the correct result for the order of a given random number?
- Could the outer loop of Shor’s algorithm ever be made to run directly on a quantum computer rather than requiring a classical computer as originally envisioned? Possibly, but that has yet to be demonstrated.
- Could the post-processing stage of Shor’s algorithm ever be made to run directly on a quantum computer rather than requiring a classical computer as originally envisioned? Shor does say “*While this post-processing could in principle be done on a quantum computer, there is no reason not to use a classical computer if they are more efficient in practice*”, but doesn’t appear to have done the detailed analysis to show how this would be possible, especially with the GCD calculations, which require looping, which quantum computers do not have, although Wang, Jiang, Mu, and Fan have a paper on quantum GCD. But there is more to the outer loop than the GCD calculations alone. It may well be possible to do the entire algorithm in pure quantum processing, but nobody appears to have demonstrated that result so far.
- How efficient are classical algorithms for the GCD function for very large integers, such as 1024-bit, 2048-bit, 4096-bit, and even 8192-bit numbers? As originally proposed, all GCD calculations are in the non-quantum, classical stages of the algorithm.
- How many iterations of the outer classical loop can be expected for various input lengths? What does the distribution look like? What is the worst case — and is it even remotely possible that the algorithm might never halt? One paper, Grosshans, Lawson, Morain, and Smith, reports a derivative algorithm for “safe” semiprimes which requires only a single iteration.
- What prevents the outer loop from being exponential, some multiple or fraction of 2^n (for n-bit input numbers) iterations? How many random numbers must be generated to assure that one of the two factors is reached?
- What is the distribution of random numbers which can be raised to various powers such that the order (power) will test a multiple of one of the two prime factors of the input number? Many? A few? Only one? Is even one guaranteed (the traditional Turing halting problem)? Is there a theorem, proof, or principle for this belief?
- What is the gist of the rationale for believing that *period finding* is the best way to factor large semiprimes? An expert in the field of number theory may know the answer, but it has yet to be articulated clearly and plainly in the papers on quantum factorization.
- Could the period for a candidate random number be so large that the quantum probability might be too small to reliably detect? Could this happen only for large random numbers relatively close in value to the input number, or could it happen for more modest candidate random numbers less than a fraction of the size of the input number?
- Are there any mathematical proofs for any of these questions?
- Can Shor’s algorithm be split and distributed so that more than one quantum computer can work on the same input value?
- Is there a standard or convention for how to number the bits of the input number for Shor’s algorithm? Is the index zero for the low-order or the highest-order bit? Do the rules and conventions of classical computers apply? I did see at least one paper which placed the least significant bit at the left end of the quantum representation, the opposite of the classical convention.
- Is the *Chinese remainder theorem* required to comprehend Shor’s algorithm? Some descriptions refer to it, but some do not.
- How much *number theory* is needed to fully comprehend Shor’s algorithm?
- What theorem, proof, or principle assures that X^R for some random number less than N of order R will fall between N² and 2 * N²? I don’t doubt that this may likely be true, but what guarantees it — since Shor’s algorithm requires it?
- How many random numbers will lead to a modular exponentiation period which results in one of the two factors of the input number? Exactly one? More than one? Potentially a significant number? Is it very rare or relatively common? What theorem, proof, or principle answers this question?
- What theorem, proof, or principle requires that at least one random number will lead to one of the two factors of the input number? Does a random number exactly one greater than either of the two factors guarantee success?
- Is there any significant distinction between the concepts of *order* and *period*, or *order-finding* and *period-finding*? They seem to be used interchangeably, as if they were synonyms, but is there some technical distinction or nuance between them? Does it make sense to refer to the *order* of a *period*?
- What is the distribution of values for *order* (*period*) based on length of the input number? Mostly relatively short, small numbers? Relatively long, big numbers?
- Will *order* ever be so potentially large that a *quantum amplitude* is not large enough to measure in a detectable manner?
- For any given iteration of the inner, quantum algorithm, what is the probability of hitting one of the factors of the input number?
- Given that quantum programs are probabilistic rather than deterministic, is there any absolute guarantee that the inner, quantum algorithm will always produce a value for *order* such that GCD(x^(r/2) +/- 1, N) will result in one of the two factors?
- If the probability of hitting a factor using a given random number is as high as some claim (0.50 or 50%), why not just start with 2, 3, 4, …, as the trial “random” numbers and simply increment by 1 to iterate, or are some ranges in 1 to N-1 more fruitful or less fruitful than others? What is the proof, theorem, or principle that dictates that random numbers are needed and that sequential numbers up from 2 or down from N-1 are not equally sufficient?
- If either GCD(x^(r/2) - 1, N) or GCD(x^(r/2) + 1, N) is not equal to 1, indicating that it is a factor, will the other GCD always also evaluate to be the other factor, or could only one of the two be not equal to 1? In other words, do GCD(x^(r/2) - 1, N) and GCD(x^(r/2) + 1, N) correspond precisely to p and q of p * q = N — when one of them does indeed correspond to p or q, that is?
- What range of values are generated for the pair of values for a public key of a given length? Is there a minimum value and a maximum value, such that the range of the two factors is significantly less than half the length of the input number for Shor’s algorithm?
- Is there any preferred usage for letters for variables in Shor’s algorithm? Shor used x for the random number and r for the period (order), but I’ve seen a and m for the random number and P for the period as well. It gets confusing. And then there’s n vs. N for the input number.
- What happens if the *order* (*period*) for a random number is *odd*? The algorithm seems to simply skip such numbers. Is that simply confirming that some numbers have no chance of pointing towards a factor? What theorem, proof, or principle assures that such random numbers can be completely ignored, as opposed to them possibly requiring more specialized treatment? I do note that the algorithm does require *even* periods since it uses x^(r/2), so that (x^(r/2) - 1) * (x^(r/2) + 1) = x^r - 1, which requires r to be even.
- What is the rationale, significance, and consequence of the algorithm skipping random numbers for which x^(r/2) = -1 (mod N)? None of the descriptions I’ve read of the algorithm are completely clear on this condition.
- I’ve seen (dubious) references suggesting that *Grover’s algorithm* can also be used for factoring. Is that really true?
- Can Shor’s algorithm be used or adapted easily for quantum computers which use more than two states for each quantum value, such as a three-state *qutrit* or a ten-state *qudit*?
- Can Shor’s algorithm be used or adapted easily for photonic quantum computers using *qumodes* rather than two-state qubits?
- What is the precise range for the random numbers to be generated by the outer, classical loop? 1 and N are not particularly useful. Ditto for 2, N-1, N-2, and possibly other cases. Is 3 to N/3 a reasonable range, or simply use 1 to N and skip 1, 2, and values greater than N/3?
- How long before a quantum computer is actually able to break strong encryption? Completely unknown. If 4n qubits are needed, that would require a 4096-qubit quantum computer to crack 1024-bit keys, an 8192-qubit quantum computer to crack 2048-bit keys, and a 16384-qubit quantum computer to crack 4096-bit keys. And that’s just the raw qubit count — coherence time, error rate, and entanglement requirements must also be met. Even at a Moore’s Law-like pace of doubling qubit-count every 1–2 years, the current size of 50–72 qubits (call it 64) would have to double to 128, 256, 512, 1024, 2048, 4096 to first crack 1024-bit keys. That would be six doublings, taking 6–12 years. Another doubling would crack 2048-bit keys in 7–14 years. And a final doubling would crack 4096-bit keys in 8–16 years. Again, all of that presumes that coherence time, error rate, and entanglement advance rapidly as well. In short, 2048-bit keys might be at risk in 7 years, 4096-bit keys in 8 years. Or using the averages of the ranges, 10 and 12 years, respectively for 2048 and 4096-bit keys. At this stage, nobody considers 1024-bit keys to be protected strongly, although technically they still are reasonably well-protected.
- How exactly does phase estimation fit into the calculation of the order of the random number in Shor’s algorithm, especially relative to modular exponentiation? What exactly is the phase that is being estimated? Trying to read Shor’s paper, these aspects are somewhat murky.
- The math for phase factors in the quantum Fourier transform in Shor’s algorithm seems to have an incredibly huge divisor, with q between N² and 2 * N², or roughly 2⁸¹⁹² for a 4096-bit encryption key, which seems virtually impossible to support on a real machine subject to real quantum physics. Is this really the case? Are these numbers computed in the classical code and then passed to the quantum computer via a unitary matrix, which will have to translate a very, very small number into a real microwave signal which is similarly incredibly small? Is that correct? Actually, e^(ix) = cos x + i sin x (Euler’s formula), which would be extremely close to 1.0 for any very tiny x, not allowing much discrimination between the columns of the Fourier transformation matrix. On the other hand, for exp(2 * pi * i * a * c / q), some fraction of the products of a and c would probably be close enough to the range of q to be not as small as many of the values in the 0 to 2⁸¹⁹² range, but the exact distribution is unclear. The reasoning for why this would all work out is unclear. Also, see the question related to Coppersmith’s suggestion for dealing with this issue.
- Has Coppersmith’s suggestion for an approximate Fourier transform that sidesteps the prospect of the very tiny phase factors which may not be practical for real quantum computers been fully incorporated into Shor’s algorithm, or is the brief mention of Coppersmith by Shor merely an indication of possible future improvement? If the latter, has anyone done that enhancement?
- How should the value of the parameter “m” for the Coppersmith approximate Fourier transform be determined? How can we confirm that a proposed value for m is reasonable? Does the reasonableness of m depend critically on the value of the input number to be factored or is it independent of the magnitude of the input number?
- Will the value of the “m” parameter (maximum power of 2 for the denominator of the phase factors) for the Coppersmith approximate Fourier transform be dependent on the particular quantum hardware or is it a general, abstract quantity that is based on quantum mechanics or mathematics in general and not dependent on the particular physical realization of the quantum computer?
- Which is the greater limitation for the Coppersmith approximate Fourier transform: the need for a higher m to capture the period/order in sufficient fidelity, the need for a low enough m to keep gate count low enough to fit within the coherence time of the quantum computer, or the maximum m that aligns with the precision of the phase of a real physical qubit which can be resolved and maintained in coherence? In other words, what minimum delta of phase is required to accurately calculate the order/period for very large input numbers (1K to 4K bits)?
- Does Shor’s algorithm depend on the Extended Riemann Hypothesis (ERH) which Miller’s paper, referenced by Shor, claims to depend on? What does it really mean to depend on or assume the ERH and how can we confirm whether Shor’s algorithm will work with this assumption?
- Shor notes that “*While the Schönhage-Strassen algorithm is the best multiplication algorithm discovered to date for large l, it does not scale well for small l. For small numbers, the best gate arrays for multiplication essentially use elementary-school longhand multiplication in binary*”, but he fails to delineate what constitutes a “small” versus a “large” number. Is small 4 bits, 16 bits, 64 bits, 256 bits, 1024 bits, or what? And he also fails to elaborate a quantum algorithm for *elementary-school longhand multiplication*. Presumably the classical code would conditionally generate the quantum logic circuits for one multiplication algorithm or the other based on the size of the number. Also, when using the multiplication algorithms to generate powers, would it be necessary to use both algorithms for the same base number, generating one algorithm for the smaller powers and then switching to generating the other algorithm for the larger powers of the same number, or should the classical code stick with the more sophisticated algorithm for all powers if it might be needed for any of the higher powers?
- Does Shor’s algorithm produce the order, r, exactly, or only a number relatively near r, even if a classical implementation of the same modular exponentiation would produce an exact r? His paper says “*we obtain r with probability at least φ(r)/3r*” (totient function of r divided by three times r) and he has a discussion of two techniques for trying numbers relatively near the number produced by the quantum order-finding algorithm. So it’s not clear what the exact algorithm should be, or how successful it is likely to be.
- In the paper by Beauregard, *Circuit for Shor’s algorithm using 2n+3 qubits*, the adder is based on a phase shift of exp(2 * pi * i / 2^k), but how big can k be before the phase of a qubit no longer has the resolution to discriminate 1/2^k? Maybe this works for k = 8 to k = 20, but for k = 4096 it seems absurdly unlikely to resolve such an incredibly small phase increment. In essence, what is the minimum phase angle which can be discriminated in a real qubit?
- Am I correct when I read the paper by Beauregard, *Circuit for Shor’s algorithm using 2n+3 qubits*, as indicating a gate count of O(n³ * log2(n)) gates and a circuit depth of O(n³) gates? For n = 4096 this would be n³ * log2(n) = 12 * 68 billion = 824,633,720,832 = 825 BILLION total gates, and a circuit depth of n³ = 68,719,476,736 = 68 BILLION gates! Really?!
- What is the smallest integer for which the total number of operations (both classical and quantum) needed to factor the integer using Shor’s algorithm is indeed less than the total number of operations needed to factor that integer using a classical computer (single CPU processor) alone? Presumably this is the same as asking what the largest integer is that can be factored more efficiently using a classical computer alone — especially considering the overhead of transitioning between the classical and quantum portions of Shor’s algorithm. Alternative answers are possible for variations and derivations from Shor’s original algorithm. Presumably multiprocessor and distributed classical computing clusters would reduce the elapsed time to factor larger numbers using classical computation alone, but would not reduce the total number of operations. Presumably one could also inquire about cost in terms of hardware acquisition cost, but that has no effect on the time required to factor very large numbers given only a reasonable hardware budget, such as the cost of the single quantum computer against which the distributed cluster must compete.
- Since quantum computing is represented as being *probabilistic* rather than *deterministic*, is the quantum portion of Shor’s algorithm — order finding — really *guaranteed* to produce the correct result (if that is possible), or will *repetition* and *statistical evaluation* of the results be *required* to arrive at the order of the modular exponentiation? The original algorithm paper does not explicitly state that such repetition is *required*, nor even imply it, but reading between the lines and comprehension of the nature of quantum computing *seems to strongly suggest* that repetition and statistical evaluation are required. Further, there is no clear statement or guidance as to how to calculate how many repetitions might be needed, or what statistical criteria to use to evaluate the results. As written, the algorithm doesn’t clearly indicate how many discrete results might be produced and need to be *checked* to determine which is indeed the order of the modular exponentiation. And I don’t read any comforting language or proof that the number of repetitions will necessarily be *polynomial* for the very, very, very large numbers that we’re talking about for cracking (factoring) 2048-bit and 4096-bit keys.
- What exactly happens when the quantum circuit for order finding raises a relatively large number to a large number of very large powers, each of which would produce a result which is far greater than the number of qubits in the output register? What happens to all of the overflow bits? Is there any wraparound which might interfere with legitimate qubits of the results? For example, for input N of n bits, an ansatz of N-1 will effectively square N and still fit in 2*n bits, but any powers higher than 2 will result in overflows — N-1 raised to the 2^n power would produce many times as many result bits as there are result register qubits. Even an ansatz of SQRT(N) will quickly produce results with more than 2n bits.
- What exactly happens when the quantum circuit for order finding raises a relatively large number to a large number of very large powers and then attempts to take the MOD N of the truncated result? Won’t such results be invalid since many of the bits of X^R will have gotten discarded (or inappropriately wrapped) before the MOD N calculation step?
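A number of the questions above (how often the order is odd, how often x^(r/2) ≡ -1 (mod N), and how many random numbers actually lead to a factor) can at least be explored exhaustively for tiny semiprimes with purely classical code. A sketch, with category names of my own invention:

```python
import math

def order(a, N):
    # Smallest r > 0 with a^r = 1 (mod N); assumes gcd(a, N) = 1.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def classify(N):
    # Classify every candidate a in 2..N-1 by how Shor's outer loop
    # would treat it.
    counts = {"shares_factor": 0, "odd_order": 0,
              "trivial_root": 0, "yields_factor": 0}
    for a in range(2, N):
        if math.gcd(a, N) > 1:
            counts["shares_factor"] += 1    # lucky GCD, no order finding needed
            continue
        r = order(a, N)
        if r % 2 == 1:
            counts["odd_order"] += 1        # skipped by the algorithm
        elif pow(a, r // 2, N) == N - 1:
            counts["trivial_root"] += 1     # a^(r/2) = -1 (mod N), skipped
        else:
            counts["yields_factor"] += 1    # GCD(a^(r/2) +/- 1, N) is a factor
    return counts

print(classify(15))
print(classify(21))
```

For N = 15, for example, no candidate has an odd order and only a = 14 hits the trivial-root case, which hints at why the per-iteration success probability is claimed to be so high; tiny examples prove nothing about 2048-bit behavior, of course.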
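On the classical GCD question above: Euclid’s algorithm needs only O(log N) division steps even for multi-thousand-bit integers, which is easy to confirm empirically (illustrative sketch; the step counts shown depend on the random inputs):

```python
import random

def gcd_steps(a, b):
    # Euclid's algorithm, counting division (mod) steps.
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

random.seed(42)
for bits in (1024, 2048, 4096, 8192):
    a = random.getrandbits(bits) | (1 << (bits - 1))   # force full length
    b = random.getrandbits(bits) | 1
    g, steps = gcd_steps(a, b)
    print(f"{bits}-bit pair: {steps} division steps")
```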
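On the minimum-phase-angle questions above: even 64-bit classical floating point cannot distinguish 1 + 2*pi/2^k from 1 once k passes roughly 55. This is only a loose classical analogy, and says nothing definitive about what a physical qubit can resolve, but it gives a feel for just how small the exp(2*pi*i/2^k) rotations become:

```python
import math

# For which k can IEEE-754 double precision even distinguish
# 1.0 + 2*pi/2^k from 1.0?
for k in (8, 20, 52, 56, 64, 128):
    delta = 2 * math.pi / 2 ** k
    print(f"k = {k:3d}: 2*pi/2^k = {delta:.3e}, "
          f"distinguishable from 1.0: {1.0 + delta != 1.0}")
```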

TBD: To be continued.

# References

The reference list has been moved to a standalone paper:

For historical purposes, the original reference list remains intact below, but the above list is more current.

These are online, open source references which I have consulted in my quest to deeply comprehend Shor’s algorithm for factoring large semiprimes.

I haven’t found any of these references to be truly comprehensive. They each have bits and pieces, rarely the same bits and pieces, and always failing to present the entire, complete picture.

Maybe to an expert in *number theory* the gaps between the bits and pieces are *obvious*, but I’m no expert in number theory.

Generally, textbooks will not be cited as references since they tend not to be online and open source. I personally require online, open source materials.

This list is not intended to necessarily be exhaustive or comprehensive. I will add to it as I discover additional references.

At present, they are in no particular order.

- Scott Aaronson’s blog post — *Shor, I’ll do it* — https://www.scottaaronson.com/blog/?p=208.
- Wikipedia *Shor’s algorithm* article — https://en.wikipedia.org/wiki/Shor%27s_algorithm.
- Wikipedia *Integer factorization* article — https://en.wikipedia.org/wiki/Integer_factorization.
- Wikipedia *Semiprime* article — https://en.wikipedia.org/wiki/Semiprime.
- Wolfram MathWorld definition of *Semiprime* — http://mathworld.wolfram.com/Semiprime.html.
- Shor’s original paper — *Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer* — https://arxiv.org/abs/quant-ph/9508027.
- *Quantum Algorithm Zoo* — https://math.nist.gov/quantum/zoo/.
- Wikipedia *Chinese remainder theorem* article — https://en.wikipedia.org/wiki/Chinese_remainder_theorem.
- *What Has Quantum Mechanics to Do With Factoring? — Things I wish they had told me about Peter Shor’s algorithm* by N. David Mermin — http://www.lassp.cornell.edu/mermin/factoring-princeton1.pdf.
- *The Quantum States of Shor’s Algorithm* by Neal Young of Dartmouth — http://www.cs.ucr.edu/~neal/1996/cosc185-S96/shor/high-level.html.
- *Shor’s Algorithm for Factoring Large Integers* by Lavor, Manssur, and Portugal — https://arxiv.org/abs/quant-ph/0303175.
- *Shor’s Quantum Factoring Algorithm* by Lomonaco — https://arxiv.org/abs/quant-ph/0010034.
- *Computational Complexity: A Modern Approach — Chapter 20 Quantum Computation* by Arora and Barak — http://theory.cs.princeton.edu/complexity/quantumchap.pdf.
- *Quantum computation: a tutorial* by Braunstein — https://www-users.cs.york.ac.uk/~schmuel/comp/comp.html.
- *Circuit for Shor’s algorithm using 2n+3 qubits* paper by Beauregard — https://arxiv.org/abs/quant-ph/0205095.
- *Fast Quantum Modular Exponentiation Architecture for Shor’s Factorization Algorithm* paper by Pavlidis and Gizopoulos — https://arxiv.org/abs/1207.0511. Requires 9n + 3 qubits, but fewer gate executions.
- *Quantum factorization of 56153 with only 4 qubits* paper by Dattani and Bryans — https://arxiv.org/abs/1411.6758.
- *Shor’s Algorithm Simulator* — http://blendmaster.github.io/ShorJS/. “*This page simulates Shor’s Algorithm for integer factorization with a quantum computer. Since this page runs in javascript on your non-quantum browser, the quantum part of the algorithm is simulated using probabilities.*”
- *Shor’s order (period) finding algorithm and factoring — C/CS/Phys 191 11/01/05 — Fall 2005 Lecture 19 notes* — https://inst.eecs.berkeley.edu/~cs191/fa05/lectures/lecture19_fa05.pdf.
- *Shor’s Algorithm for Factoring Large Integers* paper by Lavor, Manssur, and Portugal — https://arxiv.org/abs/quant-ph/0303175. “*This work is a tutorial on Shor’s factoring algorithm by means of a worked out example. Some basic concepts of Quantum Mechanics and quantum circuits are reviewed. It is intended for non-specialists which have basic knowledge on undergraduate Linear Algebra.*”
- *Factoring Safe Semiprimes with a Single Quantum Query* paper by Grosshans, Lawson, Morain, and Smith — https://arxiv.org/abs/1511.04385. “*With just one call to QOFA* [quantum order finding algorithm]*, our algorithm almost always factors safe semiprimes.*”
- Alas, I am unable to give the *Quantum Computation and Quantum Information* book by Nielsen and Chuang as a reference for my work since I am limiting myself to online open source material. See http://www.michaelnielsen.org/qcqi/ and https://www.amazon.com/Quantum-Computation-Information-10th-Anniversary/dp/1107002176.
- IBM Q Experience documentation — *Shor’s algorithm* — https://quantumexperience.ng.bluemix.net/proxy/tutorial/full-user-guide/004-Quantum_Algorithms/110-Shor's_algorithm.html.
- NIST Post-Quantum Cryptography effort — https://csrc.nist.gov/projects/post-quantum-cryptography.
- Wikipedia *Post-quantum cryptography* article — https://en.wikipedia.org/wiki/Post-quantum_cryptography.
- *A quantum algorithm for greatest common divisor problem* paper by Wang, Jiang, Mu, and Fan — https://arxiv.org/abs/1707.06430.
- *Shor’s Algorithm* presentation by Bäumer, Sobez, and Tessarini — https://qudev.phys.ethz.ch/content/QSIT15/Shors%20Algorithm.pdf.
- *Riemann’s Hypothesis and Tests for Primality* by Miller (1976) — https://www.cs.cmu.edu/~glmiller/Publications/Papers/Mi76.pdf. Referenced by Shor’s original paper — “*It is known that using randomization, factorization can be reduced to finding the order of an element [Miller 1976]*”. Assumes the Extended Riemann Hypothesis (ERH).
- *Shor’s Algorithm and the Quantum Fourier Transform* by Fang Xi Lin of McGill University. An alternative description of Shor’s algorithm, but still doesn’t answer most of my questions. I used it to try to understand the quantum Fourier transform a little better, but my questions persist.
- *Scaling laws for Shor’s algorithm with a banded quantum Fourier transform* by Nam and Blümel. An alternative description of Shor’s algorithm as well as an adaptation using a *banded* QFT for much better performance.
- *Symmetry boosts quantum computer performance* by Nam and Blümel. Yet another adaptation of Shor’s algorithm to boost performance.
- *Quantum Computing Lecture Notes* by Mark Oskin of University of Washington. Yet another description of Shor’s algorithm, order-finding, QFT, phase estimation, and quantum computing in general.
- *Experimental Realization of an Order-Finding Algorithm with an NMR Quantum Computer* by Vandersypen, Steffen, Breyta, Yannoni, Cleve, and Chuang of IBM Almaden Research Center and Stanford. Early implementation (2000) of an order-finding algorithm using a nuclear magnetic resonance (NMR) quantum computer. A custom molecule implemented a 5-qubit quantum computer.
- by Coles, Eidenbenz, Pakin, et al.
Includes a reasonably succinct description of Shor’s algorithm including an implementation on a 5-qubit IBM quantum computer. Still leaves many questions unanswered.*Quantum Algorithm Implementations for Beginners*by Cooper. Critical examination of Shor’s algorithm. But still leaves many questions and issues unanswered. Vintage 2006.*A Re-evaluation of Shor’s Algorithm*by Alec Brickner. Student project for*Shor’s Algorithm in Pyquil*course at Stanford.*CS 269Q: Quantum Computer Programming*by William Burton. Student project for*Implementation of Shor’s algorithm in pyQuil*course at Stanford.*CS 269Q: Quantum Computer Programming*by Monz, Nigg, Martinez, Brandl, Schindler, Rines, Wang, Chuang, and Blatt. Factored 15 using a trapped-ion quantum computer with seven qubits. Technically, this is a derivative of Shor’s algorithm, not the exact same algorithm as published by Shor.*Realization of a scalable Shor algorithm*by Anschuetz, Olson, Aspuru-Guzik, and Cao. An alternative approach using QAOA (Quantum Approximate Optimization Algorithm), which is not based on Shor’s approach. On December 11, 2019 Zapata Computing and IBM announced (via Twitter) that they had factored 1099551473989 (1048589 x 1048601) using three qubits with a solution found 30% of the time, but few details given — not clear if they used the same exact approach of that paper.*Variational Quantum Factoring*
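As a reading aid while working through those references, the hybrid structure of the algorithm — classical outer loop, quantum order finding, classical post-processing — can be sketched in plain Python, with the quantum step replaced by a brute-force period search. This is my own illustrative sketch, not code from any of the references; the function names are mine, and it is only practical for tiny numbers such as 15:

```python
from math import gcd
from random import randrange

def find_period_classically(a, n):
    """Brute-force stand-in for the quantum order-finding step:
    the smallest r > 0 such that a**r mod n == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_outer_loop(n):
    """Classical outer loop of Shor's algorithm (illustrative sketch).
    A quantum computer would replace find_period_classically."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g, n // g          # lucky: a shares a factor with n
        r = find_period_classically(a, n)
        if r % 2 != 0:
            continue                  # need an even period; retry
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                  # trivial square root of 1; retry
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p
        q = gcd(y + 1, n)
        if 1 < q < n:
            return q, n // q

p, q = shor_outer_loop(15)
print(sorted([p, q]))                 # -> [3, 5]
```

The sketch makes the division of labor visible: everything here except `find_period_classically` runs on an ordinary computer, which is why the outer loop and post-processing can be studied (and questioned) independently of the quantum hardware.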

# What’s next?

I am continuing to slog through the material in those references, and I’ll continue to expand the set of references as well.

At some point I’ll have enough comprehension and certainty to provide deep answers in the form of an FAQ for Shor’s algorithm.

My focus is on informality and capturing raw information, not producing a formal paper. That said, at some stage the information contained herein may become suitable for something at least a little closer to a semi-formal paper suitable for posting on arxiv.org. Maybe. No commitments.

As soon as I complete a comparable list of questions for the inner, quantum portion of Shor’s algorithm, my intent is to post a separate informal paper which summarizes Shor’s algorithm at a basic introductory level, even if I still lack a complete comprehension of all of the finer nuances.

An intermediate effort on my part is my paper *Ingredients for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*, which is also a work in progress.

What I’m really waiting for is that *Aha!* moment when somebody announces a real breakthrough. At this stage, factoring even an 8-bit semiprime would be a big breakthrough. I would say that factoring a 32-bit semiprime would be the primary goal for claiming that serious progress is being made. In theory, if my current understanding is correct, that would require four times 32, or 128, qubits, plus the required connectivity, coherence time, and tolerance for quantum noise. But I don’t yet have sufficient confidence in my understanding of the basic algorithm and its derivatives to fully assert that as a specific goal.
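The qubit arithmetic behind that goal can be made concrete. A minimal sketch (my own), combining the rough 4n working estimate used in this paper with the 2n + 3 circuit from the Beauregard paper and the 9n + 3 circuit from the Pavlidis and Gizopoulos paper cited in the references — all counts are idealized logical qubits, ignoring error correction and noise:

```python
def qubit_estimates(n):
    """Logical-qubit estimates for factoring an n-bit semiprime under
    three circuit sizes: the 4n working estimate used in this paper,
    Beauregard's 2n + 3, and Pavlidis/Gizopoulos's 9n + 3 (which
    trades more qubits for fewer gate executions)."""
    return {"4n": 4 * n, "2n+3": 2 * n + 3, "9n+3": 9 * n + 3}

# 8-bit and 32-bit semiprimes as milestones, 768 bits for RSA-768.
for bits in (8, 32, 768):
    print(bits, qubit_estimates(bits))
```

So a 32-bit semiprime would need 128 qubits under the 4n estimate, but only 67 under the 2n + 3 circuit — one illustration of why the answer to “how many qubits?” is “it depends.”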

It will also be a significant milestone when a quantum computer can factor a semiprime larger than the largest number which has been factored using classical computers: 768 bits, corresponding to RSA-768 keys. But that record required a large number of classical computers operating in parallel. Still, the claimed theory is that a quantum computer effectively computes in parallel on its own, so additional computers should not be needed. In theory. But not yet proven in practice.

# Conclusion

There is no strict conclusion yet, just lots of questions to be answered.

Maybe someday Shor’s algorithm will indeed be able to reliably break strong public-key encryption, as it can in theory, but not anytime soon.