# Questions About Quantum Computing

This informal paper catalogs many of my questions about quantum computing. It’s not a true FAQ since it has little in the way of definitive answers or even any answers in many cases, at this early stage. Its primary purpose is to track my questions and guide my pursuit of deeper knowledge about quantum computing. Eventually it may (hopefully) evolve into more of a true FAQ with definitive answers, but not in the very near future.

This will be a work in progress with new questions and sometimes even provisional answers added as my exploration of the field progresses.

The initial collection of questions was extracted from informal papers that I had previously posted, including:

- *What Knowledge Is Needed to Deeply Comprehend Quantum Computing?*
- *Quantum Computing Glossary — Introduction*
- *Some Preliminary Questions About Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*

There is no particular order to the questions. They were generally written in the order they popped into my head. Generally, new questions will be added to the end of the list, unless I happen to identify some significant relationship to an existing question.

In many cases I may already think that I have the answer to a question, but I’m reluctant to claim a definitive answer until I gain more confidence with the field.

# Glossary

Quite a few questions about quantum computing, from this list and in general, are already answered to at least some degree in my *Quantum Computing Glossary*, with over 2,800 entries.

Nonetheless, this list of questions will remain open and eventually evolve into a true FAQ with definitive answers.

Unfortunately, quite a few of the glossary entries are marked fully or partially TBD (*To Be Determined*), so they should be viewed as being as open as any of the questions on the list in this paper.

# Plain English, please

Quantum computing is fraught with very quirky jargon, mostly inherited from physics and quantum mechanics in particular. That’s all well and good, but one of my goals is to demystify quantum computing, attempting to rephrase jargon in plain English to the extent possible.

Or at least something closer to the jargon of classical computing.

# Quantum Computing Stack Exchange

Yes, there is already a thriving community for questions and answers about quantum computing on *Stack Exchange* — *Quantum Computing Stack Exchange* — but the quality of both questions and answers varies so widely that I find it to be of limited value. Sometimes I do find answers there, or at least enough hints to give me a greater appreciation of the nature of a question, but the net result is that I find the overall effect somewhat unsatisfying.

I also find that many of the respondents on QCSE are more prone to stick with the ultra-esoteric jargon and symbolism of quantum computing rather than attempting to translate to plain English. That’s fine for its purpose, but a big part of my purpose is to demystify the field, translating to either plain English or the jargon of classical computing as often as possible.

And, generally speaking, questions on Stack Exchange do need to be reasonably specific rather than vague, general, abstract, or open-ended, as many of my own questions are.

Nonetheless, it is a decent resource to consult, especially since I currently have so few answers in my own list of questions.

And many of the questions on Stack Exchange are even more specific and detailed — and directly useful — than many on my own list.

*Quora* also has a thriving community for *Quantum Computing* questions and answers, although the quality is probably even more variable than on Stack Exchange.

# The questions

So, here are my questions, again in no particular order:

- What is a quantum computer? Basic definition, but technically robust and free of hype or vagueness. See *Quantum Computing Glossary*.
- What is quantum computing? Basic definition, but technically robust and free of hype or vagueness. See *Quantum Computing Glossary*.
- What is quantum computation? Basic definition, but technically robust and free of hype or vagueness. See *Quantum Computing Glossary*.
- How is quantum computing distinct from traditional digital computing?
- What can a quantum computer compute that a traditional digital computer cannot?
- What can a quantum computer do better than a traditional digital computer?
- Is speed the only truly significant advantage of a quantum computer?
- Is the concept of *digital* even relevant to quantum computing?
- Do quantum and traditional digital computers have more in common or more that differentiates them from each other?
- Is a qubit still considered *digital*?
- Is a qubit still considered *discrete*?
- Is a quantum computer still a *digital* computer?
- What precisely does *digital* mean?
- What precisely does *digital computing* mean?
- Is a quantum computer still a discrete computer (a la digital) or can it compute continuous data as an analog computer does?
- Can a quantum computer compute directly from continuous values from analog sensors, such as a voltage, current, temperature, audio, and video, or is an intermediate conversion to a discrete or digital value needed?
- How does a quantum computer handle analog to digital and digital to analog conversions?
- What operations can a quantum computer perform compared to operations that a traditional digital computer can perform?
- What data types, operators, and functions does a quantum computer support, compared to a traditional digital computer?
- Does a quantum computer perform Boolean logic (AND, OR, NOT with *true* and *false* — not to be confused with binary digits of 0 and 1), such as evaluating complex conditions, comparable to a traditional digital computer?
- Does a quantum computer have a processing unit comparable to the central processing unit (CPU) of a traditional digital computer?
- Does a quantum computer have an arithmetic and logic processing unit comparable to the arithmetic and logic unit (ALU) of a traditional digital computer?
- Does a quantum computer have registers for small amounts of data comparable to a traditional digital computer?
- Does a quantum computer have memory (for large volumes of data) comparable to a traditional digital computer?
- How is memory of a quantum computer (for large volumes of data) organized?
- Do quantum computers support virtual memory and paging?
- Does a quantum computer have a byte or word size or data path width comparable to a traditional digital computer (8, 16, 32, 64, or 128 bits)?
- Does a quantum computer have an address or pointer width comparable to a traditional digital computer? What width or range?
- Can a quantum computer compute values which cannot be represented on a traditional digital computer?
- How does a quantum computer represent real numbers (non-integers)?
- What are the largest and smallest real numbers (non-integers) that a quantum computer can represent?
- How many digits of precision can a quantum computer represent and compute for real numbers (non-integers)?
- What number base does a quantum computer use for real numbers (non-integers), 2, 10, or what?
- What does a quantum computer compute for 1.0 divided by 3.0, which has an infinite number of repeating digits?
- What does a quantum computer compute for 1.0 divided by 3.0 times 3.0: 1.0 or 0.9999…?
- Does a quantum computer use so-called floating point arithmetic for real numbers like a traditional digital computer? Base 2, or what?
- How does a quantum computer compute infinite Taylor series expansions, compared to a traditional digital computer?
- What will a quantum computer compute for SQRT(2.0), which is an irrational number with infinite digits? How will it compare to a traditional digital computer?
- Can a quantum computer calculate SQRT(-1), otherwise known as *i*, since quantum mechanics is based on complex and imaginary numbers?
- Do quantum computers have more than one precision (bit width) for representing real numbers (non-integers)?
- Does a quantum computer compute with complex numbers more efficiently than a traditional digital computer, especially since complex numbers are the basis for quantum mechanics, which quantum computers are supposedly based on?
- How much formal knowledge of quantum mechanics does one need in order to fully and deeply grasp all nuances of quantum computing?
- Can a quantum computer compute values which cannot be comprehended by a human being?
- How are quantum algorithms different from or similar to comparable algorithms of traditional digital computing?
- Can all, some, or no quantum algorithms be simulated (albeit much more slowly) on a traditional digital computer? What factors are involved in whether or how effectively any simulation can be performed?
- What slowdown factor can or should be expected when simulating a quantum computer (or quantum algorithm) on a traditional digital computer?
- What are the more common *design patterns* for algorithms for quantum computing?
- How practical is a quantum computer?
- How expensive is a quantum computer for a given task, especially compared to the cost of traditional digital computing?
- How much power (energy) does a quantum computer require for a given task?
- How large is a quantum computer for a given task?
- What technologies and knowledge are needed to design and produce a quantum computer?
- What physics knowledge is needed to design and produce a quantum computer?
- What physics knowledge is needed to understand how to use a quantum computer?
- What physics knowledge is needed to understand how to program a quantum computer?
- What mathematics knowledge is needed to design and produce a quantum computer?
- What mathematics knowledge is needed to understand how to use a quantum computer?
- What mathematics knowledge is needed to understand how to program a quantum computer?
- How much knowledge and skill with linear algebra (eigenfunctions, eigenvalues, Fourier transformations) is needed to be very skilled with quantum computing?
- How much knowledge and skill with vector spaces and quantum operators is needed to be very skilled with quantum computing?
- What kind of operating system is needed to run a quantum computer? What are the major components and features of a quantum operating system?
- Is a quantum computer more of a co-processor to be associated with a traditional general purpose digital computer, or can a quantum computer fully replace a traditional general purpose traditional digital computer?
- Does it make sense to speak of a grid of interoperating quantum computers or even a distributed cloud of quantum computers, or is each quantum computer a world of its own and unable to interact with another quantum computer except through an intermediary traditional digital computer or other custom traditional digital electronic or optical circuitry?
- Can quantum computers be networked comparable to local, wide area, and Internet networking of traditional digital computers?
- Does the concept of a website make sense for quantum computing? *[imaginary or complex web pages??!! Just kidding.]*
- Does the concept of networking protocols make sense for quantum computing?
- Would traditional network routers be relevant to networking of quantum computers?
- Can two or more quantum computers directly exchange qubits via quantum communication, or is some translation to and from digital format required to make the transition?
- Is coherence (technically, *quantum decoherence*) a fundamental limit or upper bound to quantum computing, or simply a short-term technical matter that will be resolved soon enough?
- What degree of coherence can be expected over the next few to five to ten years?
- Is room temperature quantum computing even theoretically practical? How soon?
- What temperature of quantum computing will be practical over the next few to five to ten years?
- How much data can a quantum computer process, such as a large database or search engine, compared to the disk, flash memory, and main memory of a traditional digital computer, now and for the next few to five to ten years?
- What applications are most suitable for quantum computing?
- Are all applications suitable for quantum computing?
- Are any applications particularly *unsuitable* for quantum computing?
- What specific criteria can be used to determine the degree of suitability of an application for quantum computing?
- Can an automated profiling tool be used to determine the degree of suitability of a particular application or algorithm for quantum computing?
- What programming languages can be used for quantum computing?
- What programming languages are best or optimal for quantum computing?
- Is there a machine language and assembly language for quantum computing?
- How similar or dissimilar are quantum computers from different labs, designers, or vendors?
- What components are standard across all or most quantum computers?
- Can quantum computers run on relatively small batteries, or do they need a robust AC power source?
- Do quantum computers use direct current (DC)?
- What voltage levels do quantum computers operate on?
- Is statistical processing the same or different for quantum computing in contrast with traditional digital computing?
- How would a quantum computer compute the median (*not* mean or average, although those are of interest too) of a very large set of numbers or character strings? How would the performance differ from a traditional digital computer?
- Are all aspects of mathematics equally applicable to quantum computing and traditional digital computing?
- Does cybersecurity apply equally to quantum computing as to traditional digital computing?
- What is the quantum computing equivalent of a traditional digital Turing machine?
- Would a quantum computer perform natural language processing (NLP) in a qualitatively better way than a traditional digital computer?
- What specific aspects of artificial intelligence is quantum computing especially well-adapted for?
- What debugging and testing features and tools does a quantum computer provide?
- Can a quantum program be single-stepped to see all state changes and logic flow? Or, can this at least be done in a simulator for the quantum computer? Presumably not for the former, hopefully so for the latter.
- Can the full state of a quantum computer be dumped or otherwise captured for examination, analysis, debugging, and testing, or does the Heisenberg uncertainty principle and superposition preclude this?
- Can a quantum algorithm be interrupted and its state saved and later restored to resume where it left off?
- How does fabrication of chips and circuits differ for quantum computing compared to a traditional digital computer?
- How is color represented in a quantum computer, compared to RGB and other color models used by traditional digital computing?
- Would quantum computers still use pixels for representing images and video?
- How would audio be represented and processed in a quantum computer?
- Is there a decent and comprehensive glossary for quantum computing? *Update:* I’ve compiled an initial draft for such a glossary: *Quantum Computing Glossary — Introduction*.
- Is there a decent and comprehensive glossary for all aspects of quantum mechanics that is needed to fully comprehend quantum computing? *Update:* I’ve compiled an initial draft for such a glossary: *Quantum Computing Glossary — Introduction*.
- Is there a decent and robust introductory overview of quantum computing? Puff pieces and hype not welcome. The Wikipedia entry is weak. Dense, academic textbooks have their place, but the question here is a decent introductory overview that is free of hype and adequately explains the technical differences from traditional digital computing.
- How much of quantum computing applications can be cost-effectively addressed using massively parallel grids of very small and very cheap traditional digital computers?
- Which is more cost effective, owning or leasing quantum computers?
- Is time-sharing and on-demand shared online access to quantum computers more cost effective and more efficient than either owning or leasing?
- How can the computational complexity of quantum computers best be described — polynomial (P), nondeterministic polynomial (NP), NP-Complete, NP-Hard, or what?
- What thought processes are needed to solve problems with a quantum computer, and how do they compare or contrast with the thought processes for solving problems with traditional digital computers?
- Is the concept of a user interface or user experience (UX) relevant to quantum computing?
- What logic gates are needed for quantum computing and how do they compare to the logic gates of traditional digital computing?
- What knowledge of logic gates is needed to develop algorithms and program applications for a quantum computer? Or are there more programmer-friendly higher-level language operators?
- What is the smallest quantum computing gate possible compared to the smallest traditional digital computing gate?
- What is the smallest qubit possible compared to the smallest traditional digital computing bit or memory cell?
- What is the fastest quantum computing gate possible compared to the fastest traditional digital computing gate?
- What is the fastest qubit possible compared to the fastest traditional digital computing bit or memory cell?
- What is the shortest quantum computing gate connection possible compared to the shortest traditional digital computing gate connection?
- What is the thinnest quantum computing gate connection possible compared to the thinnest traditional digital computing gate connection?
- Are there frameworks for quantum computing applications?
- Does the concept of software still apply for quantum computing? How might it differ from traditional digital computing?
- Does the concept of the software development life cycle (SDLC) (or systems software development life cycle) still apply to quantum computing? How might it differ from traditional digital computing?
- Does the concept of software engineering still apply to quantum computing? How might it differ from traditional digital computing?
- What problems are easiest to formulate for a quantum computer?
- What’s the preferred abbreviation for the term quantum computer?
- Is qbit still an acceptable variant of the term qubit?
- Is there a precise, correct technical term for each of the superposition terms or states in the value of a qubit? What’s a piece of a qubit?
- What is the precise technical term for a memory cell that holds a qubit as opposed to the value which is held in such a memory cell?
- How much energy is needed to represent a single qubit?
- How much energy is needed to represent a single value of a superposition in a qubit?
- Is quantum computing inherently a lot more energy efficient than traditional digital computing?
- Is superposition essentially free and cheap, or inherently costly?
- What is the technical definition of a quantum processor? How does that differ from a full quantum computer? Block diagram please.
- Is the cloud the best place for a quantum computer to live, as opposed to the office, your desk, or your hand? What are reasonable places for a quantum computer to live?
- Any github repositories for open source quantum computing work? Algorithms, code, libraries, etc.
- Are there any open source quantum computer designs?
- What are some examples of what a quantum computer could do given Hubble space telescope data (or other space telescopes)? Would they work best with the raw data or would some preprocessing be needed? Would one or more quantum computers be needed for all processing of space telescope data?
- Who are the top quantum computing vendors?
- Who are the hot startups in the quantum computing space?
- What are the use cases for quantum computing? Types, categories, or styles of applications.
- Who might use a quantum computer?
- How might someone use a quantum computer?
- Are quantum computers really probabilistic rather than deterministic? What does that actually mean? How does that limit or expand capabilities and applications?
- Is general-purpose quantum computing an oxymoron? Can it ever be achieved? Will it never be achieved? Based on what thinking, rationale, or logic?
- Can a Turing machine be simulated (or just implemented) on a quantum computer?
- Can finite state automata and pushdown automata be simulated or just implemented on a quantum computer?
- What stage of development has quantum computing achieved? Compared to traditional digital computing, what year or decade is quantum computing at — 1950, 1960, 1940, 1930??
- What technological, theoretical, and physical issues are constraining quantum computing at this stage?
- In what areas are dramatic breakthroughs required before quantum computing can come out of the shadows?
- How deeply does one need to comprehend Bell’s inequality to deeply comprehend quantum computing?
- Are all forms of quantum computers based on spin states? Could a quantum computer be based on a quantum state other than spin? Is spin only one of many possibilities for quantum computing?
- How much does one need to know about magnetism to comprehend quantum computing?
- Is the knowledge needed to master quantum computing substantially less than the knowledge needed to master traditional digital computing?
- Would it be substantially *easier* for young students to learn *only* quantum computing?
- Might it be *better* for young students to start with *only* quantum computing?
- Is a hybrid world the inevitable reality for the foreseeable future?
- Is a hybrid of quantum computing and traditional digital computing a really good thing, a really bad thing, or an indeterminate mixed bag?
- Is SQL and the relational database model equally relevant for quantum computing, less relevant, or even more relevant?
- Will quantum computing mean the end of strong encryption?
- How credible is most of the hype about quantum computing?
- How much of the narratives about the promise of quantum computing is credible or has been proven to be true?
- How large a program can a quantum computer accommodate? How many instructions or operations, or lines of code?
- How complex can a quantum algorithm be?
- Do quantum computers support function calls and recursion?
- Do quantum computers support arrays, structures, and objects?
- How is code represented in a quantum computer? Same as in a traditional digital computer (bits) or using quantum computing qubits?
- Does a quantum computer have the equivalent of processes and threads of traditional digital computing?
- Does or could a quantum computer have a file system?
- Does or could a quantum computer have mass storage? Is it persistent or transient (only while a program is running)?
- How fast can a quantum computer count?
- How fast can a quantum computer divide two numbers? Such as to verify whether X is a factor of Y.
- What arithmetic and mathematical operations are most natural for a quantum computer?
- Do quantum computers have any special advantage for arithmetic and mathematics, other than raw speed of basic operations?
- How much of information theory (Shannon, et al) is still relevant to quantum computing? Or how might it be different (e.g., quantum communication)? Consider *From Classical to Quantum Shannon Theory* by Wilde.
- How consequential is the length of interconnections between qubits? Does packing them much more densely yield a large benefit? Does separating them a modest amount have a large penalty?
- Might wafer-scale integration (WSI) have any significant benefit, performance or otherwise?
- Does a quantum computer have to operate within a Faraday cage, to shield from ambient electromagnetic radiation? Is just the core quantum computer itself, the qubits, enclosed within a Faraday cage, or does the whole operating environment or entire room have to be in a Faraday cage?
- Do qubits require Josephson junctions, or is that just one option?
- Pure vs. mixed states — is the latter superposition or not?
- Whether coupling and entanglement are exact synonyms or whether there are differences.
- Crispness and distinction of overlap between quantum noise and decoherence.
- Whether |0> is necessarily the ground state for a qubit, or whether that is merely a convention or whether it is a choice of the designer of a particular quantum computer.
- Whether measurement is guaranteed to collapse the wave function of a qubit, or just generally or only in some cases — the IBM Q documentation suggests that their readout resonator can in fact read the quantum state of a qubit without disturbing its state.
- Whether more than two qubits can be entangled together.
- How many independent pairs of qubits can be simultaneously entangled.
- Nuances of behavior which may exist on a simulator but not on a real quantum computer, and vice versa.
- Whether a wave function across all qubits makes sense, or just for individual and entangled qubits.
- Whether decoherence time is driven by dissipation of the trapped microwave photon in the resonator (at least on current IBM Q machines) vs. environmental EMR noise.
- What “state of matter” does a single particle have, such as a free electron, single atom, trapped ion, or single molecule.
- Are probability amplitudes real or complex?
- What constitutes fully entangled, since not all qubits can be entangled on a particular quantum computer? Is it simply all pairs, or all permitted pairs, or… what?
- Specific meaning of each type of logic gate, including Pauli gates, and how to express how they can be used for real-world applications and algorithms.
- What precisely constitutes a Clifford gate and Clifford group, in plain English?
- Are rotation angles quantized or not?
- Are phase shift angles quantized or not?
- How precise a value of pi, pi/2, and pi/4 (number of digits) are needed to achieve correct results for logic gates?
- Is superposition represented in a Bloch sphere with a single vector, or two shorter vectors?
- Does a Bloch sphere represent only a single qubit, or can it be used to represent the state vectors of all qubits of a quantum computer in one sphere?
- How does a Bloch sphere represent the state of a pair of entangled qubits, or since their state is identical, do they share the same sphere, or have two separate but identical spheres?
- Is there a Planck-type limit to the amplitude of a state in the wave function of a quantum system, such that there is a maximum number of states in a quantum system?
- How isolated do two particles, waves, or quantum systems have to be to constitute isolated quantum systems, or how close can two qubits be placed and still be considered isolated?
- What does the T of T-gate stand for?
- Do the controlled logic gates require that the control qubit be in a pure |1> state, or some threshold of amplitude, or is control strictly probabilistic, such as equal probability of control if, for example, a Hadamard gate was just executed on the control qubit?
- Is there an actual MEASURE logic gate, or is there some other method for measuring the state of a qubit?
- Is there an actual RESET logic gate, or is there some other method for preparing the state of a qubit to be |0>?
- What does the S gate do (phase shift by 90 degrees — twice the rotation of a T gate, and half the rotation of a Z gate)?
- What constitutes a *computational basis*?
- Are the |0> and |1> basis states supposed to be orthonormal? In the Bloch sphere they are opposing vectors, |0> pointing to the north pole and |1> pointing to the south pole, but NOT at right angles to each other — so are they really an “orthonormal basis”?
- Are there any current quantum computers which support multipartite entanglement — 3 or more qubits in a single entanglement? How common is it?
- Are there any current quantum computer simulators which support multipartite entanglement — 3 or more qubits in a single entanglement? How common is it?
- Are the eigenvalues (probability amplitudes) for the eigenvectors (basis states) of the quantum state of a qubit in a quantum computer ever complex, with an imaginary component, or always real? Where are some examples, and when does this occur?
- What is the effect, if any, of executing another Hadamard gate after a Hadamard gate — is it essentially an identity operation (no-op), or is there some effect?
- What is the effect, if any, of executing another CNOT gate after a CNOT gate — does it simply flip the target qubit back, or what? Does it retain entanglement? Does it maintain the same Bell state, or does it flip to the complementary (?) Bell state?
- What does the Ising logic gate really do, in plain language? Is it supported by any transmon qubit machines, or does it apply only to trapped-ion machines?
- Are the probabilities for superposition states always identical (symmetric), 0.5, or can they be asymmetric, like 0.75 and 0.25? What logic gates would have such an effect?
- Are the initial quantum states of qubits always |0> by definition, forced by the hardware, or are they unpredictable or undefined, by definition, or does this vary depending on the whim of the designer of the machine, and is there a recommended best practice for initializing qubit state? Or is it up to the operating/control software or control firmware to artificially force the initial state before beginning execution of a new quantum program or circuit, and is there a standard convention and predictable expectation of initial state for a new program, especially for a quantum cloud service?
- Is there a logic gate which dis-entangles two qubits? Do they start with the same exact same state, or each take part of the state or does the state collapse into two distinct states, or what? Is there a different answer for each of the four Bell states?
- If the target qubit of CNOT is already entangled with a different qubit, what exactly happens, both with the target qubit and the other qubit it was already entangled with? Such as the sequence H(2), CNOT(2, 3), H(1), CNOT(1, 2).
- What are the standard, recommended logic gates for setting (preparing) a qubit for |0> and |1>?
- Is CNOT the only gate which creates an entanglement, or do all controlled gates create an entanglement, or any other gates? CCNOT?
- If all three qubits of a CCNOT (controlled-controlled-NOT) logic gate are entangled, but not with each other, what can be expected of the quantum state of the target qubit after the gate has been executed? And what state will the other qubits which had been entangled with the two control qubits be in after the CCNOT has been executed?
- Can you measure both states of a qubit using temporary entanglement — entangle to capture state, then disentangle to permit measurement without disrupting the original qubit? Even if that is so, can that work if the target qubit is already entangled and if only bipartite entanglement is supported?
- What is a universal quantum computer? Does it simply perform all of the operations defined for quantum computing (universal within quantum computing), or must it necessarily perform all operations of a classical computer as well?
- What is *expectation value* and how does it factor into quantum computation? When is it used and how is it used?
- Will quantum computers break bitcoin and other cryptocurrencies?
- How many qubits does a quantum computer need to factor a number of n bits which is the product of two prime numbers (a *semiprime*)? What are the qubit requirements for 8-bit, 16-bit, 32-bit, 64-bit, 128-bit, 256-bit, 512-bit, 768-bit, 1024-bit, 2048-bit, and 4096-bit integers? And 8192-bit and 16384-bit integers as well. My preliminary understanding is that Shor’s original algorithm would require at least four times as many qubits as the input number, so that factoring a 4096-bit semiprime would require at least 16,384 qubits. Derivative algorithms could require significantly fewer qubits. One paper, by Beauregard, reports requiring only 2n + 3 qubits — 8,195 qubits for a 4096-bit semiprime. Another algorithm, by Pavlidis and Gizopoulos, requires a lot more qubits, 9n + 3, but fewer gates must be executed. [TBD: find paper for 3n qubits, which I recall seeing at some point, I think]
- What connectivity of qubits (entanglement) is required by Shor’s algorithm? Current quantum computers have very limited connectivity.
- What coherence time is needed by Shor’s algorithm? Is it constant or does it vary by length of the input?
- What circuit depth is needed by Shor’s algorithm? Is it constant or does it vary by length of the input?
- What is the total count of quantum logic gates required based on the length of the input number?
- How many of the total quantum logic gates can be executed in parallel, reducing the circuit depth?
- How tolerant or susceptible to quantum noise is Shor’s algorithm? Is quantum error correction (QEC) ultimately needed, or can the full algorithm simply be run repeatedly to statistically assure a correct result?
- What running time can be expected based on length of input?
- Should input length be measured in decimal digits or binary bits? The algorithm uses the latter (binary), but some descriptions use the former (decimal). I’ll try to stick with binary bits in my own descriptions.
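Since some descriptions measure input length in decimal digits and others in binary bits, a quick conversion sketch may help; the conversion factor log2(10) ≈ 3.32 bits per decimal digit is standard arithmetic, though the helper names here are mine:

```python
import math

def digits_to_bits(d):
    """Approximate number of binary bits needed for a d-digit decimal number."""
    return math.ceil(d * math.log2(10))

def bits_to_digits(b):
    """Approximate number of decimal digits representable in b bits."""
    return math.ceil(b * math.log10(2))

print(digits_to_bits(100))   # a 100-digit number needs about 333 bits
print(bits_to_digits(2048))  # a 2048-bit key has about 617 decimal digits
```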
- Even if a quantum computer has enough qubits, are the total quantum logic gate count and circuit depth likely to be too large to be executed completely and successfully before the limit of quantum coherence time is reached?
- What is the trajectory of hardware advances which are required to solve factoring of each of the milestone key lengths, especially 64-bit, 256-bit, 512-bit, 768-bit, 1024-bit, 2048-bit, and 4096-bit?
- What is the expected trajectory of hardware advances for quantum computers over the next two, five, seven, ten, twelve, fifteen, and twenty years?
- What other characterizations for performance are needed for factorization, or any other quantum algorithm for that matter? For classical algorithms we normally focus on running time alone, and sometimes possibly memory, but for quantum computers we need to be concerned with qubit count, circuit depth, total quantum logic gates, parallel execution of gates, and quantum coherence time. A single Big-O is insufficient to adequately characterize either performance or the total hardware needed to execute an algorithm. We need a bunch of Big-O’s.
- What is the largest number which has been factored by Shor’s algorithm to date? I’ve seen references to 15 and 21. There are references to factoring 143 and larger numbers, but they were not achieved using Shor’s algorithm, supposedly. Have there been any recent advances which I am not aware of?
- What is the largest number which can be factored by Shor’s algorithm on a quantum simulator? In say a second, minute, hour, two hours, four hours, eight hours, twelve hours, day, two days, one week?
- Can simulation of Shor’s algorithm be split and distributed to multiple classical computers? Given that the outer loop is based on generating random numbers, that would seem possible — give each distributed machine a different range of random numbers. That won’t ultimately compete with a quantum computer which is fully parallel, but would scale up simulation by a healthy fraction.
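The range-splitting idea in the preceding question can be sketched in a few lines; `split_range` and its parameters are hypothetical names for illustration, not any actual distributed-simulation API:

```python
def split_range(lo, hi, num_workers):
    """Split the candidate-base range [lo, hi) into near-equal sub-ranges,
    one per simulated worker (a sketch of the distribution idea above)."""
    step, rem = divmod(hi - lo, num_workers)
    ranges, start = [], lo
    for i in range(num_workers):
        end = start + step + (1 if i < rem else 0)  # spread the remainder
        ranges.append((start, end))
        start = end
    return ranges

print(split_range(2, 22, 4))   # [(2, 7), (7, 12), (12, 17), (17, 22)]
```

Each worker would then run the simulated inner loop only over its own sub-range of candidate random numbers.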
- Can a mere mortal code the general implementation of Shor’s algorithm for input of arbitrary size or some large but fixed size, or is a relatively sophisticated classical computer program needed to algorithmically generate the quantum logic circuit (quantum program)? Seems like the latter, at this stage.
- Is the same quantum logic circuit used for each iteration of the outer loop, or must it be regenerated for each iteration? Other than the *preparation* logic gates, that is, since they depend on the specific random number being evaluated.
- What are all of the derivative algorithms which are closely based on Shor’s algorithm? Some are listed in the *References* section of this paper.
- Are any of the derivative algorithms so superior that Shor’s original algorithm is now considered no longer relevant?
- Is a deep comprehension of Shor’s original algorithm essential to deeply comprehending each of the derivative algorithms? Is Shor’s still a reasonable *benchmark* for starting any discussion of quantum factorization?
- What (open source) *reference implementations* of Shor’s algorithm and its derivatives exist? Have any of them been validated, by whom, and how?
- Is there any *definitive reference implementation* of Shor’s algorithm?
- How well-behaved is Shor’s algorithm for pathological cases such as 0, 1, 2, even numbers, pure-prime numbers, prime powers, more than two factors, large n-bit values of 0, 1, 2, 2^n minus 1?
- Does Shor’s algorithm return both or all factors of the input number, or just one or the first factor discovered, with the expectation that the caller can either continue to factor the remainder of the number or simply divide the number by the one factor to get the other factor? The descriptions are somewhat vague in this area.
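For the pathological cases mentioned above, typical descriptions of Shor’s algorithm assume some classical pre-screening before the quantum stage is invoked. Here is a sketch of what that screening might look like; `classical_prechecks` is my own hypothetical name, and the float-based root test is only valid for small inputs:

```python
def classical_prechecks(N):
    """Classical screening for pathological inputs before invoking quantum
    order finding. Returns a trivial factor if one is found, else None.
    (A sketch only: the float k-th root works only for small N.)"""
    if N < 2:
        return None                   # 0 and 1 have no nontrivial factors
    if N % 2 == 0:
        return 2                      # even numbers are trivially factored
    # Check whether N is a pure prime power p^k, another case Shor skips.
    for k in range(2, N.bit_length() + 1):
        p = round(N ** (1.0 / k))
        for cand in (p - 1, p, p + 1):
            if cand > 1 and cand ** k == N:
                return cand
    return None                       # proceed to quantum order finding

assert classical_prechecks(15) is None   # 15 = 3 * 5 needs order finding
assert classical_prechecks(16) == 2      # even
assert classical_prechecks(27) == 3      # prime power 3^3
```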
- How purely random do the random numbers used within the classical portion of the algorithm need to be? Is any classical pseudo-random number generator sufficient? Is a quantum computer needed to generate the random numbers? What techniques are recommended or required, particularly for the very large numbers of 1024-bit, 2048-bit, and 4096-bit length?
- Does Shor’s algorithm potentially suffer from the classical Turing halting problem — that the random number generation could presumably fail to hit on a number whose period leads to a factor?
- Can quantum noise cause the inner, quantum portion of the algorithm to fail in a way that causes the outer loop to continue past what should have been a solution, leading to the Turing halting problem? Might it be sufficient simply to rerun the inner, quantum logic circuit repeatedly to statistically assure a correct result? Might there be quantum noise failure modes of such complexity that even a large number of runs fails to statistically converge on the correct result for the order of a given random number?
- Could the outer loop of Shor’s algorithm ever be made to run directly on a quantum computer rather than requiring a classical computer as originally envisioned? Possibly, but that has yet to be demonstrated.
- Could the post-processing stage of Shor’s algorithm ever be made to run directly on a quantum computer rather than requiring a classical computer as originally envisioned? Shor does say “*While this post-processing could in principle be done on a quantum computer, there is no reason not to use a classical computer if they are more efficient in practice*”, but doesn’t appear to have done the detailed analysis to show how this would be possible, especially with the GCD calculations, which require looping, which quantum computers do not have, although Wang, Jiang, Mu, and Fan have a paper on quantum GCD. But there is more to the outer loop than the GCD calculations alone. It may well be possible to do the entire algorithm in pure quantum processing, but nobody appears to have demonstrated that result, so far.
- How efficient are classical algorithms for the GCD function for very large integers, such as 1024, 2048, 4096, and even 8192-bit numbers? As originally proposed, all GCD calculations are in the non-quantum classical stages of the algorithm.
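For the classical side of the GCD question, at least, the answer appears to be “very efficient”: Euclid-style GCD on even 8192-bit integers is effectively instantaneous on commodity hardware. A quick sketch using Python’s built-in `math.gcd`:

```python
import math
import random

# Classical GCD on very large integers is cheap: Euclid's algorithm needs
# only O(n) division steps for n-bit inputs, so even 8192-bit operands
# are handled essentially instantly.
rng = random.Random(42)
a = rng.getrandbits(8192)
b = rng.getrandbits(8192)
g = math.gcd(a, b)
assert a % g == 0 and b % g == 0   # g divides both inputs

# A small worked case of the kind Shor's post-processing makes,
# GCD(x^(r/2) - 1, N): gcd(48, 15) = 3.
assert math.gcd(48, 15) == 3
```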
- How many iterations of the outer classical loop can be expected for various input lengths? What does the distribution look like? What is the worst case — and is it even remotely possible that the algorithm might never halt? One paper, Grosshans, Lawson, Morain, and Smith, reports a derivative algorithm for “safe” semiprimes which requires only a single iteration.
- What prevents the outer loop from being exponential, some multiple or fraction of 2^n (for n-bit input numbers) iterations? How many random numbers must be generated to assure that one of the two factors is reached?
- What is the distribution of random numbers which can be raised to various powers such that the order (power) will test a multiple of one of the two prime factors of the input number? Many? A few? Only one? Is even one guaranteed (the traditional Turing halting problem)? Is there a theorem, proof, or principle for this belief?
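For tiny inputs, the distribution asked about above can simply be enumerated by brute force. The sketch below, using my own hypothetical helper names, computes the multiplicative order of every candidate base for N = 15 and applies the skip rules (odd order, x^(r/2) ≡ -1) and the GCD post-processing:

```python
import math

def order(x, N):
    """Multiplicative order of x mod N (brute force; x must be coprime to N)."""
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def yields_factor(x, N):
    """Does base x produce a nontrivial factor via Shor's post-processing?"""
    if math.gcd(x, N) != 1:
        return True                     # lucky: the gcd itself is a factor
    r = order(x, N)
    if r % 2 == 1:
        return False                    # odd order: base is skipped
    y = pow(x, r // 2, N)
    if y == N - 1:
        return False                    # x^(r/2) = -1 (mod N): skipped
    return math.gcd(y - 1, N) > 1 or math.gcd(y + 1, N) > 1

N = 15
good = [x for x in range(2, N - 1) if yields_factor(x, N)]
print(good)   # the bases in [2, N-2] that lead to a factor of 15
```

In this tiny case every base from 2 through 13 happens to yield a factor, while 14 (which is -1 mod 15) is skipped; whether anything like that density holds for cryptographic-size semiprimes is exactly the open question.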
- What is the gist of the rationale for believing that *period finding* is the best way to factor large semiprimes? An expert in the field of number theory may know the answer, but it has yet to be articulated clearly and plainly in the papers on quantum factorization.
- Could the period for a candidate random number be so large that the quantum probability might be too small to reliably detect? Could this happen only for large random numbers relatively close in value to the input number, or could it happen for more modest candidate random numbers less than a fraction of the size of the input number?
- Are there any mathematical proofs for any of these questions?
- Can Shor’s algorithm be split and distributed so that more than one quantum computer can work on the same input value?
- Is there a standard or convention for how to number the bits of the input number for Shor’s algorithm? Is the index zero for the low-order or the highest-order bit? Do the rules and conventions of classical computers apply? I did see one paper which placed the least significant bit at the left end of the quantum representation, the opposite of the classical convention.
- Is the *Chinese remainder theorem* required to comprehend Shor’s algorithm? Some descriptions refer to it, but some do not.
- How much *number theory* is needed to fully comprehend Shor’s algorithm?
- What theorem, proof, or principle assures that X^R for some random number less than N of order R will fall between N² and 2 * N²? I don’t doubt that this may likely be true, but what guarantees it — since Shor’s algorithm requires it?
- How many random numbers will lead to a modular exponentiation period which results in one of the two factors of the input number? Exactly one? More than one? Potentially a significant number? Is it very rare or relatively common? What theorem, proof, or principle answers this question?
- What theorem, proof, or principle requires that at least one random number will lead to one of the two factors of the input number? Does a random number exactly one greater than either of the two factors guarantee success?
- Is there any significant distinction between the concepts of *order* and *period*, or *order-finding* and *period-finding*? They seem to be used interchangeably, as if they were synonyms, but is there some technical distinction or nuance between them? Does it make sense to refer to the *order* of a *period*?
- What is the distribution of values for *order* (*period*) based on the length of the input number? Mostly relatively short, small numbers? Relatively long, big numbers?
- Will *order* ever be so potentially large that a *quantum amplitude* is not large enough to measure in a detectable manner?
- If the probability of hitting a factor using a given random number is as high as some claim (0.50 or 50%), why not just start with 2, 3, 4, …, as the trial “random” numbers and simply increment by 1 to iterate, or are some ranges in 1 to N-1 more fruitful or less fruitful than others? What is the proof, theorem, or principle that dictates that random numbers are needed and that sequential numbers up from 2 or down from N-1 are not equally sufficient?
- Given that quantum programs are probabilistic rather than deterministic, is there any absolute guarantee that the inner, quantum algorithm will always produce a value for *order* such that GCD(x^(r/2) +/- 1, N) will result in one of the two factors?
- If either GCD(x^(r/2) - 1, N) or GCD(x^(r/2) + 1, N) is not equal to 1, indicating that it is a factor, will the other GCD always also evaluate to be the other factor, or could only one of the two be not equal to 1? In other words, do GCD(x^(r/2) - 1, N) and GCD(x^(r/2) + 1, N) correspond precisely to p and q of p * q = N — when one of them does indeed correspond to p or q, that is?
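As one worked instance of that question, take the classic small case N = 15 with base x = 7, whose order mod 15 is r = 4; here both GCDs do yield the two factors, though a single example of course proves nothing in general:

```python
import math

# Worked small case: N = 15, x = 7, order r = 4 (since 7^4 = 2401 = 160*15 + 1).
N, x, r = 15, 7, 4
assert pow(x, r, N) == 1            # confirm r really is the order

y = pow(x, r // 2, N)               # x^(r/2) mod N = 49 mod 15 = 4
p = math.gcd(y - 1, N)              # gcd(3, 15) = 3
q = math.gcd(y + 1, N)              # gcd(5, 15) = 5
print(p, q)                         # both GCDs yield factors in this case
assert p * q == N
```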
- What range of values are generated for the pair of values for a public key of a given length? Is there a minimum value and a maximum value, such that the range of the two factors is significantly less than half the length of the input number for Shor’s algorithm?
- Is there any preferred usage for letters for variables in Shor’s algorithm? Shor used x for the random number and r for the period (order), but I’ve seen a and m for the random number and P for the period as well. It gets confusing. And then there’s n vs. N for the input number.
- What happens if the *order* (*period*) for a random number is *odd*? The algorithm seems to simply skip such numbers. Is that simply confirming that some numbers have no chance of pointing towards a factor? What theorem, proof, or principle assures that such random numbers can be completely ignored, as opposed to them possibly requiring more specialized treatment? I do note that the algorithm does require *even* periods since it uses x^(r/2) so that (x^(r/2) - 1) * (x^(r/2) + 1) = x^r - 1, which requires r to be even.
- What is the rationale, significance, and consequence of the algorithm skipping random numbers for which x^(r/2) = -1 (mod N)? None of the descriptions I’ve read of the algorithm are completely clear on this condition.
- I’ve seen (dubious) references suggesting that *Grover’s algorithm* can also be used for factoring. Is that really true?
- Can Shor’s algorithm be used or adapted easily for quantum computers which use more than two states for each quantum value, such as a three-state *qutrit* or a ten-state *qudit*?
- Can Shor’s algorithm be used or adapted easily for photonic quantum computers using *qumodes* rather than two-state qubits?
- How long before a quantum computer is actually able to break strong encryption? Completely unknown. If 4n qubits are needed, that would require a 4096-qubit quantum computer to crack 1024-bit keys, an 8192-qubit quantum computer to crack 2048-bit keys, and a 16384-qubit quantum computer to crack 4096-bit keys. And that’s just the raw qubit count — coherence time, error rate, and entanglement requirements must also be met. Even at a Moore’s Law-like pace of doubling qubit-count every 1–2 years, the current size of 50–72 qubits (call it 64) would have to double to 128, 256, 512, 1024, 2048, 4096 to first crack 1024-bit keys. That would be six doublings, taking 6–12 years. Another doubling would crack 2048-bit keys in 7–14 years. And a final doubling would crack 4096-bit keys in 8–16 years. Again, all of that presumes that coherence time, error rate, and entanglement advance rapidly as well. In short, 2048-bit keys might be at risk in 7 years, 4096-bit keys in 8 years. Or using the averages of the ranges, 10 and 12 years, respectively for 2048 and 4096-bit keys. At this stage, nobody considers 1024-bit keys to be protected strongly, although technically they still are reasonably well-protected.
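The doubling arithmetic in the preceding question can be sketched directly; this assumes, as the question itself does, a 64-qubit starting point, a 4n-qubit requirement, and one doubling every 1 to 2 years, none of which are established facts:

```python
import math

def doublings_needed(start_qubits, target_qubits):
    """Number of doublings to grow from start_qubits to at least target_qubits."""
    return math.ceil(math.log2(target_qubits / start_qubits))

# Rough assumptions from the question above: ~64 qubits today, 4n qubits
# required to factor an n-bit key, one doubling every 1-2 years.
for key_bits in (1024, 2048, 4096):
    d = doublings_needed(64, 4 * key_bits)
    print(f"{key_bits}-bit key: {d} doublings, roughly {d}-{2 * d} years")
```

This reproduces the 6/7/8 doublings (6-12, 7-14, and 8-16 years) quoted above.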
- How exactly does phase estimation fit into the calculation of the order of the random number in Shor’s algorithm, especially relative to modular exponentiation? What exactly is the phase that is being estimated? Trying to read Shor’s paper, these aspects are somewhat murky.
- The math for phase factors in the quantum Fourier transform in Shor’s algorithm seems to have an incredibly huge divisor, with q between N² and 2 * N², or roughly 2⁸¹⁹² for a 4096-bit encryption key, which seems virtually impossible to support on a real machine subject to real quantum physics. Is this really the case? Are these numbers which must be computed in the classical code and then passed to the quantum computer via a unitary matrix, which will have to translate a very, very small number into a real microwave signal which is similarly incredibly small? Is that correct? Actually, e^ix = cos x + i sin x (Euler’s formula), which would be extremely close to 1.0 for any very tiny x, not allowing much discrimination between the 8192 columns of the Fourier transformation matrix. On the other hand, for exp(2 pi i a c / q), some fraction of products of “a” and “c” would probably be close enough to the range of q to be not as small as many of the values in the 0 to 2⁸¹⁹² range, but the exact distribution is unclear. The reasoning for why this would all work out is unclear. Also, see the question related to Coppersmith’s suggestion for dealing with this issue.
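A quick numerical illustration of just how tiny these phase factors get; this only shows the classical arithmetic, of course, and says nothing about what real hardware can resolve:

```python
import cmath

# How far is the QFT phase factor exp(2*pi*i / 2^k) from 1.0 as k grows?
# The distance shrinks as roughly 2*pi / 2^k, which is why rotations with
# k in the thousands look physically implausible.
for k in (4, 10, 20, 53, 100):
    phase = cmath.exp(2j * cmath.pi / 2 ** k)
    print(k, abs(phase - 1))

# Note: around k = 53 the deviation approaches the resolution of
# double-precision floats, so even a classical simulator cannot
# represent these rotations distinctly from 1.0 for much larger k.
```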
- Has Coppersmith’s suggestion for an approximate Fourier transform that sidesteps the prospect of the very tiny phase factors which may not be practical for real quantum computers been fully incorporated into Shor’s algorithm, or is the brief mention of Coppersmith by Shor merely an indication of possible future improvement? If the latter, has anyone done that enhancement?
- How should the value of the parameter “m” for the Coppersmith approximate Fourier transform be determined? How can we confirm that a proposed value for m is reasonable? Does the reasonableness of m depend critically on the value of the input number to be factored or is it independent of the magnitude of the input number?
- Will the value of the “m” parameter (maximum power of 2 for the denominator of the phase factors) for the Coppersmith approximate Fourier transform be dependent on the particular quantum hardware or is it a general, abstract quantity that is based on quantum mechanics or mathematics in general and not dependent on the particular physical realization of the quantum computer?
- Which is the greater limitation for the Coppersmith approximate Fourier transform: the need for a higher m to capture the period/order in sufficient fidelity, the need for a low enough m to keep gate count low enough to fit within the coherence time of the quantum computer, or the maximum m that aligns with the precision of the phase of a real physical qubit which can be resolved and maintained in coherence? In other words, what minimum delta of phase is required to accurately calculate the order/period for very large input numbers (1K to 4K bits)?
- Does Shor’s algorithm depend on the Extended Riemann Hypothesis (ERH) which Miller’s paper, referenced by Shor, claims to depend on? What does it really mean to depend on or assume the ERH and how can we confirm whether Shor’s algorithm will work with this assumption?
- Shor notes that “*While the **Schönhage-Strassen algorithm** is the best multiplication algorithm discovered to date for large l, it does not scale well for small l. For small numbers, the best gate arrays for multiplication essentially use elementary-school longhand multiplication in binary*”, but he fails to delineate what constitutes a “small” versus a “large” number. Is small 4 bits, 16 bits, 64 bits, 256 bits, 1024 bits, or what? And he also fails to elaborate a quantum algorithm for *elementary-school longhand multiplication*. Presumably the classical code would conditionally generate the quantum logic circuits for one multiplication algorithm or the other based on the size of the number. Also, when using the multiplication algorithms to generate powers, would it be necessary to use both algorithms for the same base number, generating one algorithm for the smaller powers and then switching to generating the other algorithm for the larger powers of the same number, or should the classical code stick with the more sophisticated algorithm for all powers if it might be needed for any of the higher powers?
- Does Shor’s algorithm produce the order, r, exactly, or only a number relatively near r, even if a classical implementation of the same modular exponentiation would produce an exact r? His paper says “*we obtain r with probability at least φ(r)/3r*” (the totient function of r divided by three times r) and he has a discussion about two techniques for trying numbers relatively near the number produced by the quantum order-finding algorithm. So, it’s not clear what the exact algorithm should be, or how successful it is likely to be.
- In the paper by Beauregard, *Circuit for Shor’s algorithm using 2n+3 qubits*, the adder is based on a phase shift of exp(2 * pi * i / 2^k), but how big can k be before the phase of a qubit no longer has the resolution to discriminate 1/2^k? Maybe this works for k=8 to k=20, but for k=4096 it seems absurdly unlikely to resolve such an incredibly small phase increment. In essence, what is the minimum phase angle which can be discriminated in a real qubit?
- Am I correct when I read the paper by Beauregard, *Circuit for Shor’s algorithm using 2n+3 qubits*, as indicating a gate count of O(n³ * log2(n)) gates and a circuit depth of O(n³) gates? For n=4096 this would be n³ * log2(n) gates = 12 * 68 billion = 824,633,720,832 = 825 BILLION total gates, and a circuit depth of n³ = 68,719,476,736 = 68 BILLION gates! Really?!
- What is the smallest integer for which the total number of operations (both classical and quantum) needed to factor the integer using Shor’s algorithm is indeed less than the total number of operations needed to factor that integer using a classical computer (single CPU processor) alone? Presumably this is the same as asking what the largest integer is that can be factored more efficiently using a classical computer alone — especially considering the overhead of transitioning between the classical and quantum portions of Shor’s algorithm. Alternative answers are possible for variations and derivations from Shor’s original algorithm. Presumably multiprocessor and distributed classical computing clusters would reduce the elapsed time to factor larger numbers using classical computation alone, but would not reduce the total number of operations. Presumably one could also inquire about cost in terms of hardware acquisition cost, but that has no effect on the time required to factor very large numbers given only a reasonable hardware budget, such as the cost of the single quantum computer against which the distributed cluster must compete.
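The arithmetic behind those gate-count figures can be checked mechanically, taking the Big-O expressions at face value with unit constants, which is of course only a crude sanity check rather than a real resource estimate:

```python
n = 4096
depth = n ** 3              # circuit depth: O(n^3) with unit constant
gates = n ** 3 * 12         # total gates: O(n^3 * log2(n)); log2(4096) = 12

assert depth == 68_719_476_736      # ~68 billion
assert gates == 824_633_720_832     # ~825 billion
print(f"depth ~ {depth:,} gates, total ~ {gates:,} gates")
```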
- Since quantum computing is represented as being *probabilistic* rather than *deterministic*, is the quantum portion of Shor’s algorithm — order finding — really *guaranteed* to produce the correct result (if that is possible), or will *repetition* and *statistical evaluation* of the results be *required* to arrive at the order of the modular exponentiation? The original algorithm paper does not explicitly state that such repetition is *required*, nor even imply it, but reading between the lines and comprehension of the nature of quantum computing *seems to strongly suggest* that repetition and statistical evaluation are required. Further, there is no clear statement or guidance as to how to calculate how many repetitions might be needed, or what statistical criteria to use to evaluate the results. As written, the algorithm doesn’t clearly indicate how many discrete results might be produced and need to be *checked* to determine which is indeed the order of the modular exponentiation. And I don’t read any comforting language or proof that the number of repetitions will necessarily be *polynomial* for the very, very, very large numbers that we’re talking about for cracking (factoring) 2048-bit and 4096-bit keys.
- What exactly happens when the quantum circuit for order finding raises a relatively large number to a large number of very large powers, each of which would produce a result which is far greater than the number of qubits in the output register? What happens to all of the overflow bits? Is there any wraparound which might interfere with legitimate qubits of the results? For example, for input N of n bits, an ansatz of N-1 will effectively square N and still fit in 2*n bits, but any powers higher than 2 will result in overflows — N-1 raised to the 2^n power would produce many times as many result bits as there are result register qubits. Even an ansatz of SQRT(N) will quickly produce results with more than 2n bits.
- What exactly happens when the quantum circuit for order finding raises a relatively large number to a large number of very large powers and then attempts to take the MOD N of the truncated result? Won’t such results be invalid since many of the bits of X^R will have gotten discarded (or inappropriately wrapped) before the MOD N calculation step?
- How precisely equal is the probability of measuring a 0 or 1 after executing a Hadamard gate, since the amplitude, whose magnitude is the square root of the probability, would be 1/sqrt(2), which is an irrational number, having no exact representation as a finite real number? Presumably the two probabilities sum to precisely 1.0, so is there a very slight asymmetry between the magnitudes of the two amplitudes? Or is there some other quantum magic that enables 1/sqrt(2) to be precisely exact at the raw physics level, since it doesn’t have a digital representation per se? Even if so, this would still be an issue for a quantum simulator where a finite digital precision for the amplitude would be required.
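On the simulator side of this question, at least, the representation issue is easy to demonstrate: the double-precision value closest to 1/sqrt(2), squared with exact rational arithmetic, is provably not exactly 1/2:

```python
import math
from fractions import Fraction

amp = 1 / math.sqrt(2)          # the double closest to the irrational 1/sqrt(2)
exact = Fraction(amp) ** 2      # square it exactly, with no further rounding
error = exact - Fraction(1, 2)
print(float(error))             # a tiny but provably nonzero representation error
assert error != 0               # 1/sqrt(2) has no exact binary representation
```

This says nothing about the raw physics, only that any finite-precision simulator necessarily stores an approximation of the Hadamard amplitude.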
- What is the precise role of interference in quantum computation? What interference is used in each type of gate? Which gates use interference and which do not? Is it only in 2-qubit vs. not in 1-qubit gates? How is it used in the design and coding of quantum algorithms? Do quantum programmers need to understand interference per se, or is that more of an under-the-hood implementation detail, analogous to how logic gates are implemented in a classical computer? Is interference used in the resonators used to couple (entangle) two qubits — is that the only or main usage of interference? Does superposition require interference?
- Do the coupling resonators used to entangle two qubits essentially create standing waves to accomplish the coupling (entanglement)? Is that the primary determinant of coherence time — how long the standing wave can persist before it dissipates?
- Is there any other way to couple (entangle) two qubits other than using resonators?
- Is room temperature quantum computing a realistic eventual possibility or simply an impractical fantasy even 20–50 years from now? Is photonic computing a credible path to room temperature quantum computing? What is a realistic timeframe for room temperature quantum computing — 5 years, 10 years, 15 years, 20 years, 25 years, 35 years, 50 years?
- Why exactly do quantum logic gates and algorithms need to be reversible — in plain English? What exactly will happen if this is not the case? Or is it simply “good design” and “proper” rather than being absolutely mandatory?
- What does it mean for a function or gate to be unitary — in plain English, besides that its conjugate transpose is its inverse so that the matrix times its conjugate transpose is the identity matrix? What exactly will happen if a function or gate is not precisely unitary? Why is this so important, or is it just convention and “better” rather than being absolutely mandatory?
- When will quantum computing be fully ready for Big Data — process millions, billions, and trillions of data items at the same time rather than only very tiny amounts (tens, hundreds, thousands) of individual data bits at any one time?
- Is there a Planck unit for minimum (but non-zero) amplitude for a quantum state, as well as a minimum (but non-zero) difference between two amplitudes (are amplitudes quantized)? There must be some absolute limit for real machines, mustn’t there?
- If n qubits are each put into a superposition of states using a Hadamard gate on each qubit, is there some minimum (but non-zero) amplitude for each of the 2^n quantum states for those n qubits? Especially since the amplitude, 1/2^(n/2), can be very tiny for even a moderate number of qubits. For example, for n = 72, the amplitude for each of the 2⁷² states would be 1/2³⁶ (a probability of 1/2⁷²), which seems like an impossibly small value to represent on a real machine. Or, does it maybe make no real sense to consider a joint amplitude for the quantum states of two or more isolated quantum systems (each an unentangled qubit)?
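For what it’s worth, the per-state amplitude after a Hadamard on each of n qubits is 2^(-n/2), so the corresponding probability is 2^(-n); a tiny sketch of just how small these get:

```python
import math

def uniform_amplitude(n):
    """Amplitude of each basis state after a Hadamard on each of n qubits."""
    return 2.0 ** (-n / 2)          # 2^(-n/2); probability is its square

n = 72
amp = uniform_amplitude(n)          # 1/2^36, about 1.5e-11
prob = amp ** 2                     # 1/2^72, about 2.1e-22
print(amp, prob)
assert math.isclose(prob, 2.0 ** -72)
```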
- Since quantum computing is represented as being *probabilistic* rather than *deterministic*, it seems reasonable to assume that *any* quantum computation must be *repeated* some significant number of times and then *some sort of statistical analysis* performed on that set of results to *decide* which of the results is *the* result, but I don’t find a detailed treatment of this *repetition and evaluation process* anywhere in the literature. In particular, is there any *magic formula* or *guidance* for *calculating* how many repetitions are required to achieve some designated percentage of accuracy (80%, 90%, 95%, 98%, 99%, 99.9%, 99.9999%, etc.)? What statistical criteria should be used to evaluate the results? For some computations it might be sufficient to sequentially evaluate *all* results, since a *correct* result would be a solution to an equation (e.g., the order of a modular exponentiation), but other applications might require detailed statistical analysis of the results, such as looking for the top k peaks in a statistical distribution. A lot more discussion and guidance is needed. And *every* presentation (especially formal publication) of a quantum algorithm should be accompanied by such a discussion and guidance for calculating the number of repetitions and what specific statistical analysis is required to evaluate the results. What might typical repetition counts be? 10? 20? 50? 100? 250? 500? 1,000? 10,000? 50,000? 1 million? 10 million? 100 billion?? And, in the context of NISQ quantum computers, how do we validate whether the quantum computation may be completely failing every single time due to exceeding the coherence of the particular quantum computer, so that it will *never* return a valid result (except maybe by accident or chance)? How many failures of particular runs will indicate that the particular NISQ computer is *incapable* of delivering *any* correct result? Again, some formula, rule, or other guidance is needed for every algorithm and every machine.
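One generic way to frame the repetition question, as a sketch only and not drawn from any quantum computing paper: if a single run succeeds with independent probability p, then k runs all fail with probability (1 - p)^k, so k ≥ ln(1 - confidence) / ln(1 - p) runs suffice for the stated confidence. This of course assumes independent runs and a known p, which is exactly what the question says is missing in practice:

```python
import math

def repetitions_needed(p_success, confidence):
    """Runs needed so that at least one succeeds with the given confidence,
    assuming independent runs each succeeding with probability p_success."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_success))

# Hypothetical example: a single run finds the order with probability 0.1.
for conf in (0.90, 0.99, 0.999999):
    print(conf, repetitions_needed(0.1, conf))
```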
- Could string theory affect quantum computing at its limits, such as minimum size of a qubit, maximum number of qubits, minimum qubit spacing, maximum qubit density, maximum qubit connectivity, maximum speed of quantum logic gates, maximum coherence, or maximum coherence time?
- Is there a minimum qubit spacing necessary to maintain the isolation of unentangled qubits?
- Could you implement Mathematica on a quantum computer?
- How much of Mathematica could you implement on a quantum computer?
- Could you implement an operating system on a quantum computer?
- Could you implement a multitasking operating system on a quantum computer?
- Could you implement processes and threads on a quantum computer?
- What data structures are supported on a quantum computer?
- What data structures are more efficient on a quantum computer?
- Are qudits or other higher-order non-binary quantum “bits” needed to fully unlock the power of quantum computing — or are larger numbers of simple qubits as effective or even more effective, since they will likely continue to be cheaper, more reliable, and easier to manufacture and operate? Will it be kind of like classical computers, where the earliest machines were decimal but were quickly eclipsed by binary?
- How much knowledge of quantum mechanics and quantum computing will be needed to “use” quantum applications vs. needed to develop algorithms for applications? Will the capabilities and quirks of quantum mechanics and quantum logic gates, especially their probabilistic and nondeterministic nature, be hidden under the hood or still be quite visible and a concern to users?
- What are the best measures of quantum performance — raw quantum logic gates per second, quantum states per second (for quantum parallelism), IBM’s concept of Quantum Volume, or some other measure? Should the measure be based on individual qubits, or aggregated for n-qubit registers, as if they were n-bit binary values/registers on a classical computer? There is also the issue of whether quantum state should be measured as individual qubit quantum state vs. qubits which have joint states.
- Are wave packets needed to comprehend quantum computing?
- Does the wave vs. particle duality/ambiguity impact quantum computing, in particular the design of quantum algorithms and programming quantum circuits? For example, phase and interference?
- What science fiction will quantum computing enable? Time machines and time travel? Matter transmission (true teleportation, of matter, not just communication of information)? Design of materials and energy systems which might enable space travel at relativistic (near light) speeds? Wormholes in space? Cure all disease? Fusion power (or something even better)? Understanding what’s below quantum mechanics? Integrating gravity into quantum mechanics?
- What is quantum algorithmic breakout and when will it occur? See *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*.
- At what qubit count will we hit “algorithmic breakout”? See *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*.
- At what coherence time will we hit “algorithmic breakout”? See *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*.
- When will quantum computing exit from being an emerging technology and enter the mainstream? See *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*.
- Could advanced quantum simulators accelerate algorithmic breakout? See *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*.
- What is *expectation value* and how does it factor into quantum computation? When is it used and how is it used?
- How are computational basis states represented at the hardware level? Is there some energy level distinction between them? For n unentangled but superposed qubits, how are each of the 2^n distinct computational basis states distinguished from each other at the hardware level? If not by an energy or mass distinction, what is different between them at the hardware level?
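On the qudits-vs-qubits question above, the state-counting part at least is simple arithmetic, even if the engineering trade-offs are wide open. A tiny sketch of my own (illustrative only, not from any of the cited papers):

```python
import math

# Counting distinct computational basis states:
# n qubits span 2**n basis states, while m qudits of dimension d span d**m.
# A single d-level qudit can be emulated by ceil(log2(d)) qubits, with some
# basis states left unused whenever d is not a power of two.
def qubits_per_qudit(d: int) -> int:
    """Qubits needed to hold one d-level qudit."""
    return math.ceil(math.log2(d))

print(2 ** 10)              # 1024 basis states for 10 qubits
print(3 ** 10)              # 59049 basis states for 10 qutrits
print(qubits_per_qudit(3))  # 2 qubits to hold one qutrit (one state wasted)
print(qubits_per_qudit(8))  # 3 (d = 8 is a power of two, so no waste)
```

Of course, state counting says nothing about error rates, gate fidelities, control complexity, or manufacturability — which is really what the question hinges on.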
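The expectation-value and basis-state questions above can at least be made concrete at the mathematical (statevector) level, leaving the hardware question open. A minimal sketch of my own (not from the cited papers), assuming NumPy is available and using the standard definition that the expectation value of an observable O in state |psi> is <psi|O|psi>:

```python
import numpy as np

# For n unentangled qubits, the joint state is the tensor (Kronecker)
# product of the single-qubit states, giving 2**n amplitudes -- one per
# computational basis state.
zero = np.array([1.0, 0.0], dtype=complex)               # |0>
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # (|0>+|1>)/sqrt(2)

# Joint state of three unentangled qubits: |+> (x) |+> (x) |0>.
psi = np.kron(np.kron(plus, plus), zero)
print(psi.shape)  # (8,) -- 2**3 computational basis states

# Pauli-Z observable: eigenvalue +1 for |0>, -1 for |1>.
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

# <Z> on the first qubit (a |+> state) is 0: equal weight on +1 and -1.
exp_first = np.vdot(psi, np.kron(np.kron(Z, I), I) @ psi).real

# <Z> on the last qubit (the |0> state) is 1: deterministically +1.
exp_last = np.vdot(psi, np.kron(np.kron(I, I), Z) @ psi).real

print(exp_first, exp_last)
```

This says nothing about *how* the hardware physically realizes or distinguishes those 2^n amplitudes — that part of the question remains open.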

# What’s next?

- Continue adding questions.
- Begin providing initial, provisional answers.
- Gradually provide more definitive answers.
- Consider linking questions and answers to *Quantum Computing on Quora* or *Quantum Computing Stack Exchange* when it seems to make sense.
- Eventually evolve into a true FAQ with definitive answers to all relevant questions about quantum computing.
- Select top-10, 25, 50, and 100 lists of questions, and answers, for various audiences.