# What Is Quantum Algorithmic Breakout and When Will It Be Achieved?

I am proposing the new term **quantum algorithmic breakout** to refer to the answer to a simple question:

*When will quantum computing hardware and algorithms reach the stage where real-world, practical, production-quality, production-capacity applications can be readily produced without heroic levels of effort?*

**Quantum algorithmic breakout is the critical mass moment when quantum computing hardware and our ability to develop effective algorithms which fully exploit that hardware have converged to the degree needed for the development of relatively sophisticated quantum algorithms suitable for real-world, practical, production-quality, production-capacity applications, so that their development becomes commonplace, even easy, as opposed to the heroic and severely limited efforts which are common today. It will be the moment of liberation from the tedium of agonizingly slow progress. No longer constrained by very limited hardware. No longer constrained by limited algorithmic building blocks.**

This informal paper outlines the requirements to achieve such a *breakout* in the development of quantum algorithms suitable for real-world, practical, production-quality, production-capacity applications.

**In a nutshell:**

**Organizations need to be able to move beyond mere proof-of-concept projects to full-scale production projects, and to do so at a rapid pace. This informal paper will focus on how to enable that transition.**

**The essence of the problem today is twofold:**

- **Algorithms.** It is excessively difficult to design a quantum algorithm of any sophistication — even for an ideal quantum computer with unlimited resources.
- **Hardware.** It is extremely difficult, if not outright impossible, to design a quantum algorithm for many potential applications given the extremely limited hardware resources available on current and near-term quantum computers.

This informal paper will cover the basics at a high level. For a deep dive, read my larger (but still informal) paper:

**Actually there are three main obstacles today:**

- **Limited hardware.** See below.
- **Difficulty of designing algorithms** — for even the most elite scientists, mathematicians, computer scientists, and software developers.
- **Lack of sufficient technical talent, and training for that talent,** to meet the needs of designing, implementing, and deploying quantum algorithms in the coming years. Especially individuals — and organizations — who are fully into the quantum mindset rather than remaining stuck in the classical mindset.

**The main hardware obstacles today are:**

- Too few qubits.
- Too little coherence time.
- Lack of error correction.
- Too little connectivity between qubits.
- Lack of high-performance, tight integration between classical and quantum computing for hybrid algorithms which require a significant degree of iteration or variational methods.

**The main algorithm obstacles today are:**

- Difficult to get beyond proof of concept.
- Difficult if not impossible to get to suitability for real-world, practical, production-quality, production-capacity applications.
- Even some well-known quantum algorithms are not feasible with today’s limited hardware. E.g., quantum phase estimation and Shor’s algorithm.
- Complex workarounds, tradeoffs, and compromises are needed to utilize current hardware.
- Too difficult to do even relatively modest quantum algorithms. Anything beyond the relatively trivial.
- Not feasible to easily translate existing classical algorithms into quantum algorithms.
- Too little of traditional design carries over from classical computing to quantum computing.
- Too difficult to get out of the classical mindset and into the quantum mindset.
- Too great a conceptual gap between application terminology and quantum computing terminology. A much wider gap than with classical computing. An impedance mismatch between the domain of the application and the realm of quantum computing. Except maybe for quantum simulation.
- Too difficult to reduce complex problems to simpler problems which can be more directly tackled by a quantum computer. For example, reduction to order-finding.
- Too much math. Linear algebra, physics, etc. Granted, each application category will have its own domain-specific math requirements, but too much of the “under the hood” math of quantum mechanics is surfaced — unnecessarily in my view — in quantum computing. The current state of affairs may be fine for quantum simulation, but there are plenty of application areas outside the realm of physics and computational chemistry.
- Too much matrix math. Quantum mechanics.
- Too many weird Greek symbols. Quantum mechanics.
- Too difficult to transform real-world problems into quantum parallelism.
- Even quantum systems are very difficult to simulate on a quantum computer. For example, simulating atoms and molecules. How could that be?!!
- Too primitive a programming model. Too much like programming in classical assembly language or machine language. Not enough intellectual leverage.
- Need for a true quantum compiler, and a true quantum programming language to drive it. There’s nothing even close today. Quantum circuits are like coding in assembly language or machine language on a classical computer.
- Too primitive a conception of logic. Classical computers made a clear distinction between Boolean logic and the raw zeroes and ones of the underlying electronic circuits, but programming of quantum computers does not yet have a similar level of abstraction above the |0> and |1> basis states.
- No support for rich data types. Just raw qubits. No integers, no real numbers, no strings, no text, no images, no graphs, no complex data structures.
- No support for determinism. Quantum computers are strictly probabilistic. Must manually work around this by executing a quantum circuit many times (sometimes called “shots”) and then examining the frequency distribution of the results to infer a quasi-deterministic result, if you’re lucky.
- Not possible to examine or save intermediate results. Only the final state of the qubits can be examined (measured), and even that collapses the complex vector state, which may be a superposition (linear combination) of |0> and |1>, into a simple binary 0 or 1. Not appropriate for implementation of “explainable AI” or similar audit logging of complex calculations. Complex and clever logic and a hybrid of classical and quantum computing can approximate examination of state, but only at much greater complexity and cost.
- Weak documentation. No excuse. We don’t even have a decent small collection of introductory but realistic algorithms which highlight the path to more sophisticated and more complex algorithms.
- Too much learning via folklore and on an ad hoc basis.
- Too much complexity for designing hybrid classical/quantum algorithms.
- Need to explicitly develop classical code which generates complex quantum circuits dynamically, on the fly, rather than hand-coded quantum circuits.
- No ability to use recurrence relations to describe complex and lengthy gate sequences. Must write explicit classical code to do all of that.
- Need for rich libraries of algorithmic building blocks — a level of abstraction well above the level of individual quantum logic gates. Lack of a standard set of quantum algorithmic building blocks.
- Need for rich algorithm design patterns. Too much must be designed from scratch or is ad hoc.
- Need for rich overall algorithm design approaches and methodologies. Too much must be designed from scratch or is ad hoc.
- Need for rich application frameworks. For example, for variational methods and other hybrid classical/quantum approaches.
- Need for reusable and parameterized algorithms. Too many algorithms designed from scratch.
- Big Data is not supported — must break data into small chunks. Even for D-Wave.
- No I/O, database access, or network access within quantum circuits. See Big Data, above. Any input data must be encoded within the gate structure of the quantum circuit. Measurement of qubits is the only output data.
- Access to the results of quantum parallelism and use of interference is too difficult or even infeasible (e.g., for quantum phase estimation), today and in the near future.
- Too long to develop.
- Too difficult to debug.
- Too much of an art and craft rather than a science and engineering discipline.
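The “shots” workaround described above (run the circuit many times and infer a quasi-deterministic answer from the frequency distribution of the results) can be sketched as a toy model in plain Python. The bias probability and shot count here are illustrative assumptions, not values from any real device:

```python
import random
from collections import Counter

def run_circuit(p_one=0.8):
    """Toy model of a single probabilistic circuit execution (one "shot"):
    measurement yields '1' with probability p_one, else '0'."""
    return "1" if random.random() < p_one else "0"

def quasi_deterministic_result(shots=1000, p_one=0.8):
    """Execute many shots and take the most frequent outcome as the
    quasi-deterministic result, along with its observed frequency."""
    counts = Counter(run_circuit(p_one) for _ in range(shots))
    outcome, freq = counts.most_common(1)[0]
    return outcome, freq / shots

random.seed(42)  # fixed seed purely for reproducible illustration
outcome, confidence = quasi_deterministic_result()
print(outcome, round(confidence, 2))
```

Note that this only recovers a reliable answer when the underlying distribution is strongly biased toward the correct result — as the text above puts it, “if you’re lucky.”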

**Production-capacity applications:**

*Production-capacity* simply means capable of handling the workload requirements of a production application:

- Both the raw amount of data and the rate at which it must be processed.
- The point is to move beyond mere proofs of concept and prototypes.

**The main training obstacles today are:**

- Too high a barrier to entry for many applications, if not an impossible barrier to entry for some.
- A quantum project requires an elite team, even the lunatic fringe. Can’t just assemble a team of average software developers.
- Too few people understand it all. Small pool of qualified individuals.
- Too steep a learning curve to understand it all. Too onerous.
- Understanding of quantum mechanics and linear algebra needed.
- Understanding it all is well beyond the ability level of the talent traditionally available for classical computing.

**What about quantum error correction (QEC)?**

- Some believe that some important algorithmic approaches absolutely require quantum error correction (QEC). For example, quantum phase estimation, quantum Fourier transforms, and Shor’s algorithm for factoring large integers (or cracking strong public encryption keys). But the hardware requirements for QEC are extreme and unlikely to be met in the near future.
- Variational methods, such as variational quantum eigensolver (VQE), and other hybrid classical/quantum approaches dramatically reduce the size of quantum circuits to completely sidestep the issue — but at great cost and with great tradeoffs and limitations.
- There are error mitigation techniques, but they add complexity, making breakout once again problematic.
- Incremental advances and breakthroughs in coherence time improvement allow ever-larger quantum circuits without the need for QEC, but it may take too long until we hit a coherence time that enables a majority of desirable quantum algorithms.
- Wait for a breakthrough in alternative approaches to QEC that don’t require such large amounts of hardware. Hope does spring eternal! No hint of any imminent breakthrough, but novel technological breakthroughs are happening every year.
- In many cases, it may be sufficient to simply run a given quantum circuit a significant number of times (sometimes called “shots”) to get an average or approximate result. This should work if the error rate is not too high. But it won’t work in all cases.
- Regardless, my expectation is that reasonably long coherence times will be commonplace in the reasonably near future. Say, 200 gates within 2 years. Maybe doubling every 1–2 years. But, I could be wrong.
- In any case, either greater coherence time or error correction is definitely needed to achieve quantum algorithmic breakout. Or, some conceptual breakthrough on the use of variational methods.

**Will quantum algorithmic breakout be possible with NISQ hardware?**

- It isn’t clear whether quantum algorithmic breakout will require QEC or whether breakout might be possible using only NISQ hardware.
- But there will likely be niches or specialized categories where breakout is indeed practical using only NISQ hardware, albeit significantly less noisy and more capable than today’s very noisy and very limited devices.
- As coherence time is improved, it’s not clear at what threshold the term NISQ will cease to be relevant, short of full-scale QEC. If execution of hundreds of gates becomes possible in two to three years, will that still be NISQ or will it be post-NISQ?
- The bottom line is that we may be able to achieve some degree of breakout with an *improved* NISQ, but probably not with *current* NISQ.

**Additional requirements:**

- **Open source.** Freely available online quantum code repositories that facilitate sharing of algorithm knowledge and know-how, and that build communities of support.
- **Commenting conventions are needed.** Quantum code needs to be human readable!
- **Coding conventions.** Again, quantum code needs to be human readable.

**Who should care:**

- **Researchers.** Physicists. Mathematicians. Computer scientists. And scientists who wish to use quantum computers to simulate quantum systems.
- **Developers.** Quantum algorithm designers focused on practical, real-world, production applications.
- **Technical team leaders.** Where quantum algorithms are at least one component.
- **Managers.** Responsible for projects with at least one quantum component.
- **Executives.** Where business goals are dependent on deploying quantum computing.
- **Investors.** Venture capital in the quantum space, or businesses which can exploit quantum-based applications.
- **Policymakers.** Availability of research grant funding. Budgeting for quantum deployments.

**Goals:**

- Reduce project startup time, costs, and risks.
- Reduce total project time, costs, and risks.
- Much less uncertainty.
- Increase reuse between projects.
- Much, much less risk.

The overarching goal is to get to a place that does not require the absolute top tier of technical staff for even relatively simple projects. Simple real-world, practical, production-quality, production-capacity application projects, that is. And certainly larger projects as well.

**Must be much easier in all ways:**

- Less like mountain climbing (Mt. Everest!) in difficult weather. Or fighting through a dense jungle.
- More like hiking a moderate trail on a pleasant day. Should feel refreshing, exhilarating, and absolutely liberating.
- Trivial to do proof of concept designs rather than an onerous journey and major accomplishment.
- Less daunting to envision even ambitious projects.
- Less daunting to decide on an approach and strategy.
- Less daunting to get started.
- Less daunting to dot all the i’s and cross all the t’s.
- Debugging must be a piece of cake.
- Less daunting to plan hardware needs.
- Less daunting to staff projects.

**How many qubits will be needed?**

Difficult to say…

- 50–55? Will the new 53-qubit machines do the job?
- 55–60? Will just a few more qubits do the trick?
- 60–64?
- 70–75?
- 100?
- 128? One would hope, but…
- 160?
- 200?
- 256? This has to be enough, doesn’t it?!
- 512?
- 1K?
- 2K?
- 4K?
- 8K?
- 12K?
- 16K?
- 20K?
- 32K?
- 64K?
- 128K?
- 256K?
- 512K?
- 1M?
- 2M?
- 4M?
- 8M?
- 16M?
- 32M?
- 64M?
- 128M?
- 256M?
- 512M?
- 1G?

And these are *logical* qubits. They happen to be *physical* qubits as well, today, but with quantum error correction (QEC) the number of physical qubits will jump dramatically to be a significant multiple of the number of logical qubits.
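To make that logical-versus-physical multiple concrete, here is a rough sketch of the standard surface-code estimate, under which one logical qubit at code distance d costs about d*d data qubits plus d*d - 1 measurement ancillas. The formula and the distances shown are textbook ballpark figures, not a claim about any particular machine:

```python
def surface_code_physical_qubits(d):
    """Rough physical-qubit cost of one surface-code logical qubit at
    code distance d: d*d data qubits plus d*d - 1 measurement ancillas
    (a common textbook estimate; real devices and codes vary)."""
    return 2 * d * d - 1

# Higher distance suppresses logical errors but multiplies qubit cost:
for d in (3, 11, 25):
    print(d, surface_code_physical_qubits(d))
```

At distance 25, each logical qubit costs over a thousand physical qubits, which is why the physical count jumps so dramatically once QEC enters the picture.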

**How much coherence time is needed?**

- Generally, quantum circuits will not be very lengthy. Maybe a few dozen gates. That’s not a difficult need to satisfy in the long run, but with a lot of current NISQ devices it’s a heavy lift.
- Some algorithms and algorithmic building blocks such as quantum phase estimation, quantum Fourier transform (QFT), and Shor’s algorithm for factoring large numbers require much longer gate sequences — hundreds, maybe thousands, or even millions of gates (Shor’s for large public encryption keys.)
- It is not the total gate count that matters but the maximum circuit depth since a fair fraction of gates can be executed in parallel.
- I would say that 100–200 gates is a reasonable goal and expectation over the next two years.
- I would say that doubling circuit depth every one to two years is a reasonable goal and expectation over the next decade.
- Ion trap qubits have greater potential for much longer coherence time over the next few years. Superconducting transmon qubits look unlikely to be able to keep up.
- But, the trajectory of technical advances and breakthroughs is quite unpredictable, so the qubit technology landscape could be quite unrecognizable even just a few years from now.
- Photonic, semiconductor, and topological qubits seem promising, but uncertain.
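The depth-versus-gate-count point above can be made concrete with a minimal greedy scheduler in plain Python. This is a simplification — real compilers also account for hardware connectivity and gate durations — but it shows how gates on disjoint qubits collapse into the same layer:

```python
def circuit_depth(gates, num_qubits):
    """Compute circuit depth by greedy scheduling: each gate is placed
    in the earliest layer after the last gate on any qubit it touches.
    `gates` is a list of tuples of qubit indices."""
    layer_free = [0] * num_qubits  # next free layer for each qubit
    depth = 0
    for qubits in gates:
        layer = max(layer_free[q] for q in qubits)
        for q in qubits:
            layer_free[q] = layer + 1
        depth = max(depth, layer + 1)
    return depth

# Six gates total, but gates on disjoint qubits execute in parallel,
# so the circuit depth (what coherence time must cover) is only 3:
gates = [(0,), (1,), (2,), (0, 1), (2, 3), (1, 2)]
print(len(gates), circuit_depth(gates, 4))
```

Here six gates fit in a depth of three layers, so coherence time needs to cover only the depth, not the raw gate count.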

**Paths to greater connectivity:**

- Trapped-ion quantum computers support any-to-any coupling of qubits, so they completely solve the problem, or so it is claimed.
- Superconducting transmon qubits support only nearest-neighbor coupling, but SWAP gates can be used to physically move the quantum state of one qubit to be adjacent to the desired qubit for coupling. But, this only works well if errors for SWAP gates are very low, which is problematic, so far.
- It’s not possible to predict what technological advances may impact qubit connectivity in the coming years and decades.
- Trapped-ion devices have a clear advantage, for now, but it is not possible to say whether not-yet-imagined technological breakthroughs might in fact leave trapped-ion technology in the dust in the coming years. Photonics? Semiconductor-based qubits? Who knows!
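As a minimal sketch of the SWAP-overhead point above, assume a one-dimensional nearest-neighbor chain (real superconducting devices use 2D lattices and smarter routing, so this is a worst-case toy model):

```python
def swap_overhead(pos_a, pos_b):
    """Number of SWAP gates needed on a linear nearest-neighbor chain
    to make the qubits at positions pos_a and pos_b adjacent, by moving
    one quantum state along the chain one position per SWAP."""
    return max(abs(pos_a - pos_b) - 1, 0)

# Coupling the two ends of a 20-qubit chain costs 18 SWAPs before the
# desired two-qubit gate can even be applied:
print(swap_overhead(0, 19))
```

Each of those SWAPs adds both depth and error, which is why low SWAP error rates matter so much for nearest-neighbor architectures.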

**How to achieve high performance tight integration for hybrid algorithms:**

- Ultimately, we need to achieve a true Universal Quantum Computer — a merged computer architecture which permits both classical and quantum computation in the same computer program, so that there is essentially zero latency when transitioning between the quantum and classical portions of the unified program for what is today a hybrid quantum/classical algorithm. Similar to the way a traditional coprocessor chip functions, or a GPU or FPGA solution. But this is not on the near-term horizon or even on anybody’s public roadmap.
- The easiest option is to place the classical computer closer to the quantum computer, such as on the same internal LAN network, but there is still significant overhead and latency compared to a coprocessor chip solution. At least one vendor is doing this and it is likely to become the norm or at least more common. It’s far from the ideal (true Universal Quantum Computer), but better than nothing.
- More complex but higher performance is to place a classical computer on the same internal data and control bus as the controller for the quantum computer. Much better than simply an internal LAN connection, but still nowhere near as good as a true Universal Quantum Computer.
- It will all depend on the nature of the particular hybrid algorithm, whether it needs to invoke quantum subroutines only dozens or hundreds or a few thousand times, or whether many thousands, millions, or even billions of quantum circuit invocations may be required.
- From the perspective of quantum algorithmic breakout, the ability to support a large number of quantum circuit invocations in a relatively short period of time (fraction of a second, seconds, or under a minute) can open up technical feasibility to a much broader range of algorithms.

**There may be degrees or stages of breakout:**

- Individual applications.
- Narrow niches.
- Broader niches.
- Whole application categories.
- Multiple application categories.
- All application categories.

**Degrees of quantum supremacy:**

- Not actually competitive with classical computing, but ancillary benefits.
- Barely competitive with classical computing, but ancillary benefits.
- Moderately competitive with classical computing. Say, 20–50% advantage.
- Easily competitive with classical computing. Say, 100–500% advantage.
- True exponential speedup advantage over classical computing.

**Need for tools and support software:**

- Plenty of tools and support software are needed, but the focus needs to be on the challenges of algorithm design itself.
- Besides, we already know how to do compilers, tools, operating systems, and support software, so they are not the gating factor for quantum algorithmic breakout.
- Tools or support software which merely paper over the algorithm challenges just add net complexity rather than solving the problem.
- Tools for visualization would be helpful. Especially when quantum state cannot be directly examined in a real quantum computer, but can be in a simulated quantum computer.
- High quality and high performance quantum computer simulators are needed. Including debugging and dynamic circuit analysis.
- Need for rich static circuit analysis tools. Catch problems before the circuit is even run, for both simulators and real devices. Especially for obscure edge cases and data-dependent cases.
- Need for standardized user-oriented metrics for quantum computing as well as tools to collect, analyze, and report them.
- Need for libraries and code repositories — and conventions for using them — that facilitate code sharing, code reuse, and building of communities.

**And when might all of this occur?**

- Being overly optimistic, maybe within 18 months.
- Being moderately optimistic, maybe two years, or maybe 2–3 years.
- Being only slightly optimistic, maybe 3–5 years.
- Being less than optimistic, maybe 5–7 years.
- Being a little pessimistic, maybe 7–10 years.
- Being more cynical, maybe 10–15 years.
- Being rather cynical, 15–20 years.
- Being a real cynic, decades, if ever.

Breakout is likely to require getting beyond the lunatic fringe and achieving at least the ENIAC moment and may in fact require achievement of the FORTRAN moment. Or at least the latter will constitute confirmation that breakout has occurred.

The lunatic fringe being the too-rare super-elite technical staff who can do almost anything with the most limited of resources, well beyond the abilities of normal, mere-mortal technical staff.

The ENIAC moment being the point at which hardware finally has sufficient capabilities and the lunatic fringe are able to figure out how to get the first real-world, practical, production-quality, production-capacity application to run. Likely to require superhuman, heroic effort — even once the hardware is ready. This is an important stepping stone, but not the final hurdle for breakout per se.

The FORTRAN moment being the point at which we finally have language tools which enable mere-mortal staff to develop real-world, practical, production-quality, production-capacity applications. Quantum computing is now officially in the mainstream.

**Could advanced quantum simulators accelerate algorithmic breakout?**

Would a qubit count of 24 to 48 — and essentially infinite coherence time — on a quantum simulator running on classical hardware enable much more rapid algorithm development? One might be tempted to think so.

Even if it takes dozens, hundreds, or even thousands of classical processors to simulate 32 to 48-qubit quantum computers, that would aid algorithm development even while comparable qubit counts for physical quantum computers are still years out into the future.
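The classical cost alluded to above is easy to quantify for brute-force statevector simulation: n qubits require 2^n complex amplitudes. Assuming 16 bytes per double-precision complex amplitude (more memory-frugal techniques exist for restricted circuit classes, so this is an upper-bound sketch):

```python
def statevector_bytes(num_qubits, bytes_per_amplitude=16):
    """Memory needed to hold a full statevector for num_qubits qubits:
    2**num_qubits complex amplitudes at the assumed size each."""
    return (2 ** num_qubits) * bytes_per_amplitude

# Memory doubles per qubit: modest at 24 qubits, cluster-scale by 40+.
for n in (24, 32, 40, 48):
    print(n, statevector_bytes(n) / 2**30, "GiB")
```

At 32 qubits the statevector alone takes 64 GiB; by 48 qubits it is roughly 4 PiB, which is why simulating 32 to 48 qubits takes hundreds or thousands of classical processors.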

A 53-qubit physical quantum computer might be better or at least much faster, but limited coherence time, lack of error correction, and limited connectivity could still give an advanced quantum simulator an edge for developing and testing algorithms which simply cannot yet be run on available quantum devices.

Or, are 54 to 128 qubits needed to enable algorithmic breakout, beyond the reach of classical simulation of a quantum computer? Possibly, but who can say at this juncture.

Still, better, more capable, and higher performance quantum simulators would generally be a good thing and generally help to accelerate quantum algorithmic breakout. Even if the resulting algorithms cannot be run in production at this time, they are at least ready when the comparable quantum devices do become available. And the existence of such algorithms can help to incentivize development of more advanced quantum devices.

**When did algorithmic breakout occur for classical computers?**

Cutting to the chase, the widespread availability of the FORTRAN programming language on transistor-based classical computers led to a dramatic increase in the ease of developing significant applications that solved production-scale real-world problems. The late 1950’s and into the early 1960’s. FORTRAN was introduced initially in 1957 and had achieved widespread usage by 1960 — take your pick as to which end of that period to use to mark algorithmic breakout.

To my mind, today’s quantum computers are not much more capable than the electromechanical relay-based computers of the early to mid 1940’s.

To summarize the advances leading to classical algorithmic breakout:

- Vacuum tube-based computers. Machine language.
- ENIAC. Running the first production-scale application.
- Increasing memory and storage capacity.
- More general-purpose computers. Still vacuum tubes. But still only for the lunatic fringe coding tediously in assembly language.
- FORTRAN programming language. Initial breakout.
- Transistor-based computers. New levels of performance.
- Even greater memory and storage. More general and proliferating.

Plenty of other developments accelerated and turbocharged the breakout process, but those were the key, essential factors.

Vacuum tube-based computers appeared in the late 1940’s and were quite successful but still difficult to program into the early to mid 1950’s. There were no significant high-level programming languages at that stage. Serious programming was accomplished with assembly language or machine language by a very elite staff, so there was no real chance for a dramatic algorithmic breakout. Only the most serious and elite scientists, engineers, and mathematicians could do any serious programming of these machines.

The FORTRAN programming language gradually appeared in the mid to late 1950’s, initially on vacuum tube-based computers such as the IBM 704, and then on the early transistor-based computers such as the IBM 7090. This was the moment of algorithmic breakout. Now, any competent scientist, engineer, or mathematician could fairly easily develop programs using math-like algebraic statements.

It took a few years from the initial introduction of FORTRAN until its widespread usage. And the language evolved during that period. Take your pick as to which end of that period to use to mark algorithmic breakout — 1957 or 1960.

Other high-level programming languages followed in the late 1950’s and early 1960s, with COBOL, ALGOL, PL/I, and BASIC, among others. Algorithmic breakout accelerated, especially as machines, memory, and storage proliferated.

Sure, more advanced operating systems and integrated circuits in the mid to late 1960’s and early 1970’s accelerated breakout, but breakout had already been reached by the end of the 1950’s and very early 1960’s with FORTRAN on both vacuum tube and transistor-based computers.

Advanced missile and aircraft guidance and control systems in the mid to late 1950’s and the space program in the late 1950’s through mid 1960’s certainly gave both an impetus to algorithmic breakout and benefited from breakout.

Commercial use of computers for business systems also provided an impetus to algorithmic breakout and benefited from breakout in the mid to late 1950’s and early to mid 1960’s.

For both government and commercial applications, early development was in assembly language by very elite staff, and only later transitioned to more modest staff facilitated by much more capable hardware, high-level languages, and a more manageable programming model.

**Category-specific breakout:**

Each distinct application category is likely to have its own unique quantum algorithmic breakout threshold.

Ditto for niches of each application category as well.

**How does quantum algorithmic breakout relate to quantum supremacy and quantum advantage?**

Quantum algorithmic breakout doesn’t directly relate to the performance of a quantum computer compared to a classical computer, but to:

- Whether it is even feasible to implement a quantum algorithm on existing quantum hardware, regardless of performance.
- Whether it is easy and cheap enough to design and implement quantum algorithms on a general basis, across a variety of application categories.

It is possible to achieve quantum supremacy and quantum advantage without actually achieving quantum algorithmic breakout, for example, if:

- It still takes an elite team.
- Costs are high.
- It takes too long.
- Changes to requirements require extensive, tedious, and costly redesign, debugging, and testing.
- Each new project means starting from scratch, or almost from scratch.
- Each new machine architecture means starting from scratch, or almost from scratch.

**Will quantum algorithmic breakout mark quantum computing’s exit from being an emerging technology?**

A technology is an *emerging technology* if its practical application is still *largely unrealized*, and not yet truly *mainstream* on a large scale.

Quantum algorithmic breakout would mark the *onset* of breakout and the *onset* of mainstream adoption.

It might take a few years after onset of quantum algorithmic breakout to *achieve the full potential* expected for the mainstream, and for deployments to become *commonplace*.

The point of quantum algorithmic breakout is to make development and deployment *easy* and to *enable* commonplace deployment.

*Early adopters* will be *pathfinders*, which is good, but still too early to mark the exit from *emerging technology*.

I would say that somewhere in the middle, as deployments begin to become quite common and management gains the confidence to approve deployment of quantum applications without it being a big, super-hard lift and an onerous decision, the label *emerging technology* will cease to be relevant.

People may continue to use the label emerging technology even after deployments are commonplace, but after a few years, the label will cease to have much potency and quickly fall from common use.

All of that said, personally, I would say that the emerging technology label will no longer be needed as soon as quantum algorithmic breakout is achieved *and* management seems to feel comfortable approving of production-scale deployments. I would expect that management won’t feel *comfortable* until more than a few production-scale deployments have proven their success, particularly for organizations which are not all-out *lunatic fringe*.

**In summary, we need a breakout from:**

- Limited quantity of hardware resources. More qubits.
- Limits of particular hardware resources. Coherence time and connectivity.
- Limited or nonexistent ability to move beyond proof of concept projects to full-scale, production-quality, production-capacity applications.
- Relatively low-performance integration of classical and quantum computing.
- Transition from classical to quantum mindset. From deterministic to probabilistic, from fine-granular methods to quantum parallelism.
- Lack of rich algorithm design patterns, algorithmic building blocks, and methods. Including application category-specific patterns and frameworks.
- Algorithm designers focusing on low-level gates rather than high-level logic abstractions.
- Being limited to the efforts of the lunatic fringe — significant projects are beyond the capabilities of normal mere-mortal technical staff.
- Onerous startup costs for projects.
- Onerous ongoing costs for projects.

Once we have overcome those limitations, a true **quantum algorithmic breakout** will be at hand, and finally we will begin to see many real-world, practical, production-quality, production-capacity applications produced at a fairly rapid pace. Maybe in two years, maybe three to five years, or maybe longer. Okay, I’ll call it two to four years — but don’t hold me to that since it is all predicated on dramatic technological advances and breakthroughs.

Ultimately, we really do need to get to a true Universal Quantum Computer, especially since many of the more sophisticated algorithms will be a hybrid of classical and quantum computation no matter what, but that’s not happening soon. I won’t personally list a true Universal Quantum Computer as an absolute requirement for quantum algorithmic breakout, but some might argue that it really is an essential requirement for a true breakout.

**References**

For more depth on all of this, consult my larger (but still informal) papers:

For more of my writing: **List of My Papers on Quantum Computing**.