Model for Pre-commercialization Required Before Quantum Computing Is Ready for Commercialization

Jack Krupansky
117 min read · Nov 3, 2021


Quantum computing is a very long way from being ready for commercialization. This informal paper lays out a proposed model of the main areas, main stages, major milestones, and toughest issues, and a process through which quantum computing must progress before commercialization can begin in earnest: the point at which all significant technical uncertainties have been resolved and the technology is ready to be handed off to a product engineering team to develop a marketable and deployable product with minimal technical risk, ready for the development and production deployment of production-scale, production-quality, practical real-world quantum applications. This paper refers to all of this work in advance of commercialization as pre-commercialization — the work needed before a product engineering team can begin developing a usable product suitable for production deployment with minimal technical risk.

There is a vast jungle, swamp, and desert of research, prototyping, and experimentation in front of us before quantum computing is finally ready for final product engineering in preparation for commercialization and production deployment of quantum applications that solve production-scale practical real-world problems, but we'll focus more on the big picture than on the myriad fine details.

Pre-commercialization is a vague and ambiguous term, ranging from everything that must occur before a technology is ready to be commercialized, to the final stage to get the technology ready for release for commercial use. This paper will focus on the former, especially research, prototyping, and experimentation, but the ultimate interest is in achieving the latter — commercial use.

For now and the indefinite future, quantum computing remains a mere laboratory curiosity, not yet ready for the development and deployment of production-scale quantum applications to solve practical real-world problems.

In fact, the critical need at this stage is much further and deeper research. Too many questions, uncertainties, and unknowns remain before there will be sufficient research results upon which to base a commercialization strategy.

The essential goal of pre-commercialization is simple:

  • Pre-commercialization seeks to eliminate, through research, prototyping, and experimentation, all major technical uncertainties which might impede commercialization.

So that commercialization is a more methodical and predictable process:

  • Commercialization uses off the shelf technologies and proven research results to engineer low-risk commercial products in a reasonably short period of time.

To be clear, the term product is used in this paper to refer to the universe of quantum computing-related technology, from quantum computers themselves to support software, tools, quantum algorithms, and quantum applications — any technology needed to enable quantum computing. Many vendors may be involved, but this paper still refers to this universe of products as if it collectively were a single product, since all of them need to work together for quantum computing to succeed.

Topics to be covered in this paper:

  1. This paper is only a proposed model for approaching commercialization of quantum computing
  2. First things first — much more research is required
  3. The gaps: Why can’t quantum computing be commercialized right now?
  4. What effort is needed to fill the gaps
  5. Quantum advantage is the single greatest technical obstacle to commercialization
  6. Quantum computing remains a mere laboratory curiosity
  7. The critical need for quantum computing is much further and deeper research
  8. What do commercialization and pre-commercialization mean?
  9. Put most simply, pre-commercialization is research as well as prototyping and experimentation
  10. Put most simply, commercialization focuses on a product engineering team — commercial product-oriented engineers and software developers rather than scientists
  11. Pre-commercialization is the Wild West while commercialization is more like urban planning
  12. Most production-scale application development will occur during commercialization
  13. Premature commercialization risk
  14. No great detail on commercialization proper since focus here is on pre-commercialization
  15. Commercialization can be used in a variety of ways
  16. Unclear how much research must be completed before commercialization of quantum computing can begin
  17. What fraction of research must be completed before commercialization of quantum computing?
  18. Research will always be ongoing and never-ending
  19. Simplified model of the main stages from pre-commercialization through commercialization
  20. The heavy lift milestones for commercializing quantum computing
  21. Short list of the major stages in development and adoption of quantum computing as a new technology
  22. More detailed list of stages in development of quantum computing as a product
  23. Stage milestones towards quantum computing as a commercial product
  24. The benefits and risks of quantum error correction (QEC) and logical qubits
  25. Three stages of deployment for quantum computing: The ENIAC Moment, configurable packaged quantum solutions, and The FORTRAN Moment
  26. Highlights of pre-commercialization activities
  27. Highlights of initial commercialization
  28. Post-commercialization — after the initial commercialization stage
  29. Highlights for subsequent stages of commercial deployment, maybe up to ten of them
  30. Prototyping and experimentation — hardware and software, firmware and algorithms, and applications
  31. Pilot projects — prototypes
  32. Proof of concept projects (POC) — prototypes
  33. Rely primarily on simulation for most prototyping and experimentation
  34. Primary testing of hardware should focus on functional testing, stress testing, and benchmarking — not prototyping and experimentation
  35. Prototyping and experimentation should initially focus on simulation of hardware expected in commercialization
  36. Late in pre-commercialization, prototyping and experimentation can focus on actual hardware — once it meets specs for commercialization
  37. Prototyping and experimentation on actual hardware earlier in pre-commercialization is problematic and an unproductive distraction
  38. Products, services, and releases — for both hardware and software
  39. Products which enable quantum computing vs. products which are enabled by quantum computing
  40. Potential for commercial viability of quantum-enabling products during pre-commercialization
  41. Preliminary quantum-enabled products during pre-commercialization
  42. How much of research results can be used intact vs. need to be discarded and re-focused and re-implemented with a product engineering focus?
  43. Hardware vs. cloud service
  44. Prototyping and experimentation — applied research vs. pre-commercialization
  45. Prototyping products and experimenting with applications
  46. Prototyping and experimentation as an extension of research
  47. Trial and error vs. methodical process for prototyping and experimentation
  48. Alternative meanings of pre-commercialization
  49. Requirements for commercialization
  50. Production-scale vs. production-capacity vs. production-quality vs. production-reliability
  51. Commercialization of research
  52. Stages for uptake of new technology
  53. Is quantum computing still a mere laboratory curiosity? Yes!
  54. When will quantum computing cease being a mere laboratory curiosity? Unknown. Not soon.
  55. By definition quantum computing will no longer be a mere laboratory curiosity as soon as it has achieved initial commercialization
  56. Could quantum computing be considered no longer a mere laboratory curiosity as soon as pre-commercialization is complete? Possibly.
  57. Could quantum computing be considered no longer a mere laboratory curiosity as soon as commercialization is begun? Plausibly.
  58. Quantum computing will clearly no longer be a mere laboratory curiosity once initial commercialization stage 1.0 has been achieved
  59. Four overall stages of research
  60. Next steps — research
  61. What is product development or productization?
  62. Technical specifications
  63. Product release
  64. When can pre-commercialization begin? Now! We’re doing it now.
  65. When can commercialization begin? That’s a very different question! Not soon.
  66. Continually update vision of what quantum computing will look like
  67. Need to utilize off the shelf technology
  68. It takes significant time to commercialize research results
  69. The goal is to turn raw research results into off the shelf technology which can then be used in commercial products
  70. Will there be a quantum winter? Or even more than one?
  71. But occasional Quantum Springs and Quantum Summers
  72. Quantum Falls?
  73. In truth, it will be a very long and very slow slog
  74. No predicting the precise flow of progress, with advances and setbacks
  75. Sorry, but there won’t be any precise roadmap to commercialization of quantum computing
  76. Existing roadmaps leave a lot to be desired, and prove that we’re in pre-commercialization
  77. No need to boil the ocean — progression of commercial stages
  78. The transition from pre-commercialization to commercialization: producing detailed specifications for requirements, architecture, functions and features, and fairly detailed design
  79. Critical technical gating factors for initial stage of commercialization
  80. Quantum advantage is the single greatest technical gating factor for commercialization
  81. Variational methods are a short-term crutch, a distraction, and an absolute dead-end
  82. Exploit quantum Fourier transform (QFT) and quantum phase estimation (QPE) on simulators during pre-commercialization
  83. Final steps before a product can be released for commercial production deployment
  84. No further significant research can be required to support the initial commercialization product
  85. Commercialization requires that the technology be ready for production deployment
  86. Non-technical gating factors for commercialization
  87. Quantum computer science
  88. Quantum software engineering
  89. No point to commercializing until substantial fractional quantum advantage is reached
  90. Fractional quantum advantage progress to commercialization
  91. Qubit capacity progression to commercialization
  92. Maximum circuit depth progression to commercialization
  93. Alternative hardware architectures may be needed for more than 64 qubits
  94. Qubit technology evolution over the course of pre-commercialization, commercialization, and post-commercialization
  95. Initial commercialization stage 1 — C1.0
  96. The main criterion for initial commercialization is substantial quantum advantage for a realistic application, AKA The ENIAC Moment
  97. Differences between validation, testing, and evaluation for pre-commercialization vs. commercialization
  98. Validation, testing, and evaluation during pre-commercialization
  99. Validation, testing, and evaluation for initial commercialization stage 1.0
  100. Initial commercialization stage 1.0 — The ENIAC Moment has arrived
  101. Initial commercialization stage 1.0 — Routinely achieving substantial quantum advantage
  102. Initial commercialization stage 1.0 — Achieving substantial quantum advantage every month or two
  103. Okay, maybe a nontrivial fraction of minimal quantum advantage might be acceptable for the initial stage of commercialization
  104. Minimum Viable Product (MVP)
  105. Automatically scalable quantum algorithms
  106. Configurable packaged quantum solutions
  107. Shouldn’t quantum error correction (QEC), logical qubits, and The FORTRAN Moment be required for the initial commercialization stage? Yes, but not feasible.
  108. Should a higher-level programming model be required for the initial commercialization stage? Probably, but may be too much to ask for.
  109. Not everyone will trust a version 1.0 of any product anyway
  110. General release
  111. Criteria for evaluating the success of initial commercialization stage 1.0
  112. Quantum ecosystem
  113. Subsequent commercialization stages — Beyond the initial ENIAC Moment
  114. Post-commercialization efforts
  115. Milestones in fine phase granularity to support quantum Fourier transform (QFT) and quantum phase estimation (QPE)
  116. Milestones in quantum parallelism and quantum advantage
  117. When might commercialization of quantum computing occur?
  118. Slow, medium, and fast paths to pre-commercialization and initial commercialization
  119. How long might pre-commercialization take?
  120. What stage are we at right now? Still early pre-commercialization.
  121. When might pre-releases and preview releases become available?
  122. Dependencies
  123. Some products which enable pre-commercialization may not be relevant to commercialization
  124. Risky bets: Some great ideas during pre-commercialization may not survive in commercialization
  125. A chance that all work products from pre-commercialization may have to be discarded to transition to commercialization
  126. Analogy to transition from ABC, ENIAC, and EDVAC research computers to UNIVAC I and IBM 701 and 650 commercial systems
  127. Concern about overreach and overdesign — Multics vs. UNIX, OS/2 vs. Windows, IBM System/38, Intel 432, IBM RT PC ROMP vs. PowerPC, Trilogy
  128. Full treatment of commercialization — a separate paper, eventually
  129. Beware betaware
  130. Vaporware — don’t believe it until you see it yourself
  131. Pre-commercialization is about constant change while commercialization is about stability and carefully controlled and compatible evolution
  132. Customers and users prefer carefully designed products, not cobbled prototypes
  133. Customers and users will seek the stability of methodical commercialization, not the chaos of pre-commercialization
  134. Need for larger capacity, higher performance, more accurate classical quantum simulators
  135. Hardware infrastructure and services buildout
  136. Hardware infrastructure and services buildout is not an issue, priority, or worry yet since the focus is on research
  137. Factors driving hardware infrastructure and services buildout
  138. Maybe a surge in demand for hardware infrastructure and services late in pre-commercialization
  139. Expect a surge in demand for hardware infrastructure and services once The ENIAC Moment has been reached
  140. Development of standards for QA, documentation, and benchmarking
  141. Business development during pre-commercialization
  142. Some preliminary commercial business development late in pre-commercialization
  143. Preliminary commercial business development early in initial commercialization stage
  144. Deeper commercial business development should wait until after pre-releases late in the initial commercialization stage
  145. Consortiums for configurable packaged quantum solutions
  146. Finalizing service level agreements (SLA) should not occur until late in the initial commercialization stage, not during pre-commercialization
  147. IBM — Still heavily focused on research as well as customers prototyping and experimenting
  148. Oracle — No hint of prime-time application commercialization
  149. Amazon — Research and facilitating prototyping and experimentation
  150. Pre-commercialization is the realm of the lunatic fringe
  151. Quantum Ready
  152. Quantum Ready — The criteria and timing will be a fielder’s choice based on needs and interests
  153. Quantum Ready — Be early, but not too early
  154. Quantum Ready — Critical technical gating factors
  155. Quantum Ready — When the ENIAC Moment has been achieved
  156. Quantum Ready — It’s never too early for The Lunatic Fringe
  157. Quantum Ready — Light vs. heavy technical talent
  158. Quantum Ready — For algorithm and application researchers anytime during pre-commercialization is fine, but for simulation only
  159. Quantum Ready — Caveat: Any work, knowledge, or skill developed during pre-commercialization runs the risk of being obsolete by the time of commercialization
  160. Quantum Ready — The technology will be constantly changing
  161. Quantum Ready — Leaders, fast-followers, and laggards
  162. Quantum Ready — Setting expectations for commercialization
  163. Quantum Ready — Or maybe people should wait for fault-tolerance?
  164. Quantum Ready — Near-perfect qubits might be a good time to get ready
  165. Quantum Ready — Maybe wait for The FORTRAN Moment?
  166. Quantum Ready — Wait for configurable packaged quantum solutions
  167. Quantum Ready — Not all staff within the organization have to get Quantum Ready at the same time or pace
  168. Shor’s algorithm implementation for large public encryption keys? Not soon.
  169. Quantum true random number generation as an application is beyond the scope of general-purpose quantum computing
  170. Summary and conclusions

This paper is only a proposed model for approaching commercialization of quantum computing

To be clear, this paper is only a proposed model for approaching commercialization of quantum computing. How reality plays out is anybody’s guess.

That said, significant parts of this paper are describing reality as it exists today and in recent years.

And, significant parts of this paper are simply describing widely accepted prospects for the future of quantum computing.

But it is my model: how it parses what goes into pre-commercialization and commercialization respectively, as well as my particular description of specific activities and their sequencing and timing, may be disputed by some — or not, as the case may be.

First things first — much more research is required

The most important takeaway from this paper is that the top and most urgent priority is much more research.

For detailed areas of quantum computing in which research is needed, see my paper:

The gaps: Why can’t quantum computing be commercialized right now?

Even if we had a vast army of 10,000 hardware and software engineers available, these technical obstacles would preclude those engineers from developing a commercial quantum computer which can achieve dramatic quantum advantage for production-scale, production-capacity, production-quality practical real-world quantum applications:

  1. Haven’t achieved quantum advantage. No point in commercializing a new computing technology which has no substantial advantage over classical computing.
  2. Too few qubits.
  3. Gate error rate is too high. Can’t get correct and reliable results.
  4. Severely limited qubit connectivity. Need full connectivity or very low error rate for SWAP networks, or some innovative connectivity scheme.
  5. Very limited coherence time. Severely limits circuit depth.
  6. Very limited circuit depth. Limited by coherence time.
  7. Insufficiently fine granularity for phase and probability amplitude. Precludes quantum Fourier transform (QFT) and quantum phase estimation (QPE). See the arithmetic sketch after this list.
  8. No support for quantum Fourier transform (QFT) and quantum phase estimation (QPE). Needed for quantum computational chemistry and other applications.
  9. Measurement error rate is too high.
  10. Need quantum error correction (QEC) or near-perfect qubits. 99.99% to 99.999% fidelity.
  11. Need modular quantum processing unit (QPU) designs.
  12. Need higher-level programming models. Current programming model is too primitive.
  13. Need for high-level quantum-native programming languages. Currently working at the equivalent of classical assembly language or machine language — individual quantum logic gates.
  14. Too difficult to transform an application problem statement into a quantum algorithmic application. Need for methodology and automated tools.
  15. Too few applications are in a form that can readily be implemented on a quantum computer. Need for methodology.
  16. Lack of a rich library of high-level algorithmic building blocks.
  17. Need for quantum algorithm and application metaphors, design patterns, frameworks, and libraries.
  18. Inability to achieve dramatic quantum advantage. Or even a reasonable fraction.
  19. No real ability to debug complex quantum algorithms and applications.
  20. Unable to classically simulate more than about 40 qubits. See the arithmetic sketch after this list.
  21. No conceptual models or examples of automatically scalable 40-qubit quantum algorithms.
  22. No conceptual models or examples of realistic and automatically scalable quantum applications.
  23. No conceptual models or examples of complete and automatically scalable configurable packaged quantum solutions.
  24. Not ready for production deployment in general. Still a mere laboratory curiosity.
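
To make two of these gaps concrete, here is a small back-of-the-envelope sketch in Python. The first calculation shows why phase granularity matters: an n-qubit quantum Fourier transform requires controlled-phase rotations as fine as 2π/2^n radians, which rapidly outruns the phase precision of current hardware. The second shows why classical simulation stalls at around 40 qubits: a full state vector holds 2^n complex amplitudes. These figures are simple arithmetic, not measurements of any particular machine.

```python
import math

def smallest_qft_rotation(n_qubits: int) -> float:
    """Finest controlled-phase rotation angle (radians) in an n-qubit QFT."""
    return 2 * math.pi / 2 ** n_qubits

def statevector_memory_bytes(n_qubits: int) -> int:
    """Memory for a full state vector: 2^n complex amplitudes at 16 bytes each
    (double-precision real part plus double-precision imaginary part)."""
    return 2 ** n_qubits * 16

for n in (20, 30, 40, 50):
    print(f"{n} qubits: finest QFT rotation = {smallest_qft_rotation(n):.1e} rad, "
          f"state vector = {statevector_memory_bytes(n) / 2**40:,.6f} TiB")

# A 40-qubit state vector already needs 16 TiB of RAM, and 50 qubits needs
# 16 PiB, while a 40-qubit QFT calls for rotations near 5.7e-12 radians.
```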

Those are the most critical gating items, all requiring significant research and innovation.

For more on the concept of dramatic quantum advantage, see my paper:

For more on the concept of fractional quantum advantage, see my paper:

What effort is needed to fill the gaps

A brief summary of what’s needed to fill the gaps so that quantum computing can be commercialized:

  1. Much more research. Enhancing existing technologies. Discovery of new technologies. New architectures. New approaches to deal with technologies. New programming models.
  2. Much more innovation.
  3. Much more algorithm efforts.
  4. Much greater reliance on classical quantum simulators. Configured to act as realistic quantum computers, current, near-term, medium-term, and longer-term.
  5. Robust technical documentation.
  6. Robust training materials.
  7. General reliance on open source, transparency, and free access to all technologies.
  8. Comprehensive examples for algorithms, metaphors, design patterns, frameworks, applications, and configurable packaged quantum solutions.
  9. Resistance to urges and incentives to prematurely commercialize the technology.
  10. Many prototypes, experimentation, and much engineering evolution. Who knows what the right mix is for successful quantum solutions to realistic application problems.

Quantum advantage is the single greatest technical obstacle to commercialization

The whole point of quantum computing is to offer a dramatic performance advantage over classical computing. There’s no point in commercializing a new computing technology which has no substantial performance advantage over classical computing.

For a general overview of quantum advantage and quantum supremacy, see my paper:

For more detail on dramatic quantum advantage, see my paper:

As well as my paper on achieving at least a fraction of dramatic quantum advantage:

Quantum computing remains a mere laboratory curiosity

For now and the indefinite future, quantum computing remains a mere laboratory curiosity, not yet ready for the development and deployment of production-scale quantum applications to solve real-world problems.

For more on laboratory curiosities, read my paper:

So, when will quantum computing be able to advance beyond mere laboratory curiosity? Once all of the major technology gaps have been adequately addressed. When? Not soon.

The critical need for quantum computing is much further and deeper research

The critical need at this stage is much further and deeper research for all aspects of quantum computing, on all fronts. Too many questions, uncertainties, and unknowns remain before there will be sufficient research results upon which to base a robust and realistic commercialization strategy.

For more on the many areas of quantum computing needing research, read my paper:

What do commercialization and pre-commercialization mean?

Commercialization is a vague and ambiguous term, but the essence is a process which culminates in a finished product which is marketed to be used to solve practical, real-world problems for its customers and users.

Pre-commercialization is also a vague and ambiguous term, but the essence is that it is all of the work which must be completed before a serious attempt at commercialization can begin.

Product innovation generally proceeds through three stages:

  1. Research. Discovery, understanding, and invention of the science and raw technology. Some products require deep research, some do not.
  2. Prototyping and experimentation. What might a product look like? How might it work? How might people use it? Try out some ideas and see what works and what doesn’t. Some products require a lot of trial and error, some do not.
  3. Productization. We know what the product should look like. Now build it. Product engineering. Much more methodical, avoiding trial and error.

Pre-commercialization will generally refer to either:

  1. Research alone. Prototyping and experimentation will be considered part of commercialization. Common view for a research lab.
  2. Research as well as prototyping and experimentation. Vague and rough ideas need to be finalized before commercialization can begin. Commercialization, productization, product development, and product engineering will all be exact synonyms.

The latter meaning is most appropriate for quantum computing. Quantum computing is still in the early stage of product innovation, with:

  1. Much more research is needed. Lots of research.
  2. No firm vision or plan for the product. What the product will actually look like, feel like, and how it will actually be used. So…
  3. Much prototyping and experimentation is needed. Lots of it. This will shape and determine the vision and details of the product.

So, for quantum computing:

  1. Pre-commercialization means research as well as prototyping and experimentation. This will continue until the research advances to the stage where sufficient technology is available to produce a viable product that solves production-scale practical real-world problems. At that point, all significant technical issues will have been resolved, so that commercialization can proceed with minimal technical risk.
  2. Commercialization means productization after research as well as prototyping and experimentation are complete. Productization means a shift in focus from research to a product engineering team — commercial product-oriented engineers and software developers rather than scientists.

Put most simply, pre-commercialization is research as well as prototyping and experimentation

Raw research results can be impressive, but it can take extensive prototyping and experimentation before the raw results can be transformed into a form that a product engineering team can then turn into a viable commercial product.

Putting the burden of prototyping and experimentation on a product engineering team is quite simply a waste of time, energy, resources, and talent. Prototyping and experimentation and product engineering are distinct skill sets — expecting one to perform the tasks of the other is an exercise in futility. Sure, it can be done, much as one can forcefully pound a square peg into a round hole, but the results are far from desirable.

Put most simply, commercialization focuses on a product engineering team — commercial product-oriented engineers and software developers rather than scientists

Prototyping and experimentation may not require the same level of theoretical intensity as research, but it is still focused on posing and answering hard technical questions rather than the mechanics of conceptualizing, designing, and developing a commercially-viable product.

But a product engineering team consists of commercial product-oriented engineers and software developers focused on applying all of those answers towards the mechanics of conceptualizing, designing, developing, and deploying a commercially-viable product.

Pre-commercialization is the Wild West while commercialization is more like urban planning

By definition, nobody really knows the answers or the rules or even the questions during pre-commercialization. It has all the rules of a knife fight — there are no rules in a knife fight. It really is like the Wild West, where anything goes.

In contrast, commercialization presumes that all the answers are known and is a very methodical process to get from zero to a complete commercial product. It is just as methodical as urban planning.

Most production-scale application development will occur during commercialization

Although organizations can certainly experiment with quantum computing during pre-commercialization, it generally won’t be possible to develop production-scale applications until commercialization. Too much is likely to change, especially the hardware and details such as qubit fidelity.

The alpha testing stage of the initial commercialization stage is probably the earliest that production-oriented organizations can begin to get serious about quantum computing. Any work products produced during pre-commercialization will likely need to be redesigned and reimplemented for commercialization. Sure, some minor amount of work may be salvageable, but that should be treated as the exception rather than the rule.

The beta testing stage of initial commercialization is probably the best bet for getting really serious about design and development of production-scale quantum algorithms and applications.

Besides, quite a few organizations will prefer to wait for the third or fourth stages of commercialization before diving too deep and making too large a commitment to quantum computing. Betting on the first release of any new product is a very risky proposition, not for the faint of heart.

All of that said, I’m sure that there will be more than a few hardy organizations, led by The Lunatic Fringe of early adopters, who will at least attempt to design and develop production-scale algorithms and applications during pre-commercialization, at least in the later stages, once the technology has matured to a fair degree and technical risk seems at least partially under control. After all, somebody has to be the first, the first to achieve The ENIAC Moment, with the first production-scale practical real-world quantum application. But such cases will be the exception rather than the rule for most organizations.

Premature commercialization risk

A key motivation for this paper is to attempt to avoid the incredible technical and business risks that would come from premature commercialization of an immature technology — trying to create a commercial product before the technology is ready, feasible, or economically viable.

Quantum computing has come a long way over several decades and especially in the past few years, but still has a long way to go before it is ready for prime-time production deployment of production-scale practical real-world applications.

So, this paper focuses on pre-commercialization — all of the work that needs to be completed before a product engineering team can even begin serious, low-risk planning and development of a commercial product.

No great detail on commercialization proper since focus here is on pre-commercialization

Commercialization itself is discussed in this paper to some degree, but not as a main focus since the primary intent is to highlight what work should be considered pre-commercialization vs. product engineering for commercialization.

This paper will provide a brief summary of what happens in product development or productization or commercialization, but only a few high-level details since the primary focus here is on pre-commercialization — the need for deep research, prototyping, and experimentation to answer all of the hard questions so that product engineering can focus on engineering rather than research, and on being methodical rather than on trial and error.

The other reason to focus on commercialization to any extent in this paper is simply to clarify what belongs in commercialization rather than in pre-commercialization — to avoid the premature commercialization risk described in the preceding section.

All of that said, the coverage of commercialization in this paper should be sufficient until we get much closer to the end of pre-commercialization and the onset of commercialization is imminent. In fact, consider the coverage of commercialization in this paper to be a draft paper or framework for full coverage of commercialization.

Commercialization can be used in a variety of ways

Technically, commercialization nominally means for profit, for financial gain. Although quantum computing will generally be used for financial gain, it may also be used, for example, in government applications, which of course are not for commercial gain. Even so, the vendor supplying the quantum computers to the government is in fact seeking financial gain, so even government use fits the technical definition. That said, a government could in theory design and build its own quantum computer, with all of the qualities of a vendor-supplied quantum computer but without any intention of financial gain, so that technically would not constitute a commercial deployment per se.

Technically, it might be better if we referred to practical adoption rather than commercialization, but commercialization has so many positive and clear connotations that it’s worth the minimal inaccuracy of special cases such as the proprietary government case just mentioned.

An academic research lab might build and operate a quantum computer for productive application, such as to process data from a research experiment. Once again, there is no direct financial motive, so calling it commercial seems somewhat misleading. But, once again, the focus of this paper is whether the quantum computer has the capabilities and capacity for practical application. The lab-oriented quantum computer will have to meet quite a few of the same requirements as a true commercial product for financial gain. So, once again, I see no great loss in applying the concepts of commercialization to even a lab-only but practical quantum computer.

Another use for the term commercialization is simply the intention to seek practical application for a new technology still in the research lab. The technology may be ready for practical application, but it simply needs to be commercialized.

Another use of the term is the final stage of development of a new product, immediately after validation, but before it is ready to be polished, packaged, and released as a commercial product.

From a practical perspective, I see two elements of the essential meaning of commercialization as used in this paper:

  1. It is no longer a mere laboratory curiosity.
  2. It is finally ready for production deployment. Production-scale, production-capacity, production-reliability, production-quality practical real-world applications.

Sure, some applications will not be for commercial gain. That’s okay.

Sure, some quantum computers will be custom developed for the organizations using them, neither acquired as a commercial product of a vendor, nor offered as a commercial product to other organizations. That’s okay too.

Unclear how much research must be completed before commercialization of quantum computing can begin

How much research must be completed before quantum computing is ready to be commercialized? Unfortunately, the simple and honest answer is that we just don’t know.

Technically, the answer is all the research that is needed to support the features, capabilities, capacity, and performance of the initial stage of commercialization and possibly the first few subsequent stages of commercialization. That’s the best, proper answer.

Actually, there are a lot of details of at least the areas or criteria for research which must be completed before commercialization — listed in the section Critical technical gating factors for initial stage of commercialization. There are also non-technical factors, listed in the section Non-technical gating factors for commercialization.

And to be clear, all of those factors are required to be resolved prior to the beginning of commercialization. That’s the whole point of pre-commercialization — resolving all technical obstacles so that product engineering will be a relatively smooth process.

But even as specific as that is, there is still the question of how much of that can actually be completed and whether a more limited, stripped-down vision of quantum computing is the best that can be achieved in the medium-term future.

What fraction of research must be completed before commercialization of quantum computing?

Even that question has an unclear, vague, and indeterminate answer. The only answer that has meaning is that all of the research needed to achieve the technical gating factors listed in the section Critical technical gating factors for initial stage of commercialization must be completed, by definition.

So, 100% of the research underpinning the technical gating factors must be completed before initial commercialization can begin. Granted, there will be ongoing research needed for subsequent commercialization stages.

Research will always be ongoing and never-ending

As already noted, research must continue even after all of the technical answers needed for initial commercialization have been achieved.

Each of the subsequent commercialization stages after the initial commercialization stage will likely require its own research, which must be completed well in advance of the use and deployment of technology dependent on the results of the research.

Simplified model of the main stages from pre-commercialization through commercialization

The simplest model is the best place to start, evolving from raw idea to finished product line:

  1. Laboratory curiosity. Focus on research. Many questions and technological uncertainties to be resolved.
  2. The ENIAC Moment. Finally achieve production-scale capabilities. But still for elite teams only.
  3. Initial commercialization. An actual commercial product. No longer a mere laboratory curiosity.
  4. The FORTRAN Moment. Beginning of widespread adoption. No longer only for elite teams.
  5. Mature commercialization. Widespread adoption. Methodical development and release of new features, new capabilities, and improvements in performance, capacity, and reliability.

The heavy lift milestones for commercializing quantum computing

Many milestones must be achieved, but these are the really big ones:

  1. At least a few dramatic breakthroughs. Incremental advances will not be enough.
  2. Near-perfect qubits. Well beyond noisy. Four or five nines of fidelity.
  3. High intermediate-scale qubit capacity. 256 to 768 high-fidelity qubits.
  4. High qubit connectivity. If not full any to any, then at least enough that most algorithms won’t suffer. May (likely) require a more advanced quantum processor architecture.
  5. Extended qubit coherence and deeper quantum circuits. At least dozens of gates, then 100, 250, and preferably 1,000 gates. See the fidelity arithmetic sketch after this list.
  6. The ENIAC Moment. First credible application.
  7. Higher-level programming model and high-level programming language.
  8. Substantial libraries of high-level algorithmic building blocks.
  9. Substantial metaphors and design patterns.
  10. Substantial algorithm and application frameworks.
  11. Error correction and logical qubits.
  12. The FORTRAN Moment. Non-elite teams can develop quantum applications.
  13. First configurable packaged quantum solution.
  14. Significant number of configurable packaged quantum solutions.
  15. Quantum networking. Entangled quantum state between separate quantum computers.
  16. The BASIC Moment. Anybody can develop a practical quantum application.
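
As a rough illustration of why near-perfect qubits and deeper circuits go hand in hand, here is a hedged back-of-the-envelope calculation in Python. If each gate independently succeeds with probability (1 − ε), a circuit of n gates succeeds with roughly (1 − ε)^n. This simplification ignores error correlations, measurement errors, and error mitigation, but it shows why four to five nines of gate fidelity are needed for circuits of 1,000 gates.

```python
def circuit_success_probability(error_rate: float, gate_count: int) -> float:
    """Crude model: gates fail independently with the given per-gate error rate."""
    return (1.0 - error_rate) ** gate_count

# Per-gate error rates for three, four, and five nines of gate fidelity.
for error_rate in (1e-3, 1e-4, 1e-5):
    for depth in (100, 250, 1000):
        p = circuit_success_probability(error_rate, depth)
        print(f"fidelity {1 - error_rate:.5f}, {depth:5d} gates: success ~ {p:.1%}")

# Three nines (99.9%) yields only ~37% success at 1,000 gates;
# four nines (99.99%) yields ~90%; five nines (99.999%) yields ~99%.
```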

Short list of the major stages in development and adoption of quantum computing as a new technology

  1. Initial conception.
  2. Theory fleshed out.
  3. Rudimentary lab experiments.
  4. First real working lab quantum computer.
  5. Initial experimentation.
  6. Incremental growth of initial capabilities coupled with incremental experimentation, rinse and repeat.
  7. First hints of usable technology coupled with experimental iteration.
  8. Strong suggestion that usability is imminent with ongoing experimental iteration.
  9. The ENIAC Moment. Still just a laboratory curiosity. Still not quite ready for commercialization.
  10. Multiple ENIAC moments.
  11. Flesh out the technology.
  12. Enhance the usability of the technology with experimental iteration. Occasionally elite deployments.
  13. The FORTRAN Moment. The technology is finally usable and ready for mass adoption.
  14. Widen applicability and adoption of the technology, iteratively. Production deployments are common.
  15. Successive generations of the technology. Broaden adoption. Develop standards.

More detailed list of stages in development of quantum computing as a product

Of course the reality will have its own twists and turns, detours, setbacks, and leaps forward, but these items generally will need to occur to get from theory and ideas to a commercial product:

  1. Initial, tentative research. Focusing on theory.
  2. More, deeper research. Focusing on laboratory experimentation.
  3. Even more research. Homing in on solutions and applications.
  4. Peer-reviewed publication.
  5. Trial and error.
  6. Simulation.
  7. Hardware innovation.
  8. Hardware evolution.
  9. Hardware maturity.
  10. Hardware validation for production use.
  11. Algorithm support innovation.
  12. Algorithm design theory innovation.
  13. Support software and tools conceptualization.
  14. Support software and tools development. Including testing.
  15. Technology familiarization.
  16. Documentation.
  17. Full technology training.
  18. Experimental algorithm design.
  19. Prototyping stage.
  20. Prototype applications.
  21. Production prototype algorithm design.
  22. Production algorithm design.
  23. QA methodology development.
  24. QA tooling prototyping.
  25. QA tooling production design.
  26. QA tooling production development.
  27. QA tooling production checkout and validation. Ready for use for validating production applications.
  28. Training curriculum. Syllabus.
  29. Marketing literature. White papers. Brochures. Demonstration videos. Podcasts.
  30. Standards development.
  31. Standards adoption.
  32. Standards adherence.
  33. Standards adherence validation.
  34. Regulatory approval.

Stage milestones towards quantum computing as a commercial product

Actual stages and milestones will vary, but generally these are many of the major milestones on the path from theory to a commercial product for quantum computing.

  1. Trial and error.
  2. Initial milestone. Something that seems to function.
  3. Evolution. Increasing functions, features, capabilities, capacity, performance, and reliability.
  4. Further functional milestones. Incremental advances. Occasional leaps.
  5. Initial belief in usability. “It’s ready!” — we think.
  6. Feedback from initial trial use. “Okay, but…”
  7. Further improvements. “This should do it!”
  8. Rinse and repeat. Possibly 4–10 times — or more.
  9. Questions about usability. Belief that “Okay, it’s not really ready.”
  10. A couple more trials and refinements.
  11. General acceptance that now it actually does appear ready.
  12. General release.
  13. 2–10 updates and refinements. Fine tune.
  14. Finally it’s ready. General belief that the technology actually is ready for production-scale development and deployment.
  15. Sequence of stages for testing for deployment readiness.
  16. SLA development. Service level agreement. Actual commitment, with actual penalties.
  17. Initial production deployment.
  18. Some possible — likely — hiccups.
  19. Further refinements based on reality of real-world production deployment.
  20. Final refinement that leads to successful deployment.
  21. Rinse and repeat for further production deployments. Further refinements.
  22. Acceptance of success. General belief that a sufficient number of successful production deployments have been made that production deployment is a low-risk proposition.
  23. Rinse and repeat for several more production deployments.
  24. Minimal hiccups and refinements needed.
  25. We’re there. General conclusion that production deployment is a routine matter.

The benefits and risks of quantum error correction (QEC) and logical qubits

Noisy NISQ qubits are too problematic for widespread adoption of quantum computing. Sure, elite teams can work around the problems to some extent, but even then it can remain problematic and an expensive and time-consuming proposition. Quantum error correction (QEC) fully resolves the difficulty of noisy qubits, providing perfect logical qubits which greatly facilitate quantum algorithm design and development of quantum applications.

Unfortunately, there are difficulties achieving quantum error correction as well. It will happen, eventually, but it’s just going to take time, a lot of time. Maybe two to three years, or maybe five to seven or even ten years.

Short of full quantum error correction and perfect logical qubits, semi-elite teams can probably make do with so-called near-perfect qubits which are not as perfect as error-corrected logical qubits, but close enough for many applications.
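
To give a rough sense of why full quantum error correction is such a heavy lift, here is a sketch based on the widely studied surface code, used purely as an example; other QEC schemes have broadly comparable overhead. A distance-d surface code uses roughly 2d² physical qubits per logical qubit, and its logical error rate falls off roughly as (p/p_th) raised to the power (d+1)/2, where p is the physical error rate and p_th (around 1%) is the threshold. These are textbook scaling approximations, not specifications of any vendor's implementation.

```python
def surface_code_overhead(distance: int) -> int:
    """Approximate physical qubits per logical qubit for a distance-d surface
    code: d*d data qubits plus roughly d*d - 1 ancilla qubits."""
    return 2 * distance ** 2 - 1

def logical_error_rate(p_physical: float, distance: int,
                       p_threshold: float = 1e-2) -> float:
    """Rough scaling: logical errors are suppressed as (p/p_th)^((d+1)/2)."""
    return (p_physical / p_threshold) ** ((distance + 1) // 2)

p = 1e-3  # an assumed noisy-but-plausible physical two-qubit error rate
for d in (3, 7, 15, 25):
    print(f"distance {d:2d}: ~{surface_code_overhead(d):4d} physical qubits "
          f"per logical qubit, logical error ~ {logical_error_rate(p, d):.1e}")

# Even at p = 0.1%, pushing logical error rates down to ~1e-13 takes
# distance ~25: over a thousand physical qubits for every logical qubit.
```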

For more discussion of quantum error correction and logical qubits, see my paper:

Three stages of deployment for quantum computing: The ENIAC Moment, configurable packaged quantum solutions, and The FORTRAN Moment

A new technology cannot be deployed until it has been proven to be capable of solving production-scale practical real-world problems.

The first such deployable stage for quantum computing will be The ENIAC Moment, which is the very first time that anyone is able to demonstrate that quantum computing can achieve a substantial fraction of substantial quantum advantage for a production-scale practical real-world problem.

The second deployable stage, beyond additional ENIAC moments for more applications, is the deployment of a configurable packaged quantum solution, which allows a customer to develop a new application without writing any code, simply by configuring a packaged quantum solution for a type of application.

A configurable packaged quantum solution is essentially a generalized quantum application which is parameterized so that it can be customized for a particular situation using only configuration parameters rather than writing code or designing or modifying quantum algorithms directly.
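
As a purely hypothetical illustration of what configuring such a packaged solution might look like, the sketch below configures an imaginary portfolio-optimization package using only parameters, with no quantum algorithm code. Every name here (the package, its fields, its API) is invented for illustration; no such product exists today.

```python
# Hypothetical sketch: configuring a packaged quantum solution with parameters
# only. The package name, fields, and API below are invented for illustration.
from dataclasses import dataclass

@dataclass
class PortfolioSolutionConfig:
    """Configuration for an imaginary packaged portfolio-optimization solution."""
    asset_universe: str = "sp500"      # which assets to consider
    max_positions: int = 40            # cap on portfolio size
    risk_tolerance: float = 0.15       # annualized volatility ceiling
    rebalance_frequency: str = "monthly"
    backend: str = "auto"              # let the package pick hardware or simulator

config = PortfolioSolutionConfig(max_positions=25, risk_tolerance=0.10)

# The customer never writes or modifies a quantum circuit; the package maps
# this configuration onto its internal, pre-built quantum algorithms.
# portfolio = quantum_portfolio_package.solve(config)   # hypothetical call
```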

The third and major deployable stage is The FORTRAN Moment, enabled in large part by quantum error correction (QEC) and logical qubits, as well as a higher-level programming model and high-level programming language which allow less-elite technical teams to develop quantum applications without the intensity of effort required by ENIAC moments, which require very elite technical teams.

Highlights of pre-commercialization activities

Just to summarize the more high-profile activities of pre-commercialization, before the technology is even ready to consider detailed product planning and methodical product engineering for a commercial product (commercialization):

  1. Deeper research. Both theoretical and deep pure research, as well as basic and applied research. Plodding along, incrementally and sometimes with big leaps, increasing qubit capacity, reducing error rates, and increasing coherence time to enable production-scale quantum algorithms.
  2. Support software and tools for researchers. For researchers, not to be confused with what a commercial product for customers and users will need.
  3. Support software and tools conceptualization. What might be needed in a commercial product for customers and users.
  4. Support software and tools development. Including testing.
  5. Initial development of quantum computer science. A new field.
  6. Initial development of quantum software engineering. A new field.
  7. Initial development of a quantum software development life cycle (QSDLC) methodology. Similar to SDLC for classical computing, but adapted for quantum computing.
  8. Initial effort at higher level and higher performance algorithmic building blocks.
  9. Initial efforts at algorithm and application libraries, metaphors, design patterns, and frameworks.
  10. Initial efforts at configurable packaged quantum solutions.
  11. Development of new and higher-level programming models.
  12. Development of high-level quantum programming languages.
  13. Develop larger capacity, higher performance, more accurate classical quantum simulators. Enable algorithm designers and application developers to test out ideas without access to quantum hardware, especially for quantum computers expected over the next few years.
  14. Need for very elite staff. Until the FORTRAN Moment is reached in a post-initial stage of commercialization.
  15. Need for detailed and accessible technical specifications. Even in pre-commercialization.
  16. Basic but reasonable quality documentation.
  17. Basic but reasonable quality tutorials. Freely accessible.
  18. Some degree of technical training. Freely accessible.
  19. Initial efforts at development of standards for QA, documentation, and benchmarking.
  20. Preliminary but serious effort at testing. QA, unit testing, subsystem testing, system testing, performance testing, and benchmarking.
  21. Need to reach the ENIAC Moment. Something closely resembling a real, production-scale application that solves a practical real-world problem. There’s no point commercializing if this has not been achieved.
  22. Focus on near-perfect qubits. Good enough for many applications.
  23. Research in quantum error correction (QEC). Still many technical uncertainties. Much research is required.
  24. Defer full implementation of quantum error correction until a subsequent commercialization stage. It’s still too hard and too demanding to expect in the initial commercialization stage.
  25. Focus on methodology and development of scalable algorithms. Minimal function on current NISQ hardware, but automatic expansion of scale as the hardware improves.
  26. Focus on preview products. Usable to some extent — by elite staff, for evaluation of the technology, but not to be confused with viable commercial products. Not for production use. No SLA.
  27. No SLA. Service level agreements — contractual commitments and penalties for function, availability, reliability, performance, and capacity. No commercial production deployment prior to initial commercialization stage anyway.
  28. Minimal hardware infrastructure and services buildout.
  29. Significant intellectual property creation. Hopefully mainly open source.
  30. Some degree of intellectual property protection. Hopefully minimal exclusionary patents.
  31. Some degree of intellectual property licensing.
  32. Possibly some intellectual property theft.
  33. Only minimal focus on maintainability. Generally, work produced during pre-commercialization will be rapidly superseded by revisions. The focus is on speed of getting research results and speed of evolution of existing work, not long-term product maintainability.
  34. End result of pre-commercialization: sufficient detail to produce detailed specifications for requirements, architecture, functions and features, and fairly detailed design to kick off commercialization.

Highlights of initial commercialization

Once pre-commercialization has been completed, a product engineering team can be chartered to develop a deployable commercial product. Just to summarize the more high-profile activities of commercialization:

  1. Formalize results from pre-commercialization into detailed specifications for requirements, architecture, functions and features, and fairly detailed design to kick off commercialization. Chartering of a product engineering team.
  2. Focus on a minimum viable product (MVP) for initial commercialization.
  3. Substantial quantum advantage. A 1,000,000X performance advantage over classical solutions. Or, at least a significant fraction — well more than 1,000X. Preferably full dramatic quantum advantage — a one-quadrillion performance advantage. This can be clarified as pre-commercialization progresses; at this moment we have no visibility. See the arithmetic sketch after this list.
  4. Significant advances in qubit technology. Mix of incremental advances and occasional giant leaps, but sufficient to move well beyond noisy NISQ qubits to some level of near-perfect qubits, a minimum of four to five nines of qubit fidelity for two-qubit gates and measurement.
  5. Focus on near-perfect qubits. Good enough for many applications.
  6. Research in quantum error correction (QEC). Still many technical uncertainties. Much research is required.
  7. Defer full implementation of quantum error correction until a subsequent commercialization stage. It’s still too hard and too demanding to expect in the initial commercialization stage.
  8. Serious degree of support software and tools development. Including testing.
  9. Further development of quantum computer science.
  10. Further development of quantum software engineering.
  11. Further development of a quantum software development life cycle (QSDLC) methodology.
  12. Some degree of higher level and higher performance algorithmic building blocks.
  13. Some degree of algorithm and application libraries, metaphors, design patterns, and frameworks.
  14. Some degree of configurable packaged quantum solutions.
  15. Some degree of example algorithms and applications. That reflect practical, real-world applications. And clearly demonstrate quantum parallelism.
  16. Development of new and higher-level programming models.
  17. May or may not include the development of a high-level quantum programming language.
  18. Continue to develop larger capacity, higher performance, more accurate classical quantum simulators. Enable algorithm designers and application developers to test out ideas without access to quantum hardware, especially for quantum computers expected over the next few years.
  19. Focus on methodology and development of scalable algorithms. Minimal function on current limited hardware, but automatic expansion of scale as the hardware improves.
  20. Need for higher quality documentation.
  21. Need for great tutorials. Freely accessible.
  22. Serious need for technical training. But still focused primarily on elite technical staff at this stage.
  23. Serious efforts at development of standards for QA, documentation, and benchmarking.
  24. Robust efforts at testing. QA, unit testing, subsystem testing, system testing, performance testing, and benchmarking.
  25. SLA. Service level agreements — contractual commitments and penalties for function, availability, reliability, performance, and capacity. Essential for commercial production deployment.
  26. Minimal interoperability. More of an aspiration than a reality.
  27. Sufficient hardware infrastructure and services buildout. To meet initial expected demand. And some expectation for initial growth.
  28. Increased intellectual property issues.
  29. Increased focus on maintainability. Work during commercialization needs to be durable and flexible. Reasonable speed to evolve features and capabilities.

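To connect these advantage thresholds to qubit counts, recall that an idealized quantum computation exploiting k qubits of quantum parallelism evaluates 2^k states at once, so the thresholds used in this paper map onto powers of two. The sketch below does the arithmetic; it deliberately ignores real-world overheads such as circuit repetitions (shots) and error mitigation.

```python
import math

def qubits_for_advantage(speedup: float) -> int:
    """Qubits of ideal quantum parallelism needed for a 2^k-fold speedup."""
    return math.ceil(math.log2(speedup))

# Advantage thresholds as used in this paper's terminology.
thresholds = {
    "minimal quantum advantage": 1_000,          # 1,000X
    "substantial quantum advantage": 1_000_000,  # 1,000,000X
    "dramatic quantum advantage": 10 ** 15,      # one quadrillion X
}

for name, speedup in thresholds.items():
    k = qubits_for_advantage(speedup)
    print(f"{name}: {speedup:,}X needs about 2^{k}, or {k} qubits of parallelism")

# 1,000X ~ 2^10, 1,000,000X ~ 2^20, one quadrillion ~ 2^50, assuming ideal,
# noise-free parallelism with no shot-count or error-mitigation overhead.
```
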
Post-commercialization — after the initial commercialization stage

Post-commercialization simply refers to any of the commercialization stages after the initial commercialization stage — also referred to here as subsequent commercialization stages.

Highlights for subsequent stages of commercial deployment, maybe up to ten of them

Achieving the initial commercialization stage will be a real watershed moment, but still too primitive for widespread deployment. A long sequence of subsequent commercialization stages will be required to achieve the full promise of quantum computing. Some highlights:

  1. Multiple ENIAC Moments.
  2. Multiple configurable packaged quantum solutions.
  3. Consortiums for configurable packaged quantum solutions.
  4. Ongoing focus on methodology and development of scalable algorithms. Limited function on current hardware, but automatic expansion of scale as the hardware improves.
  5. Dramatic quantum advantage. A one-quadrillion performance advantage over classical solutions. May take a few additional commercialization stages for the hardware to advance.
  6. Ongoing advances in qubit technology. Mix of incremental advances and occasional giant leaps.
  7. Maturation of support software and tools development. Including testing.
  8. Maturation of quantum computer science.
  9. Maturation of quantum software engineering.
  10. Maturation of a quantum software development life cycle (QSDLC) methodology.
  11. Extensive higher level and higher performance algorithmic building blocks.
  12. Extensive algorithm and application libraries, metaphors, design patterns, and frameworks.
  13. Extensive configurable packaged quantum solutions. Very common — the typical quantum application deployment model for most organizations.
  14. Excellent collection of example algorithms and applications that reflect practical, real-world applications and clearly demonstrate quantum parallelism.
  15. Full exploitation of new and higher-level programming models.
  16. Development of high-level quantum programming languages.
  17. Full quantum error correction (QEC) and perfect logical qubits. Will take a few stages beyond the initial commercialization.
  18. The FORTRAN Moment is reached. Widespread adoption is now possible.
  19. Training of non-elite staff for widespread adoption. Gradually expand from elite-only technical staff during the initial stage to non-elite technical staff as the product gets easier to use with higher-level algorithmic building blocks and higher-level programming models and languages.
  20. Increasing interoperability.
  21. Dramatic hardware infrastructure and services buildout. To meet demand. And anticipate growth.
  22. Increased intellectual property issues.
  23. The BASIC Moment is reached. Anyone can develop a practical quantum application.
  24. Quantum networking.
  25. Quantum artificial intelligence.
  26. Universal quantum computer. Merging full classical computing features.

Prototyping and experimentation — hardware and software, firmware and algorithms, and applications

In this paper, prototyping and experimentation include:

  1. Hardware.
  2. Firmware.
  3. Software.
  4. Algorithms.
  5. Applications.
  6. Test data.

So this includes working quantum computers, algorithms, and applications.

Prototypes won’t necessarily include all features and capabilities of a production product, but at least enough to demonstrate and evaluate capabilities and to enable experimentation with algorithms and applications.

Pilot projects — prototypes

A pilot project is generally the same as a prototype, although prototyping may be used at the outset of any project, while a pilot project is usually a single, specific project chosen to prove an overall approach to solving a particular application problem rather than a general approach to all application problems. In any case, references in this paper to prototypes and prototyping apply equally well to pilot projects.

Proof of concept projects (POC) — prototypes

A proof of concept project (POC) is generally the same as a prototype, or even a pilot project, although a POC project tends to focus on a very narrow, specific concept (hence the name), whereas a prototype tends to be larger in scale than a POC, and a pilot project tends to be a full-scale project. In any case, references in this paper to prototypes and prototyping apply equally well to proof of concept projects.

Rely primarily on simulation for most prototyping and experimentation

Early hardware can be limited in availability, limited in capacity, error-prone, and lacking in critical features and capabilities. Life is so much easier using a clean simulator for prototyping and experimentation with algorithms and applications — provided that it is configured to match target hardware, primarily what is expected at commercialization, not hardware that is not ready for commercialization.
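
As a minimal sketch of this practice, assuming Qiskit and its Aer simulator are available, a noise model can be configured to approximate the qubit and gate fidelities expected of target hardware at commercialization. The error rates below are illustrative assumptions, not vendor specifications:

```python
# Minimal sketch (assuming Qiskit and the Qiskit Aer simulator): configure a
# noise model to approximate the target hardware expected at commercialization,
# then run algorithms against the simulator instead of immature real devices.
# The error rates below are illustrative assumptions, not vendor specifications.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

noise = NoiseModel()
# Assume near-perfect target qubits: ~99.99% one-qubit, ~99.9% two-qubit fidelity.
noise.add_all_qubit_quantum_error(depolarizing_error(1e-4, 1), ["x", "h", "rz"])
noise.add_all_qubit_quantum_error(depolarizing_error(1e-3, 2), ["cx"])

simulator = AerSimulator(noise_model=noise)

# A trivial two-qubit test circuit (Bell state) standing in for a real algorithm.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

counts = simulator.run(transpile(qc, simulator), shots=1000).result().get_counts()
print(counts)  # mostly '00' and '11', with a small noise-induced remainder
```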

Primary testing of hardware should focus on functional testing, stress testing, and benchmarking — not prototyping and experimentation

Testing of hardware should be very rigorous and methodical, based on carefully defined specifications, during both pre-commercialization and commercialization.

Refrain from relying on prototyped algorithms and applications and ad hoc experimentation for critical testing of hardware, especially during pre-commercialization when everything is in a state of flux.

Prototyping and experimentation should initially focus on simulation of hardware expected in commercialization

Algorithm designers and application developers should not attempt to prototype or experiment with new hardware until it has been thoroughly tested and confirmed to meet its specifications.

Try to avoid hardware which is not ready for commercialization — focus on simulation at all stages.

Running on real hardware is more of a final test than a preferred design and development method. If the hardware doesn’t meet its specifications, wait for the hardware to be improved rather than distort the algorithms and applications to fit hardware which simply isn’t ready for commercialization.

Late in pre-commercialization, prototyping and experimentation can focus on actual hardware — once it meets specs for commercialization

Only in the later stages of pre-commercialization, when the hardware is finally beginning to stabilize and approximate what is expected at commercialization, will it make sense to do any serious prototyping and experimentation directly on real hardware. Even then, it’s still best to develop and test first under simulation. Running on real hardware serves mostly as a test rather than as a method for algorithm design and application development.

Prototyping and experimentation on actual hardware earlier in pre-commercialization is problematic and an unproductive distraction

Working directly with hardware, especially in the earlier stages of pre-commercialization, can be very problematic and more of a distraction than a benefit. Remain focused on simulation.

Worse, attempts to run on early hardware frequently require distortions of algorithms and applications to work with hardware which is not ready for commercialization. Focus on correct results using simulation, and wait for the hardware to catch up with its specifications.

Products, services, and releases — for both hardware and software

This paper treats products and services the same regardless of whether they are hardware or software. And in fact, it treats hardware and software the same — as products or services.

Obviously hardware has to be physically manufactured, but with cloud services for accessing hardware remotely, even hardware looks like just another service to be accessed over the Internet.

Eventually hardware vendors will start selling and shipping quantum computer hardware to data centers and customer facilities.

There are of course time, energy, and resources required to manufacture or assemble a hardware device, but frequently that can be less than the time to design the hardware plus the time to design and develop the software needed to use the hardware device once it is built.

A release simply means making the product or service available and accessible. Both software and hardware, and products and services, can be released — made available for access.

Of course there are many technical differences between hardware and software products, but for the most part they don’t impact most of the analysis of this paper, which treats hardware as just another type of product — or service in the case of remote cloud access.

Products which enable quantum computing vs. products which are enabled by quantum computing

There are really two distinct categories of products covered by this paper:

  1. Quantum-enabled products. Products which are enabled by quantum computing. Such as quantum algorithms, quantum applications, and quantum computers themselves.
  2. Quantum-enabling products. Products which enable quantum computing. Such as software tools, compilers, classical quantum simulators, and support software. They run on classical computers and can be run even if quantum computing hardware is not available. Also includes classical hardware components and systems, as well as laboratory equipment.

The former are not technically practical until quantum computing has exited from the pre-commercialization stage and entered (or exited) the commercialization stage. They are the main focus of this paper.

The latter can be implemented at any time, even and especially during pre-commercialization. Some may in fact be focused on pre-commercialization, such as lab equipment and classical hardware used in conjunction with quantum lab experiments.

The point is that some quantum-related (quantum-enabling) products can in fact be commercially viable even before quantum computing has entered the commercialization stage, while any products which are enabled by quantum computing must wait until commercialization before they are commercially viable.

Potential for commercial viability of quantum-enabling products during pre-commercialization

Although most quantum-related products, including quantum applications and quantum computers themselves, have no substantive real value until the commercialization stage of quantum computing, a whole range of quantum-enabling products do potentially have very real value, even commercial value, during pre-commercialization and even during early pre-commercialization. These include:

  1. Quantum software tools.
  2. Compilers and translators.
  3. Algorithm analysis tools.
  4. Support software.
  5. Classical quantum simulators.
  6. Hardware components used to build quantum computers.

Organizations involved with research or prototyping and experimentation may be willing to pay not insignificant amounts of money for such quantum-enabling products, even during pre-commercialization.

Preliminary quantum-enabled products during pre-commercialization

Some vendors may in fact offer preliminary quantum-enabled products — or consulting services — during pre-commercialization, strictly for experimentation and evaluation, but with no prospect for commercial use during pre-commercialization. Personally, I would refer to these as preview products, even if offered for financial compensation. These could include algorithms and applications, as well as a variety of quantum-enabling tools as discussed in the preceding section, but again, focused on experimentation and evaluation, not commercial use.

How much of the research results can be used intact, and how much needs to be discarded, re-focused, and re-implemented with a product engineering focus?

Research has a distinct purpose from commercial products. The purpose of research is to get research results — to answer questions, to discover, and to learn things. In contrast, the purpose of a commercial product is to solve production-scale practical real-world problems.

By research work, I include outright research results as well as any of the prototyping and experimentation covered by pre-commercialization.

Research work needs to be robust, but since it need not address all of the concerns of a commercial product, many shortcuts can be taken and many gaps and shortfalls may exist that make it unsuitable for a commercial product as-is.

Sure, sometimes research work products can be used as-is in a commercial product with no or only minimal rework, but more commonly substantial rework or even a complete redesign and reimplementation may be necessary to address the needs of a commercial product.

In any case, be prepared for substantial re-work or outright, from-scratch design and reimplementation. If less effort is required, consider yourself lucky.

The whole point of research and pre-commercialization in general is to answer hard questions and eliminate technological risk. Generally, the distinct professional skills and expertise of a product engineering team will be required to produce a commercial-grade product.

Some of the factors which may not be addressed by any particular work performed in research or the rest of pre-commercialization include:

  1. Features.
  2. Functions.
  3. Capabilities.
  4. Performance.
  5. Capacity.
  6. Reliability.
  7. Error handling.
  8. QA testing.
  9. Ease of use.
  10. Maintainability.
  11. Interoperability.

I wouldn’t burden the research staff or advanced development staff with the need to worry or focus on any of those factors. Leave them to the product engineering team — they live for this stuff!

That said, some of those factors do overlap with research questions, but usually in different ways.

In any case, let the research, prototyping, and experimentation staff focus on what they do best, and not get bogged down in the extra level of burden of product engineering.

Hardware vs. cloud service

Although there are dramatic differences between operating a computer system for the sole benefit of the owning organization and operating a computer system as a cloud service for customers outside of the owning organization, they are roughly comparable from the perspective of this paper — they are interchangeable and reasonably transparent from the perspective of a quantum algorithm or a quantum application.

A quantum algorithm or quantum application will look almost exactly the same regardless of where execution occurs.

The economics and business planning will differ between the two.

The relative merits of the two approaches are beyond the scope of this paper.

From the perspective of hardware infrastructure and services buildout, computers are computers regardless of who owns and maintains them.

Prototyping and experimentation — applied research vs. pre-commercialization

When technical staff prototype products and experiment with them, should that be considered applied research or part of commercialization and productization? To some extent it’s a fielder’s choice, but for the purposes of this paper it is considered pre-commercialization, a stage between research and commercialization.

From a practical perspective, it’s a question of whose technical staff are doing the work. If the research staff are doing the work, then it’s more clearly applied research. But if engineering staff, or a product incubation team do the work, then it’s not really applied research per se since researchers are not doing the work.

To be more technically accurate, the way the work is carried out matters as well. If it’s methodical experiments with careful analysis, it could be seen as research, but if it’s trial and error based on opinion, intuition, and input from sales, marketing, users, product management, and product engineering staff, then it shouldn’t be mistaken for true applied research.

At the end of the day, and from the perspective of commercialization, prototyping and experimentation can be seen as more closely aligned with applied research than commercialization.

Still, it’s clearly in the middle, which is part of the motivation for referring to it as pre-commercialization — work to be completed as an extension of research before true commercialization can properly begin.

Prototyping products and experimenting with applications

Although prototyping and experimentation can be viewed together, as if they were twins joined at the hip, they are somewhat distinct:

  1. Products are prototyped.
  2. Applications are experimented with.

Of course, some may view the distinction differently, but this is how I am using it here in this paper.

Prototyping and experimentation as an extension of research

It’s quite reasonable to consider prototyping (of products) and experimentation (with applications) as an extension of research. After all, they are intermediate results, designed to generate feedback, rather than intended to become actual products and applications in a commercial sense.

That said, calling research, prototyping, and experimentation pre-commercialization makes it clearer that they are separate from commercialization, and must precede and be completed before proper commercialization can begin.

Trial and error vs. methodical process for prototyping and experimentation

Research programs tend to be much more methodical than prototyping and experimentation, which tend to be more trial and error, based on opinion, intuition, and input from sales, marketing, users, product management, and product engineering staff.

Alternative meanings of pre-commercialization

I considered a variety of possible interpretations for pre-commercialization. In the final analysis, it’s a somewhat arbitrary distinction, so I can understand if others might choose one of the alternative interpretations:

  1. Research alone. Anything after research is commercialization.
  2. Final stage just before commercialization, culminates in commercialization. Product engineering and development complete. Internal testing complete. Documentation complete. Training development complete. Ready for alpha and beta testing.
  3. Everything before production product engineering. Preliminary lab research — demonstrate basic capability. Full-scale research. Prototype products, evolution. Development of product requirements. Final prototype product. Essentially what I settled on — research and prototyping and experimentation, with the intention of answering all the critical technical questions, ready to hand off to the product engineering team.
  4. Everything before a preliminary vision of what a production product would look like. Lab experimentation. Iterative research, searching for the right technology capable of satisfying production needs. Iterative development of production requirements. Finally have a sense of what a functional production product would look like — and how to get there within a relatively small number of years. Iterate until final production requirement specification ready — and the research to support it.
  5. Everything after all research for the initial production product is complete. Sense that no further research is needed for the initial production product release.
  6. Everything after The ENIAC Moment. But then what nomenclature should be used for the stages before the ENIAC moment?
  7. Everything until The ENIAC Moment is behind us. A good runner-up candidate for pre-commercialization.

Requirements for commercialization

Before a product or technology idea can be commercialized, several criteria must be met:

  1. Research must be complete. Not necessarily all research for all of time, but sufficient research to design and build the initial product and a few stages after that. Enough research to eliminate all major scientific and technological uncertainties for the next few stages of a product or product line.
  2. Vague or rough product ideas must be clarified or rejected. A clear vision of the product is needed.
  3. A product plan must be developed. What must the product do and how will it do it?
  4. Budgeting and pricing. How much will it cost to design and develop the product? How large a team will be required, for how long, and how much will they be paid? How much equipment and space will be needed and how much will it cost? How much will it cost to build each unit of the product? How much will the product be sold for? How much will it cost to support, maintain, and service the product? How much of that cost can be recouped from recurring revenue as opposed to factoring it into the unit pricing?

Once those essentials are known and in place, product development can occur — the product engineering team can get to work.

Production-scale vs. production-capacity vs. production-quality vs. production-reliability

The focus of a commercial product is production deployment. Also known as operating the product or technology at scale.

There are four related terms:

  1. Production-scale. Capable of addressing large-scale practical real-world problems. The combination of input capacity, processing capacity, and output capacity.
  2. Production-capacity. Focusing specifically on the size of data to be handled.
  3. Production-quality. All of the functions and features required for addressing real-world problems — and it all works flawlessly, with no bugs, and with resilience in handling a wide range of anomalous situations.
  4. Production-reliability. Focus on flawless and resilient operation.

Generally, production-scale will imply production-capacity, production-quality, and production-reliability.

Commercialization of research

There are really two distinct use cases for commercialization of research:

  1. Pure academic research. Capitalizing on the results of academic research and recouping research expenses. Generally focused on licensing of intellectual property.
  2. Pure corporate research. Refocusing pure research results on practical customer problems.

Stages for uptake of new technology

Technology innovation progresses through a number of stages. Finally, when it’s ready for adoption, it will go through another series of stages of uptake:

  1. Awareness. Letting people know that it exists.
  2. Basic conceptual understanding.
  3. Learning the technology details. Including formal training.
  4. Prototyping and experimentation.
  5. Evaluation of initial efforts.
  6. Planning for development projects.
  7. Architect for production design.
  8. Production design.
  9. Production development and implementation.
  10. Production QA testing.
  11. Production testing.
  12. Production stress testing, load testing.
  13. Production testing validation.
  14. Production deployment.
  15. Production operation.
  16. Monitoring.
  17. Updates to the technology. Including bug fixes, security vulnerabilities, performance and capacity improvements, and functional enhancements.

As presented, these stages primarily represent commercialization of the technology, but most of those stages apply to pre-commercialization as well, except for production deployment and production operation.

Is quantum computing still a mere laboratory curiosity? Yes!

At present, quantum computing is a mere laboratory curiosity, meaning that although the technology can be demonstrated in the lab or controlled environments, it is not ready for production deployment of production-scale practical real-world applications.

My technical definition of laboratory curiosity:

  • A laboratory curiosity is a scientific discovery or engineering creation which has not yet found practical application in the real world.

For more on this concept of laboratory curiosity in general, see my paper:

For more on the concept applied to quantum computing, see my paper:

When will quantum computing cease being a mere laboratory curiosity? Unknown. Not soon.

As discussed in my paper, it will take quite some time and quite a few technological advances before quantum computing is even ready to be considered for advancing beyond being a mere laboratory curiosity.

In the context of this paper, once quantum computing has entered the commercialization stage, by definition, it will no longer be a mere laboratory curiosity. Especially once it has achieved The ENIAC Moment.

Or, alternatively, once quantum computing has reached the initial commercialization stage, with a commercial product released or imminent, and having achieved The ENIAC Moment, then clearly it is no longer a mere laboratory curiosity.

By definition quantum computing will no longer be a mere laboratory curiosity as soon as it has achieved initial commercialization

Regardless of how long it might take, it will be clear that quantum computing is no longer a mere laboratory curiosity once commercialization has been achieved, by definition, since commercialization implies suitability for practical application to real-world problems.

Could quantum computing be considered no longer a mere laboratory curiosity as soon as pre-commercialization is complete? Possibly.

Granted, full commercialization will make it abundantly clear that quantum computing is no longer a mere laboratory curiosity, but what about at the very beginning of commercialization, which is essentially the very end of pre-commercialization? Possibly.

It would of course be better to complete initial commercialization before safely declaring that quantum computing is no longer a mere laboratory curiosity.

But if engineers have completed prototyping and experimentation and have conclusively shown that quantum computing is in fact ready for production-scale practical real-world applications — with a substantial fraction of quantum advantage demonstrated, and The ENIAC Moment achieved — it would seem reasonable to assert that even if technically still a laboratory curiosity, the technology is at least ready to be considered as something more than a mere laboratory curiosity.

If all of the technical challenges have been addressed, some realistic applications have been attempted, and all that is left is merely product engineering using off the shelf technology, that would seem sufficient to conclude that the technology is no longer a mere laboratory curiosity.

Could quantum computing be considered no longer a mere laboratory curiosity as soon as commercialization is begun? Plausibly.

Personally, I’d wait for an official decision to at least commence commercialization before officially declaring that quantum computing is no longer a mere laboratory curiosity.

Sure, technically, we should wait until we get to at least the alpha test stage and people are actually using the productized technology to develop production-scale algorithms and applications.

Quantum computing will clearly no longer be a mere laboratory curiosity once initial commercialization stage 1.0 has been achieved

Once it really is official that quantum computing has reached the initial commercialization stage — so that people can begin developing production-scale quantum applications in earnest, then and only then is it actually safe to declare that quantum computing is no longer a mere laboratory curiosity.

Four overall stages of research

Again, not all research must be completed for commercialization, but sufficient research to design and build the initial product and a few stages after that. We can identify four distinct stages of research:

  1. Basic and deep research needed before commercialization can even be considered.
  2. Cumulative research needed for the initial commercialization.
  3. Incremental research needed for the next few stages of commercialization.
  4. Longer term research needed for commercialization five to ten years or more in the future.

The first stage of research must be completed before commercialization can even be planned.

The second stage of research can be completed during the remainder of the long, slow pre-commercialization process.

It is probably reasonable to expect that it could take another two to five years to achieve initial commercialization even once that second stage of research has been completed.

The third stage of research can (and must) be completed during those two to five years while initial commercialization is being completed. This is urgent since it is highly desirable to rapidly commercialize that third stage of research over the two to three years after initial commercialization.

It is expected that longer term research should incrementally result in research results suitable to be considered for subsequent commercialization stages on a two to three year basis. It may take five to seven years, or even ten years to complete each of those research stages, so it is important that the research be completed five to ten years in advance of when it might be desired in a commercialization stage.

Next steps — research

There are three bottom lines for research here:

  1. Substantial research on all fronts.
  2. Incremental improvements on all fronts.
  3. Dramatic advances on all fronts.

All three are needed.

What is product development or productization?

Once the essentials of foundational research, product planning, and budgeting and pricing are in place, product development can begin.

Engineering teams work through a series of stages:

  1. Requirements. What functions and capabilities are required? What problems or applications does the product address? What are the performance and capacity requirements?
  2. Functional specifications. What are the details of those functions?
  3. Architectural design. How will the overall product be structured — its architecture and major components or subsystems.
  4. Detailed design. All details, from the overall architecture through major and minor subsystems, decomposed down to the smallest components.
  5. Development of all levels of the product. Electrical and mechanical engineering. Software engineering. Coding.
  6. Documentation. All details that anyone with an interest in the product will or might need to know.
  7. Testing. Development of a testing plan. Development of testing infrastructure. Development of individual tests. Test automation. Test result evaluation. Unit testing, subsystem testing, system testing, performance testing, stress testing.
  8. Packaging. Putting the product in a form that is ready to be delivered to customers and ready for users.
  9. Benchmarking. Testing to compare the performance and capacity of the product to other products in the same market or existing solutions to the same problems that the new product addresses.
  10. Regulatory requirements. Address government regulations or licensing requirements.
  11. Release engineering. Getting ready for shipment to customers, distribution, installation, shakedown testing, and deployment.

At this point the product is now ready for release to customers and users, and actual use.

Technical specifications

There are a number of distinct categories of technical specifications:

  1. Requirements. What the product or technology must, must not, should, and should not do.
  2. Architecture. Overall structure of the system. The major subsystems and how they interact.
  3. Design. How each subsystem or component works.
  4. Functional. What functions the product performs, that are visible to users.
  5. Implementation details. All fine technical details that may be needed to understand how the system operates. Including numerical parameters and expected performance data.

Product release

For physical products or software which is to be distributed:

  1. Turn the product designs over to manufacturing. Ready to replicate the design.
  2. Marketing of the product. Promotion. Building awareness of the product.
  3. Sales. Selling the product.
  4. Distribution. Actual delivery of the product to customers and users.
  5. Support. Assisting customers and users with issues related to use of the product.

For products which are services:

  1. Plan for deployment.
  2. Acquire the necessary infrastructure for deployment.
  3. Plan all details of deployment. Including operational details. And cybersecurity plans.
  4. Deployment.
  5. Deployment testing. Confirm that the service performs as expected. Including stress testing and load testing.
  6. Advance marketing of the service. Awareness of the service before it becomes available.
  7. Advance sales of the service.
  8. Service rollout. Make the service available to customers.
  9. Ongoing marketing of the service. Ongoing building of awareness.
  10. Ongoing sales of the service.
  11. Monitoring. Detect and address anomalies. Cybersecurity monitoring. Load balancing. Reporting, such as utilization. Analytics.
  12. Support. Addressing all customer and user issues.
  13. Upgrades. Update software and occasionally hardware.

When can pre-commercialization begin? Now! We’re doing it now.

Commercialization has a very high bar — because it requires actually solving production-scale practical real-world problems and addressing all conceivable technological obstacles, but pre-commercialization is simply all of the work which must precede commercialization. Since we’re already progressing with at least some of that preliminary work, then by definition we are already in pre-commercialization.

It is indeed great that we are in pre-commercialization, but that is only a minor solace for the fact that we are quite a distance from actual commercialization.

Pre-commercialization doesn’t mean that we have finished the work needed for commercialization, but simply that at least some of the work that must precede commercialization is underway.

What people really want to know is when pre-commercialization will be finished, which is the stage at which the long path to actual commercialization can begin.

When can commercialization begin? That’s a very different question! Not soon.

With pre-commercialization underway, it’s only natural to wonder when commercialization itself can begin. Great question, but with no simple and comforting answer. The honest answer: Not soon.

I can’t give you a timeframe, not even a rough timeframe, but the areas and criteria for research which must be completed before commercialization can begin are detailed in the section Critical technical gating factors for initial stage of commercialization. There are also non-technical factors, listed in the section Non-technical gating factors for commercialization.

And to be clear, all of those factors are required to be resolved prior to the beginning of commercialization.

Continually update vision of what quantum computing will look like

As research, prototyping, experimentation, and even product development progress, our notions and visions of what quantum computing will ultimately look like will evolve and have to be continually updated.

If anybody seriously imagines that they know what a quantum computer will look like in five to seven or ten years, they are seriously mistaken. Expect radical change.

Need to utilize off the shelf technology

Existing quantum computers need to be custom-built from scratch, using very little in the way of off the shelf technology for major subsystems. Individual components are typically off the shelf, but mostly smaller components, with the exception of maybe the cryostat. The goal is that virtually any technically-sophisticated company should be able to build a quantum computer using off the shelf technology, without the need to invest in scientists and a research staff to figure out how to build any of the components that are not available off the shelf.

The point is that designing and producing a new quantum computer should not require a research staff and five or more years of research and design.

Designing and producing a new quantum computer should be a product engineering task, not a research program.

Commercialization should focus 99% on off the shelf research and no more than 1% on fresh and deep research. The goal is to minimize technical risk and to maximize leverage of existing knowledge and expertise.

It takes significant time to commercialize research results

As spectacular as science discoveries and breakthroughs can be, it usually takes significant time, energy, and resources to transform raw research results into practical technology which can actually be used in commercial products. Sometimes years. Sometimes even decades, as we are seeing with quantum computing research from the 1980s and 1990s.

It’s generally unwise to expect that research results can be commercialized in less than two to five years, and even that is optimistic and aggressive.

The goal is to turn raw research results into off the shelf technology which can then be used in commercial products

The real goal is simply to turn research results into off the shelf technology. That’s when the technology is ready to be used and deployed in commercial products.

Will there be a quantum winter? Or even more than one?

Technological progress is really hard to predict. Sometimes progress is rapid, sometimes not so much. Sometimes there’s a long period of progress, and sometimes a long period of seemingly little progress.

Quantum computing has made great progress in recent years, and much more progress is expected, but breakthroughs are hard to predict on a schedule. And sometimes the most promising technologies just don’t work out.

There are several areas where dramatic advances are needed which may take much longer than expected:

  1. Sufficient qubit fidelity to support efficient quantum error correction (QEC).
  2. Achieving quantum error correction efficiently and accurately — even for a relatively small number of qubits.
  3. Achieving enough physical qubits to achieve a sufficient capacity of logical qubits through quantum error correction.
  4. Achieving sufficiently fine granularity of quantum phase and probability amplitude to enable quantum Fourier transform (QFT) and quantum phase estimation (QPE) for a reasonably large number of qubits sufficient to enable quantum computational chemistry.
  5. Conceptualizing — and developing — a diverse and robust library of algorithmic building blocks sufficient to enable complex quantum algorithms and applications.
  6. Achieving The ENIAC Moment for even a single production-scale application.
  7. Achieving The ENIAC Moment for multiple production-scale applications.
  8. Conceptualizing — and developing — sufficiently high-level programming models to make quantum algorithm design practical for non-elite teams.
  9. Conceptualizing — and developing — high-level quantum-native programming languages to support the high-level programming models using high-level algorithmic building blocks to make it easier for non-elite teams to design algorithms and develop production-scale quantum applications.
  10. Achieving The FORTRAN Moment for a single production-scale application.
  11. Achieving The FORTRAN Moment for multiple production-scale applications.
  12. Opening the floodgates for exploiting The FORTRAN Moment for widespread production-scale applications.

Each and every step in the progression could present its own quantum winter. Not necessarily, but it’s possible.

The only sure defense against such quantum winters is to fund quantum computing research sufficiently to enable enough redundancy and competition that inevitable stumbling blocks and setbacks don’t bring all progress to a screeching halt while each roadblock is painstakingly analyzed and worked around, sometimes with significant backtracking to find alternative routes. Better to pre-fund a plethora of alternative routes in advance so scientists and engineers always have choices available to them if promising approaches simply don’t work out, or don’t work out on a desired schedule.

But occasional Quantum Springs and Quantum Summers

Even if we do indeed see one or more Quantum Winters, it’s just as likely that we will also see Quantum Springs and Quantum Summers as well, where despair turns around into hope and even euphoria.

In fact, we need to prepare in advance, not for the Quantum Winters, when things slow down, but for the Quantum Springs and Summers when we need to redouble our efforts to take advantage of the sun when it is shining. Or, as the old saying goes, Make hay when the sun shines, since the sunny days can pass by a lot faster than we might hope. So we need to have money, people, and resources in place well in advance of when the sun is about to start shining. In particular, a well-funded research program so that research ideas are ready to go and be turned over to product engineering when the sun starts shining.

Quantum Falls?

Uhhh… yeah, the downside of a Quantum Summer is a Quantum Fall setting us up for a Quantum Winter.

The only real defense for a Quantum Fall is to try to make the best of the downtime and available resources and talent to get ready for the next Quantum Spring and Summer.

Quantum Winters are a great time to double down on theory and basic fundamental research, which take a lot of time anyway. And a Quantum Fall is the best time to do the preparatory work for a Quantum Winter.

In short, plan ahead and minimize the chance that you will be caught unawares.

In truth, it will be a very long and very slow slog

In truth, quantum computing is actually advancing at a relatively slow pace — it’s the hype that is zooming ahead, breaking all speed records.

Sure, occasionally we will see dramatic breakthroughs, but usually we will see slow, steady incremental progress interspersed with modest to moderate-sized deserts or jungles that slow progress even further, even if on more rare occasions we see those amazing breakthroughs.

So, the basic reality is that we all need to be prepared for a very long and very slow haphazard slog of steady but uneven progress, even as we need to remain prepared for occasional bouts of euphoria… and despair.

No predicting the precise flow of progress, with advances and setbacks

Sure, everybody wants a schedule, a timeframe, and a detailed roadmap indicating when each milestone will be reached, but that just isn’t possible. Oh, sure, you can actually do that, but it will be obsolete before the ink is dry.

Some progress will be quite rapid, even while at other times progress will move at a snail’s pace, and with lots of occasional setbacks interspersed with occasional big advances.

Sorry, but there won’t be any precise roadmap to commercialization of quantum computing

The best we will be able to do is outline milestones and technical criteria for progress, and then the order and timing of progress will be what it is rather than on a planned sequence.

This deficit is epitomized by the fact that even now there is no clear winner as to a preferred qubit technology. Or as I put it, the ideal qubit technology has yet to be discovered or invented.

Existing roadmaps leave a lot to be desired, and prove that we’re in pre-commercialization

Some vendors have presented varying degrees of roadmaps for their quantum computers over the next few years, but they invariably have left a lot to be desired.

Their lack of detail and limited scope emphasize that they are looking at only part of pre-commercialization, and don’t even begin to address commercialization.

In short, they end up showing that we’re still in the early stages of pre-commercialization.

For example, I’ve commented on IBM’s published quantum hardware roadmap in my paper:

The bottom line is that none of the existing published vendor quantum computing roadmaps even hint at when commercialization will begin, let alone when the initial commercialization stage, C1.0, will occur, when customers can begin serious production-scale application development and deployment and expect a substantial quantum advantage over classical computing solutions.

No need to boil the ocean — progression of commercial stages

There is a tremendous — and unknown — amount of effort needed to get quantum computing to the stage where it is capable enough and easy enough to use for production-scale production-quality practical real-world applications. But there is no need to try to bring all of that together in one humongous single release — to boil the ocean, as scientists like to say. Instead, a progression of incremental stages can be pursued.

The first stage is of course the initial stage, 1.0, but that won’t do everything that everybody expects. Successive stages will do more for more people.

The transition from pre-commercialization to commercialization: producing detailed specifications for requirements, architecture, functions and features, and fairly detailed design

Commercialization will require detailed specifications of all aspects of the commercial product. This won’t be possible until the very end of pre-commercialization, by definition. Once all of these details are known, commercialization can begin.

In essence, commercialization will start with producing detailed specifications for requirements, architecture, functions and features, and fairly detailed design.

In practice, pre-commercialization may end with a series of draft specifications for requirements, architecture, functions and features, and fairly detailed design. That drafting process — and pre-commercialization overall — continues until there is general agreement that the drafts are good enough and complete enough to hand off to the product engineering team, who can then turn them into a polished set of preliminary specifications which will guide methodical commercialization.

Whether those draft specifications are formally considered as part of pre-commercialization or part of commercialization is a fielder’s choice. They could be considered the final report for pre-commercialization, or the initial work for commercialization. And maybe even both, where the documents produced at the end of pre-commercialization are transformed into a somewhat different set of documents to be used as the basis for commercialization.

Critical technical gating factors for initial stage of commercialization

Again, not all desired features or promised capabilities will be available in the early stages of commercialization, but some technological capabilities are absolutely mandatory:

  1. Near-perfect qubits. At least four nines of qubit fidelity — 99.99%. Possibly five nines — 99.999%. (See the back-of-the-envelope arithmetic after this list.)
  2. Circuit depth. Generally limited by coherence time. No clear threshold at this stage but definitely going to be a critical gating factor. Whether it is 50, 100, 500, or 1,000 is unclear. Significantly more than it is now.
  3. Qubit coherence time. Sufficient to support needed circuit depth.
  4. Near-full qubit connectivity. Either full any to any qubit connectivity or higher qubit fidelity to permit SWAP networks to simulate near-full connectivity.
  5. 64 qubits. Roughly. No precise threshold. Maybe 48 qubits would be enough, or maybe 72 or 80 qubits might be more appropriate. Granted, I think people would prefer to see 128 to 256 qubits, but 64 to 80 might be sufficient for the initial commercialization stage.
  6. Alternative architectures may be required. Especially for more than 64 qubits. Or even for 64, 60, 56, 50, and 48 qubits in order to deal with limited qubit connectivity.
  7. Fine phase granularity to support quantum Fourier transform (QFT) and quantum phase estimation (QPE). At least 20 or 30 qubits = 2²⁰ to 2³⁰ gradations — one million to one billion gradations. Even 20 qubits may be a hard goal to achieve.
  8. Quantum Fourier transform (QFT) and quantum phase estimation (QPE). Needed for quantum computational chemistry and other applications. Needed to achieve quantum advantage through quantum parallelism. Relies on fine granularity of phase.
  9. Conceptualization and methods for calculating shot count (circuit repetitions) for quantum circuits. This will involve technical estimation based on quantum computer science coupled with engineering process based on quantum software engineering. See my paper below.
  10. Moderate improvements to the programming model. Unlikely that a full higher-level programming model will be available soon (before The FORTRAN Moment), but some improvements might be possible.
  11. Moderate library of high-level algorithmic building blocks.
  12. The ENIAC Moment. A proof that something realistic is possible.
  13. Substantial quantum advantage. Full, dramatic quantum advantage is not so likely, but an advantage of at least a million or a billion is a reasonable expectation — much less will be seen as not really worth the trouble. This will correspond to roughly 20 to 30 qubits in a single Hadamard transform — 2²⁰ = one million, 2³⁰ = one billion. An advantage of one trillion — 2⁴⁰ may or may not be reachable by the initial stage of commercialization. Worst case, maybe minimal quantum advantage — 1,000X to 50,000X — might be acceptable for the initial stage of commercialization.
  14. 40-qubit quantum algorithms. Quantum algorithms utilizing 32 to 48 qubits should be common. Both the algorithms and hardware supporting those algorithms. 48 to 72-qubit algorithms may be possible, or not — they may require significantly greater qubit fidelity.
  15. Classical quantum simulators for 48-qubit algorithms. The more the better, but that may be the practical limit in the near term. We should push the researchers for 50 to 52 or even 54 qubits of full simulation.
  16. Overall the technology is ready for production deployment.
  17. No further significant research is needed to support the initial commercialization product. Further research for subsequent commercialization stages, but not for the initial commercialization stage. The point is that research belongs in the pre-commercialization stage.

There will likely be many smaller technical gating factors, but many will be subsidiary to those major gating factors.
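
To make the fidelity and advantage thresholds above concrete, here is some back-of-the-envelope arithmetic, purely illustrative, treating per-gate fidelity as an independent survival probability:

```python
# Back-of-the-envelope arithmetic for the gating factors above (illustrative
# only): model circuit success as per-gate fidelity raised to the gate count.
fidelity = 0.9999  # four nines per gate
for depth in (50, 100, 500, 1000):
    print(f"{depth:5} gates -> ~{fidelity ** depth:.1%} circuit success")
# At four nines, even a 1,000-gate circuit succeeds roughly 90% of the time.

# Scale of quantum parallelism from a k-qubit Hadamard transform: 2**k states.
for k in (20, 30, 40):
    print(f"{k} qubits -> 2**{k} = {2 ** k:,}")  # one million, billion, trillion
```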

For more on circuit repetitions (shot count), see my paper:
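
The methodology is detailed in that paper; purely as a generic illustration of the flavor of calculation involved (my sketch, not the paper’s method), binomial sampling statistics give a rough shot count for estimating a measurement probability:

```python
# Generic illustration: shots needed to estimate a measurement probability p
# to within standard error eps, from binomial statistics: n ~= p * (1 - p) / eps**2.
def shots_needed(p: float, eps: float) -> int:
    return int(p * (1 - p) / eps ** 2) + 1

print(shots_needed(0.5, 0.01))  # ~2,500 shots for +/-1% on a 50/50 outcome
print(shots_needed(0.1, 0.01))  # ~900 shots for a rarer outcome
```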

Quantum advantage is the single greatest technical gating factor for commercialization

The whole point of quantum computing is to offer a dramatic performance advantage over classical computing. There’s no point in commercializing a new computing technology which has no substantial advantage over classical computing.

For a general overview of quantum advantage and quantum supremacy, see my paper:

For more detail on dramatic quantum advantage, see my paper:

As well my paper on achieving at least a fraction of dramatic quantum advantage:

Variational methods are a short-term crutch, a distraction, and an absolute dead-end

Variational methods are quite popular right now, particularly for such applications as quantum computational chemistry, primarily because they work, in a fashion, on current NISQ hardware, but they are far from ideal. In fact, they are a distraction and an absolute technical dead end — variational algorithms will never achieve dramatic quantum advantage, by the very nature of how they work. They work by breaking a larger problem down into smaller problems, but that gives up the true power of quantum parallelism — the ability to examine a very large solution space in a single quantum computation. This is essential to fully exploit quantum computing for application categories such as quantum computational chemistry.

Variational methods are a short-term crutch, a stopgap measure designed to compensate for the inability of current hardware to support the desired algorithmic approach of quantum Fourier transform (QFT) and quantum phase estimation (QPE). QFT and QPE can in fact examine a very large solution space in a single quantum computation, rather than incrementally attempting heuristic selection of much smaller subsets of the full solution space — an approach which is problematic for larger problems even if it does work for the kinds of smaller problems currently being implemented on current NISQ hardware.

Quantum Fourier transform and quantum phase estimation are not supported on current NISQ hardware due to the following limitations:

  1. Low qubit fidelity.
  2. Lack of fine granularity of phase.
  3. Low gate fidelity.
  4. Limited circuit depth.
  5. Low measurement fidelity.

The preferred alternative at this stage should be to refrain from trying to implement algorithms on current hardware in favor of implementing them on classical quantum simulators. Granted, that limits implementations to 32 to 40 or maybe 50 qubits, but this puts more emphasis on designing automatically scalable algorithms, so that the algorithms can use the best and optimal technical approach and be ready for the day when the hardware really is ready for commercialization.

Simulation gives the best of both worlds — algorithms can be designed and developed during pre-commercialization while hardware is lacking, and they will be ready to run optimally on the eventual hardware when it is indeed ready for commercialization, achieving quantum advantage as soon as the hardware is capable. This will greatly facilitate the advancement of quantum computing.
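
To illustrate what an automatically scalable algorithm might look like, here is a sketch assuming Qiskit and its Aer simulator, where the trivial GHZ circuit is just a stand-in for a real algorithm. The circuit is generated from a width parameter, so the identical code runs at small scale today and larger scale later:

```python
# Sketch of an automatically scalable algorithm (assuming Qiskit Aer): the
# circuit is generated from a width parameter, so the identical code runs at
# a handful of qubits on today's simulators and at 64+ qubits on future
# hardware. The GHZ circuit is a trivial stand-in for a real algorithm.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def build_ghz(n: int) -> QuantumCircuit:
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)  # entangle the whole register, qubit by qubit
    qc.measure(range(n), range(n))
    return qc

backend = AerSimulator()  # later: swap in a real hardware backend unchanged
for n in (4, 8, 16):      # expand the scale as capacity allows
    counts = backend.run(transpile(build_ghz(n), backend), shots=100).result().get_counts()
    print(n, counts)
```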

Exploit quantum Fourier transform (QFT) and quantum phase estimation (QPE) on simulators during pre-commercialization

Just to re-emphasize the point of the preceding section: the advancement of quantum computing will be greatly facilitated by skipping variational methods on current NISQ hardware during pre-commercialization, in favor of relying on simulation of 32 to 40 or even 50 qubits to support quantum Fourier transform and quantum phase estimation, achieving quantum advantage as rapidly as the hardware allows.
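
As an illustrative sketch, again assuming Qiskit and its Aer simulator, the following encodes a known phase and reads it out with an inverse QFT, the core step of quantum phase estimation. The four-qubit sizing is my arbitrary choice; on a 32- to 50-qubit simulator the same pattern scales to much finer phase granularity:

```python
# Illustrative sketch (assuming Qiskit and Qiskit Aer): encode a known phase
# and read it out with an inverse QFT -- the core step of quantum phase
# estimation (QPE).
from math import pi
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import QFT
from qiskit_aer import AerSimulator

n = 4               # counting qubits; phase resolution is 1 / 2**n
phase = 3 / 2 ** n  # known phase to encode: 3/16

qc = QuantumCircuit(n, n)
qc.h(range(n))      # uniform superposition over all 2**n basis states
for k in range(n):
    qc.p(2 * pi * phase * 2 ** k, k)  # kick the phase back onto each qubit
qc.append(QFT(n, inverse=True), range(n))  # inverse QFT extracts the phase
qc.measure(range(n), range(n))

simulator = AerSimulator()
counts = simulator.run(transpile(qc, simulator), shots=1000).result().get_counts()
print(counts)       # dominated by '0011' (binary 3, i.e. phase 3/16)
```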

Final steps before a product can be released for commercial production deployment

Many details must be finalized to release a product for commercial production deployment, but some special items include:

  1. Packaging as a deployable product.
  2. Service level agreement (SLA). Contractual commitment, with penalties.
  3. Cybersecurity.
  4. Scheduling tasks for execution.
  5. Load balancing.
  6. Monitoring the quantum cloud.
  7. Automated software and firmware updates.

Details are beyond the scope of this paper, which is focused on pre-commercialization anyway.

Some degree of these items may be needed during pre-commercialization as well, such as to enable access to preview products, which look and act as if they were commercial products.

No further significant research can be required to support the initial commercialization product

By definition, commercialization cannot commence until all of the necessary research has been completed — during pre-commercialization.

Further research is to be expected for subsequent commercialization stages, but not for the initial commercialization stage.

The point is that research belongs in the pre-commercialization stage.

Commercialization requires that all of the significant technical questions and uncertainties have been resolved.

Commercialization requires that the technology be ready for production deployment

There are far too many technical details which are simply not ready for production deployment at present. Basically, any factor which impacts quantum advantage impacts production deployment.

Non-technical gating factors for commercialization

Many factors influence commercialization of quantum computing. Many involve specific technologies, but many are non-technical in nature — they may touch on or use technology, but they don’t necessarily involve research or engineering to complete.

Some of the factors are:

  1. Development of quantum computer science as a new field of study.
  2. Development of quantum software engineering as a new field of study.
  3. Development of a quantum software development life cycle (QSDLC) methodology.
  4. Development of a quantum technical talent pool.
  5. Development of a quantum ecosystem.

Quantum computer science

Most of quantum computing is done in a rather ad hoc manner at present. What’s most needed at this stage is the development of the whole new discipline and field of quantum computer science.

Details… remain to be discovered and formulated.

Some discussion of quantum computer science can be found in my research topics paper:

Quantum software engineering

Along with the need for the new discipline and field of quantum computer science, there is also a need for the development of the whole new discipline and field of quantum software engineering.

Details… remain to be discovered and formulated.

Some discussion of quantum software engineering can be found in my research topics paper:

No point to commercializing until substantial fractional quantum advantage is reached

Commercialization should be predicated on delivering substantial value, not merely minimal value. Quantum advantage is really the only benefit of quantum computing — it’s the whole point of the interest in quantum computing. Full, dramatic quantum advantage (one quadrillion X classical computing performance) is unlikely to be achieved by the time people feel that quantum computing is ready for commercialization, but substantial quantum advantage (minimum of 1,000,000X) is a realistic possibility.

On the other hand, merely achieving a modest advantage over classical computers, such as 1,000X (minimum quantum advantage), is rather unlikely to make a big splash commercially.

Of course, there’s a wide range of both possibility and acceptability for some applications, such as:

  1. 10,000X — 1%
  2. 100,000X — 10%
  3. 250,000X — 25%
  4. 500,000X — 50%
  5. 750,000X — 75%

It will be a fielder’s choice (vendor’s choice) whether to commercialize a quantum computer with only 10% or 1% of substantial quantum advantage, but I would still suggest that the target goal should be full 100% substantial quantum advantage — 1,000,000X.
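
For clarity, the percentage notation above is simply the ratio of the achieved advantage to the 1,000,000X substantial quantum advantage target, as this trivial sketch computes:

```python
# The percentages above are simply the ratio of achieved advantage to the
# 1,000,000X substantial quantum advantage target.
SUBSTANTIAL = 1_000_000
for advantage in (10_000, 100_000, 250_000, 500_000, 750_000, 1_000_000):
    print(f"{advantage:>9,}X -> {advantage / SUBSTANTIAL:.0%} of substantial advantage")
```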

It is also very possible that a vendor might offer a pre-release of their intended commercial offering which offers say only 10% substantial quantum advantage while at the same time setting the expectation that further pre-releases in the near-future will offer further increments of progress culminating in full 100% substantial quantum advantage for their full C1.0 official initial commercial offering.

For more on quantum advantage and fractional quantum advantage in particular, see my paper:

Fractional quantum advantage progress to commercialization

Reemphasizing what was just said, vendors may opt to produce a progression of pre-releases or preliminary versions of their product offerings that constitute a path to full commercialization, from a quantum advantage perspective.

A possible progression to offering full 100% substantial quantum advantage — 1,000,000X classical computing performance — might be:

  1. 10,000X — 1%
  2. 100,000X — 10%
  3. 250,000X — 25%
  4. 500,000X — 50%
  5. 750,000X — 75%
  6. 1,000,000X — 100%

A more ambitious vendor might offer a progression to 1,000,000,000X (a billion X) classical computing performance:

  1. 100,000X — 10%
  2. 500,000X — 50%
  3. 750,000X — 75%
  4. 1,000,000X — 100%
  5. 10,000,000X — 1,000%
  6. 100,000,000X — 10,000%
  7. 1,000,000,000X — 100,000%

An even more ambitious vendor may offer a progression to one trillion X — 0.1% of full dramatic quantum advantage.

Progress to full dramatic quantum advantage, an advantage of one quadrillion X, will remain out of reach for some time until the hardware, particularly qubit fidelity, advances substantially.

Qubit capacity progression to commercialization

64 qubits may be the optimal qubit capacity for initial commercialization. Or maybe not. Some possibilities:

  1. 48 qubits. A bare minimum.
  2. 50 qubits.
  3. 54 qubits.
  4. 56 qubits.
  5. 60 qubits.
  6. 64 qubits. A fairly reasonable start.
  7. 72 qubits. A more reasonable start.
  8. 80 qubits. A good start.
  9. 96 qubits. A fairly strong start.
  10. 128 qubits. A strong start.
  11. 256 qubits. A clear sweet spot. A very strong start.

It all depends not just on hardware development but also on algorithms that can effectively utilize that number of qubits.

The hardware people and the algorithm people need to stay in close contact and keep each other apprised of their respective plans and needs.

Maximum circuit depth progression to commercialization

Maximum quantum circuit depth is generally limited by coherence time. There is no clear threshold at this stage for what maximum circuit depth must be achieved for commercialization of quantum computing to be reasonably successful and reasonably compelling.
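As a rough illustration of how coherence time bounds circuit depth, here is a back-of-the-envelope sketch; all of the numbers are illustrative assumptions, not measurements of any real device:

```python
# Rough estimate of maximum usable circuit depth from coherence time.
# Every number here is an assumption for illustration only.

coherence_time_us = 100.0  # assumed qubit coherence time (microseconds)
gate_time_ns = 100.0       # assumed gate execution time (nanoseconds)
usable_fraction = 0.10     # assume only ~10% of the coherence window is
                           # usable before errors accumulate unacceptably

max_depth = (coherence_time_us * 1_000 / gate_time_ns) * usable_fraction
print(f"Estimated maximum circuit depth: ~{max_depth:.0f} gates")
# With these assumptions: ~100 gates, in line with the milestones below.
```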

Some possible milestones on the way to commercialization:

  1. 25 gates. Absolute worst case minimum for product.
  2. 50 gates.
  3. 75 gates.
  4. 100 gates. A reasonable expectation for the initial product.
  5. 250 gates. Plausible for the initial product.
  6. 500 gates.
  7. 1,000 gates. May be a decent goal for the initial product.
  8. 2,500 gates.
  9. 5,000 gates.
  10. 7,500 gates.
  11. 10,000 gates.

I don’t imagine that support for more than 10,000 gates will be a requirement for initial commercialization.

Granted, eventually, support will be needed for much larger quantum circuits, but this won’t be an issue for initial commercialization.

Some future milestones for post-commercialization:

  1. 100 gates.
  2. 250 gates.
  3. 500 gates.
  4. 1,000 gates. Hopefully within a few releases after initial commercialization.
  5. 2,500 gates.
  6. 5,000 gates.
  7. 10,000 gates.
  8. 25,000 gates.
  9. 50,000 gates.
  10. 75,000 gates.
  11. 100,000 gates.
  12. 250,000 gates.
  13. 500,000 gates.
  14. 750,000 gates.
  15. 1,000,000 gates.
  16. 10,000,000 gates.
  17. 100,000,000 gates.

At this stage, I can’t envision a need for more than 100,000,000 gates in a single quantum circuit. More complex algorithms might utilize modular and distributed solutions.

These are all very rough numbers and purely speculative. The reality remains to be seen.

Alternative hardware architectures may be needed for more than 64 qubits

Once we get to 72, 80, 96, and 128 qubits and beyond, alternative architectures for hardware may be required. Connectivity between qubits is likely to become problematic and require innovative solutions.

Qubit technology evolution over the course of pre-commercialization, commercialization, and post-commercialization

Qubit technology is probably the single most central aspect of quantum computing. The technology is evolving both dramatically and incrementally, and will continue to do so for the indefinite future — for the remainder of pre-commercialization, for the entirety of the initial commercialization stage, and for the post-commercialization stages that follow.

Current qubit technology is neither ideal, nor is there an ideal qubit technology on or near the horizon.

This will not be unlike classical computing, where basic bit technology evolved dramatically from punched cards and paper tape, through electrical relays, vacuum tubes, discrete transistors, and numerous stages of integrated circuits, to the current CMOS semiconductor technology for very large scale integrated circuits. Memory and storage similarly evolved through punched cards, paper tape, display storage tubes, mercury delay lines, rotating magnetic drums, core memory, rotating magnetic disks, magnetic tape, semiconductor memory, and now flash memory chips.

Currently, superconducting transmon qubits and trapped-ion qubits are the leading qubit technologies. Ongoing incremental advances will likely continue for the indefinite future. Additional innovative qubit technologies are likely to arrive in the coming years and decades, as was the case with classical computing.

There is no way to predict what qubit technology advances are likely over the next two to five to seven years as we proceed through pre-commercialization, the initial commercialization stage, and the subsequent few early commercialization stages, other than incremental qubit capacity and fidelity improvements over time.

Whether we see any further dramatic qubit technology advances during pre-commercialization and the initial commercialization stage is unclear. We may or may not require one or more dramatic advances, beyond a series of incremental advances.

That said, I suspect that it’s a coin-flip whether the initial commercialization stage will rely on superconducting transmon qubits or trapped-ion qubits, or both.

Whether we will still rely on superconducting transmon qubits and trapped-ion qubits five to ten to fifteen years from now is anybody’s guess. I suspect not.

That said, it is probably safe to say that at least a few dramatic qubit technology advances will need to occur before we get to mature commercialization, with capabilities such as quantum error correction (QEC) with a significant capacity of logical qubits.

In any case, it remains essential and urgent to maintain and increase research efforts in qubit technology. My personal view is that we should be investing in at least 15 to 25 additional approaches to qubit technology. Even if none of that additional research bears fruit in time for the initial commercialization stage, it could still be very beneficial for subsequent commercialization stages.

Initial commercialization stage 1 — C1.0

I’ll use the shorthand notation C1.0 to refer to the initial commercialization stage.

Subsequent commercialization stages will have incremental numbers, possibly with incremental fractional numbers for intermediate or minor releases, such as C1.5 and C2.0.

The main criterion for initial commercialization is substantial quantum advantage for a realistic application, AKA The ENIAC Moment

There’s no point to commercialization of quantum computing until something useful can be accomplished, epitomized as The ENIAC Moment, which is defined by two criteria:

  1. A production-scale practical real-world application.
  2. Substantial quantum advantage has been achieved. A quantum advantage of roughly 1,000,000X over a classical computing solution.

Differences between validation, testing, and evaluation for pre-commercialization vs. commercialization

Validation, testing, and evaluation are required during both pre-commercialization and commercialization. The overall process is the same, but the goals are somewhat different.

  1. During pre-commercialization. The goal is to evaluate raw technology components for suitability for use in a product. May or may not involve getting feedback from potential customers and users.
  2. During commercialization. The goal is to evaluate the packaged product for suitability for use in customer applications.

Validation, testing, and evaluation during pre-commercialization

Traditionally, pre-commercialization is a strictly in-house affair, unknown to and inaccessible by outsiders such as customers and users. But occasionally there are new technologies, such as quantum computing, which are so new and novel that it is virtually impossible to finish pre-commercialization without feedback from prospective customers and users.

In such cases, the technology can be packaged in the form of a pre-product or preview product or preview releases, which may look and feel like a product, except for the fact that it isn’t an actual commercial product.

For such cases, the validation, testing, and evaluation process normally associated with a full commercial product can be adapted to function in a very similar manner, but with some differences.

  1. Some degree of quality assurance (QA) testing. Including unit testing, subsystem testing, system testing, and interoperability testing.
  2. Significant degree of documentation development. How to properly use the product. How to troubleshoot problems.
  3. Periodic preview releases. The evolving product ideas are treated as if they were a product — a preview product, packaged and distributed or otherwise made available to organizations and users, as if they were customers. The notions of alpha, beta, and pre-releases are not so relevant.

It is vitally important to make it clear to anyone viewing, accessing, or using such preview products or preview releases that they are not actual products and cannot be used in production deployment. Even though the final commercialized product may in fact look fairly similar to the final preview product, there is no guarantee that any particular preview product will look anything like the eventual commercialized product.

Validation, testing, and evaluation for initial commercialization stage 1.0

Even after intensive development is complete for the initial commercialization stage 1.0, validation, testing, and evaluation will be required before this stage is actually ready for use and deployment for actual production-scale practical real-world applications.

  1. Extensive quality assurance (QA) testing. Including unit testing, subsystem testing, system testing, and interoperability testing.
  2. Extended design and code reviews. Both hardware and software. Look very carefully for any hidden flaws rather than wait for them to be reported as “bugs.”
  3. Documentation development and review. How to properly use the product. How to troubleshoot problems.
  4. Extended sequence of alpha, beta, and pre-releases. To enable evaluation by real potential customers and users and to get their feedback. May require incremental improvements to the product.
  5. Culminates (or at least climaxes) with the arrival of The ENIAC Moment. Finally a real customer can achieve a production-scale practical real-world application.
  6. Benchmarking. Determine and document how the hardware, algorithms, and key applications really perform.

Although validation is critical as the final step at the end of initial commercialization, it is equally important at all stages of commercialization, and at all stages of pre-commercialization as well.

Initial commercialization stage 1.0 — The ENIAC Moment has arrived

In preparation for the first real commercialization stage, C1.0, hardware vendors, algorithm designers, and application developers will all be focused on trying to achieve The ENIAC Moment for quantum computing — achieving a significant fraction of dramatic quantum advantage for a production-scale practical real-world application.

This will require:

  1. Elite teams.
  2. A lot of hard work.
  3. Significant hardware advances.
  4. Significant research.

Once The ENIAC Moment has been achieved, it is then only an engineering task to finalize and polish the hardware, algorithms, and software to achieve product quality. Of course that could take another year or two, but the technological uncertainty will have been removed from the release equation.

But even the commercialization stage is not the end — in fact it is only the beginning, with the initial commercialization stage only satisfying the needs of a relatively limited audience. A progression of subsequent commercial release stages will incrementally evolve the technology, making it more useful to more people as each stage comes to market.

Initial commercialization stage 1.0 — Routinely achieving substantial quantum advantage

Achieving The ENIAC Moment is certainly the key gating criterion for initial commercialization, but one application is not enough. C1.0 won’t be terribly useful if only one application achieves substantial quantum advantage but no other applications do. What we really need to see is that numerous production-scale practical real-world applications have achieved The ENIAC Moment, so that production-scale practical real-world applications routinely achieve substantial quantum advantage.

Initial commercialization stage 1.0 — Achieving substantial quantum advantage every month or two

Further, it is safe to say that we still haven’t achieved full commercialization until we are seeing at least one new production-scale practical real-world application coming online — and achieving substantial quantum advantage — every month or two, six to twelve in a single year.

Then we can confidently proclaim that commercialization has been achieved.

Okay, maybe a nontrivial fraction of minimal quantum advantage might be acceptable for the initial stage of commercialization

Achieving dramatic quantum advantage (one quadrillion X) or even substantial quantum advantage (1,000,000X) is no easy feat, so it might be marginally acceptable for an initial commercialization stage to achieve only minimal quantum advantage — a factor of 1,000X over a classical computing solution — or even a fraction of minimal quantum advantage — a factor of 50X, 100X, 250X, or maybe even only 10X or 4X — for high-value applications which consume dramatic amounts of classical computing resources.

For example, if a very high-value classical application takes ten hours to run but business requirements are for a solution in two hours, then a factor of 50X, 10X, or even 5X would actually be a fairly big deal, delivering a viable solution while classical computing is unable to deliver a solution at all.
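The arithmetic behind that example is trivial but worth making explicit: the minimum useful speedup is just the ratio of the current runtime to the business deadline. A minimal sketch, with assumed numbers matching the example:

```python
# A modest quantum advantage is commercially meaningful whenever it moves
# a computation from "too slow" to "fast enough," regardless of how small
# the multiplier is. Numbers are the assumed values from the example above.

classical_runtime_hours = 10.0  # the classical solution takes ten hours
business_deadline_hours = 2.0   # the business requires an answer in two

min_speedup = classical_runtime_hours / business_deadline_hours
print(f"Minimum useful speedup: {min_speedup:.0f}X")  # 5X
```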

In any case, it should be clear that routinely achieving a nontrivial fraction of minimal quantum advantage should be an absolute requirement for initial commercialization of quantum computing.

Minimum Viable Product (MVP)

Some vendors might opt to take an even more minimal approach than the full initial commercialization configuration suggested here. They or their prospective customers might have specialized or limited needs that require less hardware than suggested here.

Such a Minimum Viable Product (MVP) is certainly feasible, although somewhat less desirable and less applicable to a wider range of applications and situations.

This paper won’t suggest possible MVP configurations, taking the position that the suggested C1.0 configuration is already tantamount to a minimum viable product.

But, here are some realistic possible configuration variations for an MVP:

  1. Less than four nines of qubit fidelity. Such as 3, 3.25, 3.5, or 3.75 nines. Three nines might be reasonable in conjunction with full connectivity so that SWAP networks are not needed.
  2. More-limited qubit connectivity. Not full connectivity. But possibly with greater qubit fidelity to support longer SWAP networks.
  3. Fewer qubits. Maybe 48 or even only 44 or 40. Less than 40 is rather unlikely. Possibly in conjunction with greater qubit fidelity.
  4. Limited granularity of phase and probability amplitude. May not support larger applications in terms of how many qubits can participate in a quantum Fourier transform (QFT) or quantum phase estimation (QPE). But still needs to support some significant granularity, enough to enable key applications such as quantum computational chemistry.
  5. Smaller fractional quantum advantage. Significantly less than full substantial quantum advantage (1,000,000X), such as 10,000X to 100,000X.

Some key capabilities not present in such an MVP:

  1. Quantum error correction (QEC) and logical qubits. Too much of a stretch. Focus on near-perfect qubits.
  2. The FORTRAN Moment. Requires QEC. Essential for widespread adoption, but not for initial adoption.

Automatically scalable quantum algorithms

A key opportunity and important capability for exploiting quantum computing is the notion of an automatically scalable quantum algorithm. More specifically, parameterized algorithms, where the quantum logic circuit is generated automatically by classical software which customizes the circuit based on the input parameters, so that larger input data generates a larger quantum circuit and smaller input data generates a smaller quantum circuit.

A second capability is automatic quantum algorithm analysis software which checks the quantum logic to detect the use of any features which might not be scalable.

The combination of these two features will facilitate the design and development of scalable quantum algorithms, so that an algorithm which is developed and tested on a near-term quantum computer with limited qubits — or on a classical quantum simulator limited to 40 or so qubits — will have an excellent chance of scaling up to 64, 80, 96, 128, 256, or more qubits.
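A minimal Python sketch of these two ideas, using Qiskit; the GHZ-style circuit body is just a placeholder for real application logic, and check_scalability is a hypothetical stand-in for genuine algorithm analysis software:

```python
# Sketch of an automatically scalable quantum algorithm: classical code
# generates a quantum circuit sized to its input, plus a crude check
# that the result is still small enough to validate on a simulator.

from qiskit import QuantumCircuit

def build_circuit(input_data: list) -> QuantumCircuit:
    """Generate a circuit whose width tracks the size of the input."""
    n = len(input_data)          # larger input -> wider circuit
    qc = QuantumCircuit(n)
    qc.h(0)                      # put the first qubit in superposition
    for i in range(n - 1):
        qc.cx(i, i + 1)          # entangle the chain (placeholder logic)
    qc.measure_all()
    return qc

def check_scalability(qc: QuantumCircuit, max_simulable: int = 40) -> bool:
    """Hypothetical analysis: flag circuits too wide to validate on a
    classical simulator (roughly 40 qubits or so)."""
    return qc.num_qubits <= max_simulable

qc = build_circuit(list(range(8)))   # 8-element input -> 8-qubit circuit
print(qc.num_qubits, qc.depth(), check_scalability(qc))
```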

To be clear, automatically scalable quantum algorithms are critical to the future of quantum computing.

For more on scaling of quantum algorithms, see my paper.

Configurable packaged quantum solutions

Another key opportunity and important capability for exploiting quantum computing is the notion of a configurable packaged quantum solution: a generalized quantum application which solves a particular but general class of application problem, and which can easily be tailored to a particular situation through a sophisticated set of input parameters. This effectively allows the user to adapt the generalized application to different problems (of the same general class) without having to write or modify quantum algorithms or quantum application code.

I predict that most organizations will utilize configurable packaged quantum solutions rather than design and develop custom quantum algorithms and applications.

This is a whole new area. No work has been done in this area yet, but it holds great promise.
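Precisely because no such work exists yet, any illustration is necessarily hypothetical. Here is one purely invented sketch of what configuring such a packaged solution might look like; every name in it is made up for illustration:

```python
# Purely hypothetical: a user tailors a packaged quantum solution with
# parameters rather than writing quantum algorithms or application code.

portfolio_config = {
    "problem_class": "portfolio-optimization",
    "assets": 24,                   # problem size drives circuit generation
    "risk_tolerance": 0.15,
    "constraints": ["no-short-selling", "max-weight:0.10"],
    "target_precision_bits": 20,    # requested phase granularity
    "max_shots": 10_000,
}

# A hypothetical vendor library would translate the configuration into
# quantum algorithms and circuits behind the scenes, e.g.:
#   result = packaged_solutions.solve(portfolio_config)
```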

Shouldn’t quantum error correction (QEC), logical qubits, and The FORTRAN Moment be required for the initial commercialization stage? Yes, but not feasible.

That would be great, but it’s too much of a stretch. Even as estimated here, QEC wouldn’t be available until C3.0, and The FORTRAN Moment not until C4.0. And even that might be overly optimistic.

The hope expressed here is that near-perfect qubits coupled with reasonably high-level algorithmic building blocks will enable ENIAC Moments, which will be a significant milestone for quantum computing.

Should a higher-level programming model be required for the initial commercialization stage? Probably, but that may be too much to ask for.

It really would be highly advantageous for the initial commercialization stage to include a higher-level programming model. No question about that. But making that a reality in the very near-term may be too much to ask for at this stage.

Most likely, it will have to wait for The FORTRAN Moment, which will require a higher-level programming model anyway.

But, I’ll leave it open as to whether a higher-level programming model does manage to find its way into C1.0.

It may just be that some improvements to the current low-level programming model or some useful higher-level algorithmic building blocks are sufficient to fill the gap for C1.0.

Not everyone will trust a version 1.0 of any product anyway

In truth, many sophisticated organizations know better than to rely on the first release of any computer product, preferring to wait until the third or fourth release when more of the initial kinks have been worked out.

General release

When a commercial product is finally ready for use by paying customers, after having been through a process of validation, testing, and evaluation, with a series of alpha, beta, and pre-releases, the product is packaged for release and becomes a general release.

A general release may be the result of any number of release candidates, each packaged as if ready for release, then put through a final validation and testing phase to confirm that it really is ready. Once there are no remaining significant concerns, general release can occur.

The product is then ready to be distributed and installed and accessed by customers and users.

The customer IT team may have their own validation, testing, and evaluation process before they permit in-house users to access the new product.

Criteria for evaluating the success of initial commercialization stage 1.0

There is really only one critical criterion for evaluating the success of initial commercialization stage 1.0 (C1.0):

  • A substantial number of organizations are able to proceed with design, development, and deployment of production-scale practical real-world quantum applications that will fulfill the grand promise of quantum computing — quantum advantage.

That’s it.

Of course there will be further, subsequent stages of commercialization, but the essence of initial commercialization stage, C1.0, is that the ball is finally rolling and people feel that they finally have a technology that they can work with rather than a vague promise of a bright but distant future.

Quantum ecosystem

A successful commercial product requires a thriving, vibrant, and mutually-supportive ecosystem, which consists of:

  1. Hardware vendors.
  2. Software vendors. Tools and support software.
  3. Consulting firms.
  4. Algorithms.
  5. Applications.
  6. Open source whenever possible. Algorithms, applications and tools. Hardware and firmware as well. Freely accessible plans so that anyone could build a quantum computer. Libraries, metaphors, design patterns, application frameworks, and configurable packaged quantum solutions. Training materials. Tutorials. Examples. All open source.
  7. Community. Including online discussion and networking. Meetups, both in-person and virtual.
  8. Analysts. Technical research as well as financial markets.
  9. Journalists. Technical and mainstream media.
  10. Publications. Academic journals, magazines, books. Videos and podcasts.
  11. Conferences. Presentation of papers, tutorials, and trade show exhibits. Personal professional networking opportunities.
  12. Vendors. Hardware, software, services, algorithms, applications, solutions, consulting, training, conferences.

Subsequent commercialization stages — Beyond the initial ENIAC Moment

The ENIAC Moment will be simply a single application. Much more needs to follow, all of which is beyond the scope of this paper, but a preview is enlightening:

  1. C1.0 — Reached The ENIAC Moment. All of the pieces are in place.
  2. C1.5 — Reached multiple ENIAC Moments.
  3. C2.0 — First configurable packaged quantum solution.
  4. C2.5 — Reached multiple configurable packaged quantum solutions. And maybe or hopefully finally achieve full, dramatic quantum advantage somewhere along the way as well.
  5. C3.0 — Quantum Error Correction (QEC) and logical qubits. Very small number of logical qubits.
  6. C3.5 — Incremental improvements to QEC and increases in logical qubit capacity.
  7. C4.0 — Reached The FORTRAN Moment. And maybe full, dramatic quantum advantage as well.
  8. C4.5 — Widespread custom applications based on QEC, logical qubits, and FORTRAN Moment programming model. Presumption that full, dramatic quantum advantage is the norm by this stage.
  9. C5.0 — The BASIC Moment. Much easier to develop more modest applications. Anyone can develop a quantum application achieving dramatic quantum advantage.
  10. C5.5 — Ubiquitous quantum computing a la personal computing.
  11. C6.0 — More general AI, although not full AGI.
  12. C7.0 — Quantum networking. Networked quantum state.
  13. C8.0 — Integration of quantum sensing and quantum imaging with quantum computing. Real-time quantum image processing.
  14. C9.0 — Incremental advances along the path to a mature technology.
  15. C10.0 — Universal quantum computer. Merging with full classical computing.

Vendor product releases won’t necessarily sync up with commercialization stages. Some will lead and others will follow. Some may get into the game early and participate in all stages, while others may be late to the game and skip many of the earlier stages.

Post-commercialization efforts

There are plenty of research efforts that will likely not have produced results ready for the initial commercialization stage or the first few subsequent stages, such as:

  1. Maturity of quantum error correction and logical qubits. Initial support likely will be for a fairly low capacity of logical qubits.
  2. Finer granularity for phase to support larger quantum Fourier transform (QFT) and quantum phase estimation (QPE). 30 to 50 qubits, and beyond. 30 qubits = 2³⁰ = one billion quantum states, 40 qubits = 2⁴⁰ = one trillion quantum states, 50 qubits = 2⁵⁰ = one quadrillion quantum states. Eventually enable achievement of full, dramatic quantum advantage.
  3. Higher-level programming model(s).
  4. High-level algorithmic building blocks.
  5. Conceptualization and development for The FORTRAN Moment. Complex applications can be developed by non-elite teams.
  6. Conceptualization and development of problem statement languages. Shorthand notations for declaring the needs of the application problem which can then be fully and automatically transformed into quantum algorithms and applications.
  7. Conceptualization and development for The BASIC Moment. Much easier simple applications. Anyone can develop quantum applications which achieve dramatic quantum advantage.
  8. Alternative qubit technologies.
  9. Alternative hardware architectures may be required. Especially for more than 64 qubits. Or even for 64, 60, 56, 50, and 48 qubits in order to deal with limited qubit connectivity. But 128, 256, 1K, and more qubits are likely to require innovative hardware architectures.
  10. Advances in quantum parallelism and quantum advantage. Initial commercialization, such as The ENIAC Moment may not achieve full, dramatic quantum advantage. Increasing fractional quantum advantage as qubit fidelity and phase granularity increase.
  11. Achieve full, dramatic quantum advantage. A one quadrillion advantage.
  12. Wider range of configurable packaged quantum solutions.
  13. Integration of quantum sensing and quantum imaging with quantum computing. Real-time quantum image processing.
  14. Quantum multiprocessing. Multiple quantum processors in the same quantum computer system. Coordinated operation.
  15. Quantum networking. Networked quantum state. At any distance.
  16. Full artificial general intelligence (AGI). Far beyond current state of the art AI.
  17. Room temperature quantum computing.
  18. Photonic quantum computing.
  19. Quantum minicomputers. May have less performance and less capacity, but cheaper, smaller, and more accessible.
  20. Universal quantum computer. Merging with full classical computing.

Milestones in fine phase granularity to support quantum Fourier transform (QFT) and quantum phase estimation (QPE)

The greatest potential for quantum computing is quantum parallelism. Key techniques for achieving quantum parallelism are quantum Fourier transform (QFT) and quantum phase estimation (QPE). But in order for them to function properly and with accuracy, they require fine granularity for the phase portion of quantum state — the phase angle of the complex number representing the quantum state.

There is no certainty as to what degree of fine granularity will be achieved in any particular timeframe.

This paper won’t go into deep technical detail for phase granularity. Rather, we will simply use bits of precision as a surrogate for degree of granularity. For more details on fine phase granularity, see my paper below.
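The relationship between bits of precision and granularity is straightforward: n bits divide the full 2π range of phase into 2^n distinct gradations. A minimal sketch of the arithmetic:

```python
# Bits of precision as a surrogate for phase granularity: n bits divide
# the full 2*pi range of phase into 2^n distinct gradations.

import math

for bits in (8, 20, 32, 40):
    gradations = 2 ** bits
    step = 2 * math.pi / gradations  # smallest distinguishable phase step
    print(f"{bits:2d} bits -> {gradations:>15,} gradations, step ~{step:.1e} rad")
# 8 bits -> 256 gradations; 40 bits -> ~1.1 trillion gradations.
```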

So, some of the major milestones for the number of bits of precision for quantum Fourier transform (or quantum phase estimation) are:

  1. 8 bits. Too limited to be of much practical value, but a technical milestone.
  2. 12 bits. Even this is too much, at present.
  3. 16 bits.
  4. 20 bits. This would be a significant technical achievement.
  5. 24 bits.
  6. 28 bits.
  7. 32 bits.
  8. 36 bits.
  9. 40 bits. Current limit of what can be simulated on a classical computer.
  10. 44 bits.
  11. 48 bits.
  12. 56 bits. Possibly the upper limit of what could be simulated on a classical computer.
  13. 64 bits.
  14. 72 bits.
  15. 80 bits.
  16. 96 bits.
  17. 128 bits.
  18. 160 bits.
  19. 256 bits.
  20. 512 bits.
  21. 1K bits.
  22. 2K bits.
  23. 4K bits.
  24. 8K bits. Sufficient for using Shor’s factoring algorithm to factor a 4K-bit public encryption key.

Candidate sizes for initial commercialization, C1.0:

  1. 28 bits. Really the minimum we should accept.
  2. 32 bits.
  3. 36 bits.
  4. 40 bits. Current limit of what can be simulated on a classical computer.
  5. 44 bits. A good goal to pursue. If not in C1.0, then relatively soon thereafter.
  6. 48 bits.
  7. 56 bits. Possibly the upper limit of what could be simulated on a classical computer.
  8. 64 bits.
  9. 72 bits.
  10. 80 bits.
  11. 96 bits.

Candidates for subsequent commercialization stages:

  1. 64 bits.
  2. 72 bits.
  3. 80 bits.
  4. 96 bits.
  5. 128 bits.
  6. 160 bits.
  7. 256 bits.
  8. 512 bits.

Beyond that is far too speculative at this stage.

For more on fine phase granularity, see my paper.

Milestones in quantum parallelism and quantum advantage

Quantum parallelism is the single most powerful capability of quantum computing, allowing any number of qubits to operate in parallel. It is the capability which permits quantum computing to achieve quantum advantage.

The primary mechanisms supporting quantum parallelism are quantum Fourier transform (QFT) and quantum phase estimation (QPE). k qubits operating in parallel produce 2^k quantum states operating in parallel.
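The advantage figures in the milestone list below follow directly from that 2^k relationship. A minimal sketch:

```python
# k qubits operating in parallel act on 2^k quantum states at once,
# which is the source of the advantage figures in the milestones below.

for k in (10, 20, 30, 40, 50):
    print(f"{k} qubits -> 2^{k} = {2 ** k:,} quantum states")
# 10 -> ~one thousand, 20 -> ~one million, 30 -> ~one billion,
# 40 -> ~one trillion, 50 -> ~one quadrillion (dramatic quantum advantage)
```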

The milestones to achieve in quantum parallelism (and QFT, QPE, and quantum advantage) are:

  1. 10 qubits = 2¹⁰ quantum states — advantage of one thousand.
  2. 16 qubits = 2¹⁶ quantum states — advantage of 64K.
  3. 20 qubits = 2²⁰ quantum states — advantage of one million.
  4. 24 qubits = 2²⁴ quantum states.
  5. 28 qubits = 2²⁸ quantum states.
  6. 30 qubits = 2³⁰ quantum states — advantage of one billion.
  7. 32 qubits = 2³² quantum states.
  8. 36 qubits = 2³⁶ quantum states.
  9. 40 qubits = 2⁴⁰ quantum states — advantage of one trillion.
  10. 44 qubits = 2⁴⁴ quantum states.
  11. 48 qubits = 2⁴⁸ quantum states.
  12. 50 qubits = 2⁵⁰ quantum states — advantage of one quadrillion. Dramatic quantum advantage.
  13. 56 qubits = 2⁵⁶ quantum states.
  14. 64 qubits = 2⁶⁴ quantum states.
  15. 72 qubits = 2⁷² quantum states.
  16. 80 qubits = 2⁸⁰ quantum states.
  17. 96 qubits = 2⁹⁶ quantum states.
  18. 128 qubits = 2¹²⁸ quantum states.

Exactly which of these milestones will have been met by the initial commercialization stage, or by each of the subsequent stages, is completely unknown, but 20 to 30 qubits is expected for the initial commercialization stage.

When might commercialization of quantum computing occur?

There is no good and credible answer as to how many more years of research and development will be needed before quantum computing is ready to be commercialized. And this would be for the initial stage of commercialization, C1.0, with a lot of important capabilities coming in the years after that initial stage.

Some possible timeframes to consider for C1.0:

  1. Within 2 years? No, no chance.
  2. 3 years? Unlikely.
  3. 4 years? Slim to modest possibility. Flip a coin.
  4. 5 years? Moderately likely. But not a slam dunk.
  5. 6 years? Fair bet.
  6. 7 years? Fairly solid chance.

And estimated timeframes for subsequent stages of commercialization are even more murky — it’s best not to speculate about them until the initial commercialization stage is near, imminent, or has already happened.

Slow, medium, and fast paths to pre-commercialization and initial commercialization

As we just said, there’s no clarity as to the timeframe for commercialization of quantum computing, but it might be illuminating to speculate about minimal, nominal, and maximal paths to both pre-commercialization and initial commercialization. These numbers are somewhat arbitrary, but hopefully helpful to bound expectations.

So, here they are:

  • Pre-commercialization. Minimal: 2 years. Nominal: 4 years. Maximal: 10 years.
  • Commercialization. Minimal: 2 years. Nominal: 3 years. Maximal: 5 years.
  • Total. Minimal: 4 years. Nominal: 7 years. Maximal: 15 years.

And to be clear, all of these numbers are highly speculative and subject to change.

How long might pre-commercialization take?

Just to recap part of what was just discussed, pre-commercialization will take:

  1. As long as it takes. Estimates are all problematic. Anybody’s guess.
  2. Minimum of 2 years. If all goes well — actually if all goes perfectly.
  3. Nominally 4 years. Assuming no major obstacles. Add a year to cover a few contingencies.
  4. Worst case 10 years. Okay, it could be even worse than that or never happen.

If you want to be both optimistic and realistic, call it two to four years.

And this is just pre-commercialization. Product engineering for even a minimum viable product (MVP) for initial commercialization, C1.0, could take another two or three years, so a total of four to seven years. Four to seven years is a long wait, but technically-sophisticated organizations could at least be doing production-scale experimentation and evaluation as soon as pre-commercialization is complete or at least mostly complete, in two to four years.

What stage are we at right now? Still early pre-commercialization.

Despite the dramatic advances of the past five years, quantum computing is still in its infancy, still in the early stages of pre-commercialization.

As I noted in a recent paper, we still don’t have any 40-qubit algorithms.

And as noted in the preceding section, it could easily be another two to four years, if not as much as ten years, before we exit from pre-commercialization.

When might pre-releases and preview releases become available?

I would expect multiple preview products during pre-commercialization, at least during the latter stages. And that doesn’t include alpha, beta, and pre-releases during commercialization, at least later as product engineering gets closer to the initial commercialization release, C1.0.

It’s simply a matter of when the technology is ready for public consumption.

Dependencies

A particular product or technology frequently depends on other products or technologies. This affects the sequencing and timing of achieving milestones.

First, there are two types of dependencies:

  1. Internal dependencies. Within a single product or technology. Subsystems, modules, and components within a system, for example.
  2. External dependencies. Between products or technologies. Software depending on hardware, or algorithms dependent on software or tools, for example.

Second, there are dependencies:

  1. Within pre-commercialization. Sequencing and timing of activities within pre-commercialization.
  2. Between commercialization and pre-commercialization. Pre-commercialization activities which must be completed before commercialization can commence.
  3. Within commercialization. Sequencing and timing of activities within commercialization.

This paper won’t attempt to enumerate such dependencies any further than has already been done earlier in this paper for the summaries of pre-commercialization and commercialization, but will simply highlight that such dependencies must be taken into account when doing detailed planning to actually perform pre-commercialization and commercialization.

Some products which enable pre-commercialization may not be relevant to commercialization

Pre-commercialization is a journey of discovery and invention — and reinvention. Some technologies or products which get developed during pre-commercialization may not work out or get superseded by newer and better products and technologies, so that those obsolete technologies won’t necessarily be relevant to commercialization proper.

This doesn’t mean that such obsolete products or technologies were a mistake — or maybe they were — but simply that pre-commercialization is a precursor to commercialization, not an accurate representation of what commercialization will actually look like once it does come into existence.

In many cases, activities during pre-commercialization may have requirements or dependencies which were greatly facilitated by products or technologies that later become obsolete, once the pre-commercialization activities which required them have completed or have been superseded by newer and better products and technologies. A lot of tools can fall into this category. They are valuable and very much needed at the time, but not so much once commercialization commences.

Risky bets: Some great ideas during pre-commercialization may not survive in commercialization

There may be great product or technology ideas and even implementations during pre-commercialization which unfortunately don’t survive into commercialization.

They may have been great at the time and very much needed, even obvious slam-dunks, but as the underlying technology of quantum computing rapidly evolves, some of these great ideas simply don’t survive and fall by the wayside.

As such, it is very risky to bet too heavily on any particular product or technology that looks great during pre-commercialization having great or any prospects once commercialization commences, or once particular subsequent stages of commercialization are achieved and might make the former great ideas obsolete.

A chance that all work products from pre-commercialization may have to be discarded to transition to commercialization

Extending the previous section, there’s no guarantee that every tangible work product from pre-commercialization won’t have to be discarded in favor of a complete redesign and reimplementation during commercialization. That worst case may be unlikely, but it is still a very real possibility.

The point is that people and planners should not presume that work products from pre-commercialization will necessarily provide a strong foundation for commercialization.

To be clear, research results and expertise gained from prototyping and experimentation during pre-commercialization are quite reusable and can still provide a strong foundation for commercialization, even if the product engineering team does start from scratch in terms of engineering work products.

Analogy to transition from ABC, ENIAC, and EDVAC research computers to UNIVAC I and IBM 701 and 650 commercial systems

The distinction between pre-commercialization and commercialization for quantum computing is loosely analogous to the transition that occurred in classical computing from the early research machines (Atanasoff’s ABC, ENIAC, EDVAC, and Whirlwind) to the UNIVAC I and IBM 701 and 650 commercial systems.

What had to happen? Four things:

  1. Incremental technology improvements.
  2. Architectural innovations.
  3. Accumulation of experience actually using the computers for practical applications.
  4. Focus on the needs of real customers.

For the timeline of early classical computers, see my paper.

Concern about overreach and overdesign — Multics vs. UNIX, OS/2 vs. Windows, IBM System/38, Intel 432, IBM RT PC ROMP vs. PowerPC, Trilogy

With so much research still needed, it might be easy for the scope of quantum computing efforts to expand rather than be constrained and focused. This has happened before, with classical computing, frequently with disastrous results. Sometimes the key is simply timing — trying to reach too far before the underlying technology is ready.

Some noteworthy examples from classical computing are:

  1. Multics. A magnificent research project, which actually resulted in a commercial product (for Honeywell), but with only limited commercial success. Much greater success came with UNIX, which was inspired by Multics but dramatically simplified to make it more viable on a wider range of commercial hardware platforms.
  2. OS/2. A grand effort to supersede Windows, which did become a commercial product, but with only limited success. Windows itself continued to evolve and grow and achieved even greater success. Windows NT was a whole new operating system, with arguably even greater features and capabilities than OS/2, but focused on both addressing a commercial customer base as well as growing the original Windows user base. Just a few years of advances in microprocessor performance and memory capacity made a huge difference.
  3. IBM System/38. A dramatic technical advance with 48-bit addressing and integrated database, but far beyond the needs of the targeted customer base. This technological and business misstep provided an opening for competitors to steal IBM customers.
  4. Intel 432. Another dramatic technical advance, with a novel processor directly supporting high-level programming languages, and the Ada programming language in particular. Technical overreach (overdesign). A complete flop, but a lot of great ideas. In contrast, the Intel 386 and 486 rapidly took off.
  5. IBM RT PC ROMP. Another dramatic technical advance, a Reduced Instruction Set Computer (RISC), which failed miserably, both technically and commercially. Succeeded by the PowerPC architecture, which was very successful for some time, especially for Apple. The latter was actually more sophisticated in addition to being more successful, but wouldn’t have been feasible at the time of the RT/ROMP. The Intel 386 also capitalized on the weak performance of ROMP, despite being technically less sophisticated.
  6. Trilogy. Legendary mainframe pioneer Gene Amdahl, who worked on the IBM 704, IBM 709, and Stretch (which became the IBM 7030), was chief architect of the IBM System/360, and founded mainframe pioneer Amdahl Corporation, started a new mainframe company, Trilogy Systems, intent on using wafer scale integration to put an entire mainframe CPU on a single chip — using virtually the whole wafer for that chip (in 1980). Alas, the project failed miserably. Ironically, the CPU chips of average desktops, servers, laptops, tablets, and even smartphones today are far more complex, but much smaller, cheaper, faster, and with far greater capacity. Timing is everything. Their timing sucked. A great example of overreach: the ideas were fantastic and logically sound, other than a number of nasty implementation details.

So many great technical ideas, but all so far ahead of their time.

Research and prototyping can explore greater limits, but product success requires careful and intensive focus on what’s really essential and most beneficial — and practical, feasible, and economically viable. Additional capabilities can be added in future stages, as needed — and as feasible and economical.

Full treatment of commercialization — a separate paper, eventually

This informal paper focuses on pre-commercialization rather than commercialization proper. Granted, a fair amount of summary and even detail of commercialization has been included in this paper, but a full and exhaustive treatment of commercialization, in all of its detail, is beyond the scope of this informal paper.

I’ll leave it to a future paper to get into that level of detail on commercialization proper, but no time soon, since the primary focus needs to be pre-commercialization for quite some time to come.

Besides, the coverage of commercialization that is included should be sufficient for most readers for a number of years to come.

Beware betaware

Betaware is a pejorative term referring to releasing incomplete or unproven technology as if it were ready for production use. Sure, that short-cut, short-circuiting a methodical and rigorous product engineering process, can and does work in some situations, but it is a rather risky approach in many other situations, especially if human life or limb, mission-critical business processes, financial transactions, personal privacy, or national security are at stake.

In the context of this paper, attempting to use a preview product produced during pre-commercialization in production deployment would be an example of betaware. Clearly, a preview product, or an alpha, beta, or pre-release product, should never be used in a production deployment.

Betaware does have its place, but not in production — or in planning for production.

Vaporware — don’t believe it until you see it yourself

Vaporware is a product which has been promised and heavily touted for a myriad of amazing benefits, but… simply doesn’t exist. Maybe it will exist eventually, but sometimes it never comes into existence. It could be software, most generally, or hardware as well.

Betaware differs from vaporware in that it actually exists and you can actually use it — it’s just not typically production-quality or at least not guaranteed to be production-quality.

Commercial grade and industrial grade quality of products

The quality and reliability of products during pre-commercialization will vary greatly. Sometimes great. Sometimes not so great. Sometimes mediocre. And sometimes outright ugly.

Quality will improve as pre-commercialization progresses towards commercialization, but sometimes it won’t, with an occasional setback.

Quality during commercialization is another story. Commercial quality — commercial grade quality — is an absolute requirement during commercialization. That’s non-negotiable.

Commercial grade and industrial grade quality are frequently synonyms, but there are some subtle distinctions:

  1. Commercial grade. Generally means good enough for most customers and users. Generally free of bugs and other defects, and generally high quality. Customers and users are generally happy with what they get.
  2. Industrial grade. Goes well beyond commercial grade. Designed to work in extreme environments and under extreme stress, where any bugs or defects could cause disastrous consequences. May not be needed in most commercial settings, but required whenever human life or limb or mission-critical business processes or financial transactions or personal privacy or national security are at stake. Think rockets, satellites, missiles, implanted medical devices, and Wall Street and the Federal Reserve.

Right now, quantum computing is still in the prototyping and experimentation stage (pre-commercialization), so commercial grade and industrial grade quality are neither needed nor required, generally.

But once quantum computing transitions out of pre-commercialization into commercialization, quality requirements will jump up dramatically to commercial grade. Probably not all the way to industrial grade for most applications, at least in the early stages of commercialization, but it won’t take long before more-demanding quantum applications will begin to require industrial grade quality.

Pre-commercialization is about constant change while commercialization is about stability and carefully controlled and compatible evolution

The whole point of pre-commercialization is that there are lots of open issues and questions without clear or stable answers. The process of resolving them will result in a continuous sequence of changes, sometimes relatively minor but sometimes very disruptive.

The Lunatic Fringe will be perfectly happy with such constant change, but most organizations cannot cope with such constant change. That’s the point of commercialization, to provide a sense of stability, with any change being carefully controlled and done in a compatible manner to preserve technological investments.

During pre-commercialization, there’s no guarantee that work products that worked today will work tomorrow. The hardware is evolving rapidly, as is the software. But once we enter commercialization, customers and users will need to be able to rely on products, algorithms, and applications that work the same one day, one week, one month, one year, five years, and ten years in the future. Compatibility and consistency will be the watchwords — during commercialization, but during pre-commercialization not so much.

Customers and users prefer carefully designed products, not cobbled prototypes

Again, The Lunatic Fringe will be perfectly happy with the prototype products of pre-commercialization, which are cobbled together from disparate components and which are far from polished: incomplete, inconsistent, and even cryptic and problematic to use. Worst case, they’re more than happy to roll up their sleeves and fix or even replace any problematic or balky products or components.

Commercial customers and users on the other hand would prefer carefully and even artfully designed products, which do everything that is expected of them, do it well, and do it smoothly and without any hitches. Prototypes simply won’t be acceptable to them. And this is what commercialization is for, to design, develop, test, deliver, and deploy carefully designed products that are completely free of any drama.

Customers and users will seek the stability of methodical commercialization, not the chaos of pre-commercialization

Customers and users of commercial products want to avoid all sense of drama in their IT products. They want the stability and consistency of commercialization rather than the chaos of pre-commercialization.

Again, The Lunatic Fringe is perfectly happy and at home with the chaos of pre-commercialization.

To each his own. In truth, there are two distinct audiences for pre-commercialization and commercialization. They don’t see eye to eye. They are not compatible. But, they both have their strengths despite their weaknesses, so both are to be valued and catered to.

Need for larger capacity, higher performance, more accurate classical quantum simulators

High quality classical quantum simulators are needed during both pre-commercialization and commercialization.

During pre-commercialization they are especially needed due to limited and preliminary hardware. Designers of quantum algorithms and developers of quantum applications need to be able to project how their designs will operate on future hardware, both near-term and long-term.

Designers of quantum algorithms and developers of quantum applications during commercialization might have a somewhat lesser need for simulation since adequate hardware will be available, but still have the need to project how their designs will operate on future hardware.

Simulators are also useful for debugging tricky algorithm issues, but only for algorithms up to about 40 to 50 qubits.
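The 40-to-50-qubit ceiling follows from simple arithmetic: a full state vector simulation must store 2^n complex amplitudes. A minimal sketch, assuming 16 bytes per amplitude (double-precision complex):

```python
# Why ~40 to 50 qubits is the practical ceiling for full state vector
# simulation: memory grows as 2^n complex amplitudes.

BYTES_PER_AMPLITUDE = 16  # assumed double-precision complex (complex128)

for n in (30, 40, 50):
    gib = (2 ** n) * BYTES_PER_AMPLITUDE / 2 ** 30  # gibibytes
    print(f"{n} qubits: {gib:>13,.0f} GiB of RAM")
# 30 qubits: 16 GiB; 40 qubits: 16,384 GiB (16 TiB);
# 50 qubits: 16,777,216 GiB (16 PiB)
```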

For more on the needs for simulation of quantum algorithms and applications, see my papers…

Hardware infrastructure and services buildout

Regardless of whether customers acquire their own quantum computer hardware or access vendor or service provider quantum computers over the Internet as a cloud service, quantum computing hardware infrastructure and services must still be built out. The concern in this paper is that there be sufficient buildout to satisfy the demand for quantum algorithm execution.

The need for quantum computing hardware infrastructure and services will obviously be greatest during commercialization, but will still be at least a minor issue during pre-commercialization, at least in the later stages once the bulk of research has completed and prototyping and experimentation pick up steam.

Hardware infrastructure and services buildout is not an issue, priority, or worry yet since the focus is on research

Although there is indeed a modest amount of prototyping and experimentation going on right now so that some hardware is required, the needs are fairly minimal. This state of affairs will continue as long as research is the main priority.

At some point, enough of the open research issues will have been addressed so that more serious prototyping and experimentation, even approaching production-scale algorithms and applications, will require significantly greater hardware infrastructure and services buildout, but we’re not there yet. Not even close.

Substantial research advances will be required before we get to the stage of needing any significant hardware infrastructure and services buildout.

Factors driving hardware infrastructure and services buildout

Once demand for quantum computers grows to the point of significantly driving hardware infrastructure and services buildout, a number of factors will drive the capacity of hardware infrastructure and services which will be required (a rough capacity sketch follows the list):

  1. Count of customers.
  2. Count of quantum applications at each customer.
  3. Frequency and duration of execution of each application.
  4. Count of quantum algorithms used by each application.
  5. Count of invocations of each quantum algorithm (circuit).
  6. Count of circuit repetitions (shots) for each quantum algorithm invocation.
  7. Count of gates in each quantum circuit. Both total gate count and maximum depth.
  8. Rate of growth expected. In all of the above factors. Annual or otherwise.
  9. Redundancy. To account for geographic distribution, hardware failures, maintenance, peak periods, and service level agreements (SLA).
  10. Service level agreements (SLA). Additional redundancy required. Guaranteed availability and response time.
  11. Monitoring and management.
  12. Cybersecurity. Monitoring, threat detection, mitigation.

For frequency of execution, there will be a number of measures:

  1. Average over time.
  2. High and low seasonal rates.
  3. Day of week usage patterns.
  4. Holiday usage patterns.
  5. Peak rates. May be multiple peaks.

Maybe a surge in demand for hardware infrastructure and services late in pre-commercialization

Late in pre-commercialization, as the onset of commercialization approaches, enough of the open research issues will have been addressed so that more serious prototyping and experimentation, even approaching production-scale algorithms and applications, may require significantly greater hardware infrastructure and services, spurring the first stage of a true and significant buildout.

How much of a buildout? Unknown.

Demand could grow gradually, over a year to eighteen months or even two years, or it could just suddenly spike if there are dramatic breakthroughs in hardware or algorithms, especially when The ENIAC Moment is reached.

Expect a surge in demand for hardware infrastructure and services once The ENIAC Moment has been reached

The arrival of The ENIAC Moment may literally open the floodgates of demand for hardware and services for quantum computing. Once one team proves what can be done, many other teams will pile on and follow suit.

Maybe not, or maybe more slowly, especially since it will likely still be only the most elite teams who can master the technology of quantum algorithms and quantum applications.

But a dramatic surge cannot be ruled out.

And this could happen in pre-commercialization as well as commercialization.

Development of standards for QA, documentation, and benchmarking

It would be best to get started on developing standards for QA, documentation, and benchmarking as soon as possible. Certainly the effort should be well-underway by early in commercialization, but it would be better to get started during pre-commercialization since it can take some significant elapsed time to work through all issues and discover the sweet spot(s).

Some of the activities and milestones:

  1. Informal development of QA, doc, benchmarking. Especially during pre-commercialization.
  2. Rudiments of conventions.
  3. Initiation of standards process.
  4. Standards underway.
  5. Initial standards.
  6. Validation of standards.
  7. Approval process.
  8. Initial adoption of standards.
  9. Incremental corrections and enhancements to standards.
  10. Vigorous acceptance of standards.
  11. Pragmatic reasons for failure to fully adopt the new standards.
  12. Vigorous adoption of standards.
  13. Incremental evolution of standards as the technology evolves.

Consider my proposal for documenting Principles of Operation for quantum computers:

Business development during pre-commercialization

Business development is an important activity during commercialization, focusing on enabling organizations to actually develop and deploy production applications. The technology has already been proven to work, has been packaged as a commercial product, and is ready for SLAs — service level agreements, which are contractual agreements or commitments that the technology will perform up to an expected level of reliability and performance. Pre-commercialization is a different matter since production deployment is out of the question.

A variation on commercial business development is still relevant during pre-commercialization, but is rather different since:

  1. The technology is not ready for production deployment.
  2. The technology has not even been proven to work.
  3. There are no production-quality production-scale applications, nor are they feasible. They will have to wait for commercialization.
  4. Expectations for the eventual product haven’t been set — or at least they shouldn’t be set until more is known about the technology.
  5. A service level agreement (SLA) would be wholly inappropriate at this stage. Except maybe for availability and throughput for experiments.
  6. No clarity as to the timeframe when the technology or a finished commercial product might become available.

Business development during pre-commercialization is most effective for:

  1. Building technology awareness. But not product awareness, yet, since there are no production-ready products, yet.
  2. Promoting prototyping and experimentation.
  3. Promoting technology familiarization.
  4. Marketing consulting services. Rather than product acquisition and deployment.
  5. Marketing quantum-enabling products. To be used during pre-commercialization. Such as tools and support software. And hardware components used to build quantum computers.

Some preliminary commercial business development late in pre-commercialization

Even though it’s generally inappropriate to engage in commercial business development during pre-commercialization, when the shape of an eventual commercial product isn’t even known, late in pre-commercialization, once the commercialization stage is imminent, it can make sense to begin at least preliminary commercial business development.

It will still be inappropriate to consummate any commercial business deals that early, but at least the discussions can begin.

An exception is quantum-enabling products, such as tools and support software, and hardware components used to build quantum computers, which in fact are useful during pre-commercialization even though no production applications are being deployed.

Preliminary commercial business development early in initial commercialization stage

Much the same is true in the early portion of the initial commercialization stage, where at least some product details are known even though a full commercial product is still some distance down the road. Such discussions can set vague, general expectations, but not with enough detail and certainty to finalize a formal legal contract such as a service level agreement (SLA.)

Only near the end of the initial commercialization stage, when product release is imminent, would it make sense to finalize and seal any deals.

As details and certainty evolve from the start of the initial commercialization stage to its end, when initial commercial product release occurs, considerable judgment is needed to decide what level of discussion and commitment is appropriate for progressing through the stages of any proposed business deals.

Some preliminary discussions could be held around alpha testing.

Some deeper preliminary discussions could be held around beta testing.

And there is the same exception of quantum-enabling products, such as tools and support software, and hardware components used to build quantum computers, which in fact are useful during the early part of commercialization even though no production applications are being deployed yet.

Deeper commercial business development should wait until after pre-releases late in the initial commercialization stage

Deeper business development discussions should wait until customers have access to pre-releases of the product so that they can experiment and validate any claims or expectations.

Consortiums for configurable packaged quantum solutions

Configurable packaged quantum solutions provide a great opportunity for widespread quantum applications which won’t require the customer to do any coding or algorithm design, but it will take a substantial investment to create such solutions. Rather than expect individual companies or research institutions to take on the full burden, it would make more sense to form development consortiums to perform such development, each participating organization contributing based on its own particular area of expertise or interest.

Finalizing service level agreements (SLA) should not occur until late in the initial commercialization stage, not during pre-commercialization

Just to make it crystal clear, finalizing service level agreements (SLA) should not occur during pre-commercialization, or even early in the initial commercialization stage.

Service level agreements should be finalized only when release of the initial commercial product is imminent. Only after alpha, beta, and pre-release testing.

IBM — The epitome of pre-commercialization, still heavily focused on research as well as customers prototyping and experimenting

IBM is actually the epitome of what this paper is calling pre-commercialization:

  1. IBM is still focused very heavily on research. Organizationally, the bulk of their effort is research.
  2. No hint of a true product engineering team at IBM. This is still a research project. Still prototyping new hardware and new software. No sense of what a commercial product would look like.
  3. IBM non-research staff facilitating customer experimentation and prototyping. Busy encouraging customers to experiment with the new, unproven, and evolving technology. Building excitement and enthusiasm, but no commercial products.
  4. IBM customers focused on prototyping and experimentation. No hint of efforts towards production-scale algorithms or applications yet.

You can be sure that IBM — and their customers — will turn quantum computing into a commercial product at the earliest juncture, but there is no hint of that occurring imminently.

No hint of any imminent ENIAC Moment.

Right now, the 127-qubit Eagle quantum processor is expected within the next month or so (by the end of 2021), but there is no hint that it will magically break out of pre-commercialization or come even close to being sufficient for The ENIAC Moment. Ditto for the 433-qubit Osprey quantum processor expected next year (2022.)

Raw qubit count is not the current gating factor — IBM already has a 65-qubit quantum processor. Rather, qubit fidelity, gate fidelity, connectivity, measurement errors, and coherence time are dominant gating factors on the hardware front. Lack of a high-level programming model and the lack of 40-qubit algorithms are gating factors on the algorithm front.

IBM probably has at least three to five years of pre-commercialization ahead of it. And at least two to three years of commercialization after that — very optimistically.

Oracle — No hint of prime-time application commercialization

Oracle is very big in databases, programming tools, and applications, so you can be sure that they will be heavily involved once quantum computing really is ready for prime-time application development and deployment, but… so far, there is no hint of that.

That’s rather persuasive evidence that quantum computing is not yet at the commercialization stage.

Amazon — Research and facilitating prototyping and experimentation

Amazon has two great potentials for quantum computing:

  1. As a customer and user of quantum computing and developer of complex and data-intensive applications.
  2. As a cloud service provider.

AWS is already offering cloud-based quantum computing services, but since the existing quantum computing hardware is very limited, it is only usable for research, prototyping, and experimentation — what this paper refers to as pre-commercialization.

Sure, once quantum computing research progresses and hardware vendors begin to offer commercialized products, you can be sure that Amazon will offer access to them as cloud services, but… that’s not yet happening and is not on the near horizon.

Amazon is also directly involved in research for quantum computing, with in-house research staff and a recently opened joint research hub at Caltech:

Note the mention of “next five or 10 years”, clearly focusing on long-term research rather than current or imminent commercialization.

In short, Amazon is heavily into pre-commercialization of quantum computing. Yes, they will be big in commercialization as well, but that’s off in the future.

Okay, technically they are in fact into commercialization, but as a tool and service provider of quantum-enabling technologies, not quantum-enabled technologies. This is all consistent with still being at the pre-commercialization stage.

Pre-commercialization is the realm of the lunatic fringe

The concept of the lunatic fringe in technology innovation refers to the leading edge or actually the bleeding edge of early-early adopters who are literally willing to try any new technology long before it is proven and ready for commercial deployment. And in fact they don’t even mind if it doesn’t work properly yet — since they enjoy fixing products themselves.

The lunatic fringe are drawn to quantum computing like moths to a flame — they cannot resist even though they may get burned very badly before the technology really is ready for commercial deployment.

For more on the lunatic fringe, especially related to quantum computing, see my paper:

For deeper background on the basic concept of the lunatic fringe, see my paper:

Quantum Ready

IBM’s notion of Quantum Ready is a statement about people and organizations preparing themselves intellectually and organizationally for the eventual arrival of quantum computing, but is not intended to suggest that commercial quantum technology products are here today and ready for production deployment or even imminent. Rather, the emphasis is on being prepared for what will be coming, even if it isn’t imminent.

As IBM put it four years ago:

  • We are currently in a period of history when we can prepare for a future where quantum computers offer a clear computational advantage for solving important problems that are currently intractable. This is the “quantum ready” phase.
  • Think of it this way: What if everyone in the 1960s had a decade to prepare for PCs, from hardware to programming over the cloud, while they were still prototypes? In hindsight, we can all see that jumping in early would have been the right call. That’s where we are with quantum computing today. Now is the time to begin exploring what we can do with quantum computers, across a variety of potential applications. Those who wait until fault-tolerance might risk losing out on much nearer-term opportunities.

IBM’s original Quantum Ready blog post:

Quantum Ready — The criteria and timing will be a fielder’s choice based on needs and interests

Quantum Ready is not a one size fits all proposition. Every organization has different needs and interests.

Some organizations will need and want to get started in quantum computing as early as possible.

Others will take a wait and see approach.

Others will simply wait until the new technology has matured and proven itself.

Some organizations will be producers of quantum computing technology, while others will simply be consumers of quantum computing technology. The distinction will impact the timing of getting into quantum computing.

Some organizations consider themselves to be leaders or the leading edge, others the bleeding edge, others fast followers, while others are content to be laggards.

Some organizations, teams, and individuals may indeed need to get Quantum Ready during pre-commercialization, some in the early stages, some in the later stages, but many won’t need to even start getting Quantum Ready until the beginning of commercialization or even later.

Each organization will have to make its own assessment of its needs and interests, to establish the criteria and timing of their entry into quantum computing. It’s truly a fielder’s choice.

Quantum Ready — Be early, but not too early

I half-agree with IBM. I agree with their general sentiment, that organizations do need to prepare and to be early, but I do think it is a mistake to be too early. And with quantum computing, it is super-clear to me that even now, four years after IBM’s words, it is still too early for most organizations to be betting too heavily on quantum computing.

In particular, the programming model is still far too primitive for most organizations to utilize effectively. And working with noisy NISQ qubits is far too difficult for most organizations.

I do agree with IBM’s words that organizations shouldn’t wait until fault-tolerance (automatic quantum error correction, QEC) before getting ready for quantum computing, but I do think that several technical gating factors really do need to be achieved before even leading-edge organizations begin serious preparation for quantum computing.

Quantum Ready — Critical technical gating factors

A handful of critical technical gating factors really do need to be achieved before even leading-edge organizations can begin serious preparation for quantum computing:

  1. Near-perfect qubits. At least four nines of qubit fidelity (99.99%), preferably five nines (99.999%). Still far short of true fault tolerance, but close enough that many simple to moderate-complexity algorithms can be implemented without superhuman effort.
  2. 40-qubit algorithms are common. Or at least 32 qubits. At least in the academic literature.
  3. Automatically scalable algorithms are common. Investing in non-scalable algorithms is a very serious blunder.
  4. Robust collection of high-level algorithmic building blocks. Can construct reasonably complex quantum algorithms without superhuman effort.
  5. A substantial fraction of quantum advantage is readily achievable and common. Again, without superhuman effort.

Without these critical factors in place, any investment in quantum computing is likely to be a throwaway which must be scrapped or reworked once the technology has matured enough that it is ready for use by most organizations.
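
To see why four or five nines of fidelity is the threshold, consider a crude arithmetic sketch. It assumes each gate fails independently with probability one minus its fidelity, which is a simplification (real devices suffer correlated and coherent errors), but it gives the right intuition:

```python
# Crude circuit-reliability estimate under an independent gate-error model.
# Assumption: each gate fails independently with probability (1 - fidelity).
def success_probability(gate_fidelity: float, gate_count: int) -> float:
    return gate_fidelity ** gate_count

for label, fidelity in [("three nines", 0.999),
                        ("four nines", 0.9999),
                        ("five nines", 0.99999)]:
    p = success_probability(fidelity, 1000)
    print(f"{label}, 1,000 gates: ~{p:.1%} chance of an error-free run")
# three nines: ~36.8%; four nines: ~90.5%; five nines: ~99.0%
```

At three nines, a 1,000-gate circuit fails almost two times out of three; at four or five nines, the same circuit succeeds most of the time, which is why near-perfect qubits make moderately complex algorithms practical without superhuman effort.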

Quantum Ready — When the ENIAC Moment has been achieved

The ENIAC Moment will be the first time that anybody has demonstrated a production-scale quantum algorithm and quantum application which addresses a practical real-world problem.

How soon might this happen? Anybody’s guess — it’s all a matter of speculation at this stage. But not soon. Probably at least two to four or even five years — or more.

This will likely be more of an academic or research effort, but it will at least prove that it is possible.

It could still be another two to three years before similar such efforts can be replicated and made ready for production deployment.

This could be the most critical technical gating factor, proving that the raw technical capabilities of the critical technical gating factors listed in the previous section can actually be marshalled in a practical manner.

To be clear, even once The ENIAC Moment has been achieved, it will still be early; leading-edge organizations that decide that this is the moment to get Quantum Ready will definitely not be too late.

Also to be clear, the arrival of The ENIAC Moment does not mean that it will be easy to design and develop quantum algorithms and quantum applications, but simply that somebody — probably a very elite technical team — has finally managed to develop and demonstrate a production-scale practical real-world quantum application. Replicating that feat will be possible, but not so easy.

But it at least provides some degree of safety and confidence.

Quantum Ready — It’s never too early for The Lunatic Fringe

We can debate exactly what the right moment is for a given organization to decide to become Quantum Ready, but for one group, The Lunatic Fringe, it is never too early. After all, by definition, they are willing to work with any technology at any stage, even before it is remotely close to being usable. But they are not representative of most organizations.

Quantum Ready — Light vs. heavy technical talent

Not all organizations will be recruiting a small army of elite PhD quantum heavyweights.

Some organizations might opt to outsource any and all heavyweight quantum work.

Sure, some organizations will in fact opt to do their own quantum research, possibly even designing and building their own quantum computers.

Some organizations will simply be users of quantum applications designed, developed, and marketed by others, such as configurable packaged quantum solutions. Their requirements for heavyweight quantum technical talent may be relatively modest or in fact nonexistent.

Recruiting and talent retention will be a really big deal for some organizations, but not for all.

Those who actually do have a need for heavyweight PhD talent will have their work cut out for them and face an incredibly competitive talent market. These organizations definitely will have to get started early recruiting, building, and retaining heavyweight quantum technical talent. In fact, if they are not already deeply invested in a heavyweight PhD talent pool, it may already be too late. Or rather, it’s probably never truly too late; it just gets much harder, much more expensive, and much more risky.

Some organizations will in fact be able to make do by training existing in-house technical talent in quantum computing. They can afford to wait until the technology matures.

In short, there is no one size fits all guidance for when a given organization needs to get Quantum Ready from a technical talent perspective. Some definitely need to be early, while others can afford to wait.

Quantum Ready — For algorithm and application researchers anytime during pre-commercialization is fine, but for simulation only

Until quantum computing hardware has begun to approach the target capabilities for commercialization, all algorithm and application research should be restricted to simulation, with the simulator configured to match the target capabilities of commercialization (with the exception that qubit capacity will be capped at 32 to 40 or possibly 50 qubits, the practical limit of the simulator.)

Once the target capabilities for commercialization are imminent, research can include testing on real hardware, but only as a test. Design, development, and functional testing of quantum algorithms and quantum applications should continue to be performed via simulation only, to avoid distractions caused by ongoing hardware anomalies, until the hardware actually has reached its full target capabilities for commercialization.

Once the hardware has reached its full target capabilities for commercialization, the only reliance on hardware during design and development is to validate the correct behavior of the quantum algorithms for more qubits than are supported by the simulator — beyond 32 to 50 qubits.

But even then, the real reliance is on automatic scalability of the algorithms. Even executing the algorithms with 55 to 64 to 80 to 96 to 128 qubits should give predictable results, thanks to automated analysis of the algorithm to detect dependencies on any hardware features which might not scale properly beyond the range of simulation.

The expectation is that algorithms should be demonstrated and fully tested to assure that they scale to 40 qubits and beyond — to the practical limit of the simulator.

In short, algorithm and application research, including prototyping and experimentation, can occur during pre-commercialization, but only using simulation configured to match the target hardware capabilities expected at the initial commercialization stage (or beyond.) And algorithms must be designed for automatic scalability.
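
To illustrate what that looks like in practice, here is a minimal Qiskit-style sketch of an automatically scalable algorithm validated across a range of sizes on a simulator. The GHZ circuit here is just a stand-in for a real algorithm, and the API shown is the pre-1.0 Qiskit style, which may differ in other versions:

```python
from qiskit import QuantumCircuit, Aer, execute

def build_ghz(n: int) -> QuantumCircuit:
    """Automatically scalable circuit generator: one size parameter, n qubits."""
    qc = QuantumCircuit(n, n)
    qc.h(0)
    for i in range(n - 1):
        qc.cx(i, i + 1)  # entangle each qubit with the next
    qc.measure(range(n), range(n))
    return qc

backend = Aer.get_backend("qasm_simulator")

# Validate the same generator at increasing sizes, up to the simulator's practical limit.
for n in [4, 8, 16, 24]:
    counts = execute(build_ghz(n), backend, shots=1000).result().get_counts()
    ok = set(counts) <= {"0" * n, "1" * n}  # an ideal GHZ run yields only all-0s or all-1s
    print(f"{n} qubits: {'scales correctly' if ok else 'unexpected results'}")
```

The essential pattern is that the entire algorithm is generated from a single size parameter, so the identical code can be re-validated at every size the simulator supports, and later on real hardware beyond the simulator’s limit.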

Quantum Ready — Caveat: Any work, knowledge, or skill developed during pre-commercialization runs the risk of being obsolete by the time of commercialization

It’s all well and good to want to get a jump on the process and prototype and experiment with quantum computing during pre-commercialization, but there is one particularly grave risk: everything is dynamic and constantly subject to change, so that some or maybe even all of the work performed during pre-commercialization may be partially or even completely obsolete by the time commercialization rolls around.

This is especially true during the early stages of pre-commercialization.

And also at any point when major innovations occur.

It’s a safer bet to wait until the later stages of pre-commercialization, when the technology seems to be settling down, but even then it’s a risk since there may be late-breaking innovations which break assumptions that prototypes and experiments may have made about using the technology. It’s not as great a risk, but it’s plausible, so people should be prepared.

The bottom line is that management and planners should have an expectation of reviewing and possibly revising or even completely redoing work that has been completed during pre-commercialization.

As long as you are fully prepared to completely throw out work products from pre-commercialization, you should be set.

The hope is that research knowledge and the actual technology will incrementally advance as pre-commercialization progresses, and it is primarily that knowledge which will form the basis of commercialization, not the specific work products.

To be clear, it is the final knowledge at the end of pre-commercialization which matters, not preliminary knowledge earlier in pre-commercialization.

Quantum Ready — The technology will be constantly changing

It will be the old adage: the only constant is change. Even if the evolution of quantum computing technology does manage to remain somewhat compatible from release to release, there will be an urge to adapt to take advantage of the new benefits of each change.

Once an organization commits to beginning to be Quantum Ready, virtually every change in the technology will have to be evaluated to decide whether it provides benefits to the organization or is required due to incompatibility.

Some changes can be adopted gradually over time, while other changes will require immediate dramatic adaptation.

Get ready, get used to it, prepare for constant change. Or, hold off and defer entry until the organization is truly prepared to make the necessary commitment to constantly coping with change.

Quantum Ready — Leaders, fast-followers, and laggards

Hmmm… maybe this constant change might explain why some organizations prefer to be fast-followers rather than leaders, or even prefer to be outright laggards.

Quantum Ready — Setting expectations for commercialization

A general characteristic of the notion of Quantum Ready is — or should be — that vendors and technologists are finally able to set expectations as to what features, capabilities, and results can be expected when quantum computing does eventually achieve its initial commercialization stage.

Prior to that moment, nobody will be really sure what a commercialized quantum computing offering will look like.

At present, expectation setting is not even visible on the horizon.

This suggests that we are still in the middle of pre-commercialization. By definition, we won’t have visibility on commercialization expectations until we near the end of pre-commercialization, which is still years away, even by optimistic measures.

In general, there will be no reason for an organization to become Quantum Ready until they have visibility into what exactly they need to be ready for.

Quantum Ready — Or maybe people should wait for fault-tolerance?

Noisy or error-prone qubits are a real challenge. Fault-tolerance or quantum error correction (QEC) is the answer, but it will take more than a few years and require significant advances in quantum hardware.

This is why IBM says “Those who wait until fault-tolerance might risk losing out on much nearer-term opportunities.”

The operative word there is… “might.” Not necessarily, and not for a certainty.

I agree that it would likely be a very long wait. Maybe five years, or more.

But there is another possibility, that near-perfect qubits might be good enough for many or most quantum algorithms and quantum applications.

Quantum Ready — Near-perfect qubits might be a good time to get ready

Near-perfect qubits are not perfect, but they may be good enough for many or most quantum algorithms and quantum applications. Qubit fidelity of four to five nines — 99.99% to 99.999% — can probably satisfy many or most applications.

The point is that near-perfect qubits are likely to be available much sooner than full fault-tolerance — quantum error correction (QEC).

The arrival and availability of near-perfect qubits might in fact be approximately the optimal time for many or most organizations to dive in with confidence and focus on getting Quantum Ready with great vigor.

Quantum Ready — Maybe wait for The FORTRAN Moment?

Another possibility is to wait for the arrival of The FORTRAN Moment, when a high-level programming model and high-level programming language are finally available, so that an organization won’t need an elite technical team to develop complex quantum algorithms and applications.

But, as with fault tolerance, that might be a long wait. In fact, it might come after fault-tolerance.

Still, some organizations may simply never be able to field the kind of elite technical team required for complex quantum algorithms and quantum applications, so The FORTRAN Moment will have appeal for them.

Quantum Ready — Wait for configurable packaged quantum solutions

Another option, especially for less-sophisticated organizations, is to refrain from directly designing and developing quantum algorithms and quantum applications, and instead to acquire configurable packaged quantum solutions which permit a less-sophisticated technical team to simply configure the applications for the desired input data, without any need to look at, touch, create, modify, or even understand the actual quantum algorithm or application code.

This option does not exist today, but I expect that it will in five to seven years or so.

This option won’t help organizations that require full-custom quantum algorithms and quantum applications, but should be sufficient for many if not the majority of organizations.

In this case, there is no need for the organization to become Quantum Ready per se, since all of the quantum features will be hidden under the hood, presuming that they are able to use quantum hardware in the cloud. If their needs are great enough, they may need their own quantum hardware, but that’s more of an operational IT issue than an application development issue.

Quantum Ready — Not all staff within the organization have to get Quantum Ready at the same time or pace

Just as each organization will have its own needs, interests, criteria, and timing for when to get Quantum Ready, different divisions, departments, projects, teams, roles, and even individuals will have different needs, interests, criteria, and timing for when to get Quantum Ready. It won’t be a one size fits all proposition even within a particular organization. The timing will vary, as will the pace.

Some may need to get Quantum Ready early in pre-commercialization, others later in pre-commercialization, others not until early commercialization, others not until later stages of commercialization, others at any stage in between, and still others not at all.

Shor’s algorithm implementation for large public encryption keys? Not soon.

Given present understanding, it doesn’t appear that Shor’s factoring algorithm will be practical for factoring 2048-bit public encryption keys in the next 5–7 years, maybe much longer, well beyond the timeframe for commercialization in this paper. But, who can predict when breakthroughs might occur?

That said, it would still be helpful to track the progress of implementations of Shor’s factoring algorithm, noting what size of integer they can factor, at all stages during pre-commercialization and commercialization.

I do think that would be a reasonable benchmarking test.
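
As a rough illustration of the scale of the problem, here is a minimal logical-resource sketch based on Beauregard’s well-known 2n+3-qubit circuit construction for factoring an n-bit integer. The physical-per-logical overhead factor is a loose illustrative assumption, not a vetted engineering estimate; published estimates vary widely with error rates and code distances:

```python
# Logical-qubit count for Shor's algorithm on an n-bit modulus, using
# Beauregard's 2n+3 construction (arXiv:quant-ph/0205095). The 1,000x
# physical-per-logical overhead is a loose illustrative assumption.

def shor_logical_qubits(key_bits: int) -> int:
    return 2 * key_bits + 3

PHYSICAL_PER_LOGICAL = 1000

for key_bits in [256, 1024, 2048]:
    logical = shor_logical_qubits(key_bits)
    physical = logical * PHYSICAL_PER_LOGICAL
    print(f"{key_bits}-bit key: {logical:,} logical qubits, ~{physical:,} physical qubits")
# 2048-bit key: 4,099 logical qubits, ~4,099,000 physical qubits
```

Against today’s processors of roughly a hundred noisy physical qubits, and with gate counts growing even faster than qubit counts, the multi-year gap is evident.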

Quantum true random number generation as an application is beyond the scope of general-purpose quantum computing

There is one special niche of quantum computing which has already achieved quantum advantage over classical computing — generation of true random numbers (TRNG or QRNG). It turns out that generating random numbers is trivial on a quantum computer since it is a fundamental aspect of the operation of qubits. But, that said, I consider it outside the scope of general-purpose quantum computing as an application since there is no logic or even computation required — it doesn’t require a general-purpose quantum computer, just a single operation on a single qubit.

A classical Turing machine can compute any mathematically computable function, by definition. Although there are mathematical algorithms for generation of pseudo-random numbers, true random numbers are, by definition, not computable — no classical algorithm or even any mathematical formula can “compute” a true random number using algebraic operations and logic. Special hardware can generate true random numbers, but not an algorithm on a Turing machine — see my paper below.

But a quantum computer can “compute” a true random number — just use a Hadamard gate to place a qubit into an exact superposition of 0 and 1, then perform a measurement, and you get a 0 or a 1 with equal probability. Do this with as many qubits or as many times as you want to get an n-bit random number. It’s really that easy.
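
As a concrete sketch of that recipe, here is a minimal Qiskit-style example (pre-1.0 API; details vary across versions). One important caveat: run on a simulator, as here, the bits come from a classical pseudo-random generator; only execution on real quantum hardware yields true randomness:

```python
from qiskit import QuantumCircuit, Aer, execute

n = 8  # bits of randomness desired

qc = QuantumCircuit(n, n)
qc.h(range(n))                  # equal superposition of 0 and 1 on every qubit
qc.measure(range(n), range(n))  # each measurement collapses to 0 or 1 with equal probability

# On a simulator this is only pseudo-random; real hardware gives true randomness.
backend = Aer.get_backend("qasm_simulator")
bitstring = execute(qc, backend, shots=1, memory=True).result().get_memory()[0]
print(int(bitstring, 2))        # an n-bit random number
```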

Technically this is not a computation in a mathematical sense, since no sequence of algebraic operations and logic can produce a true random number.

In any case, this specialized form of “computation” is not emblematic of general-purpose quantum computing, so an application that simply generates random numbers is not an interesting application that is representative of the kinds of applications we seek for quantum computing, such as drug discovery, material design, quantum computational chemistry, machine learning, finance, or business process optimization.

Yes, random numbers and superpositions are indeed useful and even required for many quantum algorithms and quantum applications, but an application which only generates random numbers is not an example of general-purpose quantum computing.

So, although even today’s Noisy NISQ quantum computers can support the application of generating true random numbers, that’s not an indication of an ability to support general-purpose quantum computing.

For more on quantum true random number generation, see my paper:

Summary and conclusions

  1. To be clear, this paper is only a proposed model for approaching commercialization of quantum computing. How reality plays out is anybody’s guess.
  2. Commercialization is where all the real action will be, the holy grail of quantum computing.
  3. But first we need to complete a vast amount of research, prototyping, and experimentation, which we call pre-commercialization. Only then can the nascent industry proceed to commercialization.
  4. Research is the first, foremost, and main focus of pre-commercialization.
  5. It’s not clear how much research will be needed to complete pre-commercialization to be ready for commercialization.
  6. Ongoing research will continue indefinitely, never-ending, even once sufficient research has been performed to fully enable the initial commercialization stage — C1.0.
  7. Lots of prototyping will be needed to complete pre-commercialization. To figure out what an eventual product might really look like.
  8. Vast amounts of experimentation will be needed to discern which ideas work and which ideas don’t.
  9. Plenty of preview releases can be made available along the way in pre-commercialization, comparable to the alpha, beta, and pre-releases which will be made available during commercialization.
  10. Rely primarily on simulation for most prototyping and experimentation, configured to match the target hardware expected at commercialization, not current hardware that is not ready for commercialization.
  11. Primary testing of hardware should focus on functional testing, stress testing, and benchmarking — not prototyping and experimentation. Test based on carefully defined specifications. During both pre-commercialization and commercialization.
  12. Prototyping and experimentation should initially focus on simulation of hardware expected in commercialization. Not hardware which is not ready for commercialization.
  13. Late in pre-commercialization, prototyping and experimentation can focus on actual hardware — once it meets specs for commercialization.
  14. Prototyping and experimentation on actual hardware earlier in pre-commercialization is problematic and an unproductive distraction. Distortion of algorithms to work with hardware which is not ready for commercialization. Focus on correct results using simulation.
  15. The initial commercialization stage, C1.0, will be the first commercial product version of a quantum computer — with applications.
  16. There will be ten or more subsequent commercialization stages with incremental features and capabilities. C1.0 will only be the beginning of commercialization and won’t fulfill all promises, which will come in the subsequent stages, C1.5, C2.0, C2.5, etc.
  17. Quantum Ready — The criteria and timing for when a particular organization should get Quantum Ready will be a fielder’s choice based on needs and interests. Some will need to be quite early while others can afford to wait until the technology has stabilized and matured — leaders, fast-followers, and laggards, and everything in between.
  18. A subsequent paper will be needed to provide a detailed roadmap for full commercialization. No commitment is made as to the timeframe for that paper, but it won’t be soon since so much work, years, of pre-commercialization lie ahead of us. That said, there is a fair amount of coverage of commercialization in this paper, likely sufficient for most purposes over the next five years.
  19. Business development during pre-commercialization is rather distinct from business development during commercialization. Focused on technology awareness, familiarization with the technology, prototyping, experimentation, and consulting services, rather than development and deployment of production-quality, production-scale applications and production deployment.
  20. There will be plenty of opportunity for marketing quantum-enabling products during pre-commercialization, such as tools, support software, services, and hardware components needed to build quantum computers.
