When Will Quantum Computing Advance Beyond Mere Laboratory Curiosity?
Although quantum computing has proved feasible to some degree, it has not yet advanced beyond being a mere laboratory curiosity. The primary impediment is its inability to handle production-scale real-world problems and deliver substantial real-world value. This informal paper explores what it will take for quantum computing to become a commercial success, enabling practical applications which solve production-scale real-world problems and deliver substantial real-world value — and achieve a dramatic quantum advantage over classical computing.
Topics to be discussed in this paper:
- What is a laboratory curiosity?
- Criteria from the definition of laboratory curiosity
- Summary of how quantum computing stacks up relative to these criteria
- Short answer: Not soon
- Decent progress, but…
- Much research is needed
- More lab time is needed
- When Will Quantum Computing Have Its ENIAC Moment?
- Maybe an ENIAC moment for each application category
- When Will Quantum Computing Have Its FORTRAN Moment?
- ENIAC was still a laboratory curiosity
- Possibly between the ENIAC and FORTRAN moments
- The proof point for quantum computing
- A few good applications
- What can’t a quantum computer compute?
- No, not all compute-intensive applications are appropriate for quantum computing
- Intellectual property (IP) — boon or bane?
- Open source is essential
- Not yet a candidate for release from the lab
- Yes, quantum computing remains a mere laboratory curiosity
- No, quantum computing is not ready for prime-time production-scale applications
- Hedge: Maybe some narrow niche applications
- All of this could change with just a few key breakthroughs
- Moment of truth — imminent deployment
- Actual deployment vs. mere intent
- Evaluation of deployment
- Okay, but When?
- Milestones from today to post-laboratory curiosity
- Moore’s law for qubits
- Quantum ready
- Quantum insurance
- Setting expectations
- Papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups
- Quantum volume
- Beyond success of consultants
- Critical mass of interest, but…
- Need a critical mass of technology
- Technological deficits
- The greatest challenges for quantum computing are hardware and algorithms
- Not clear what the ideal qubit technology will be
- The ideal qubit technology has not been invented yet
- Hybrid applications — how best to blend quantum and classical computing
- Google — no commercial machine yet
- Microsoft and Intel — no machines yet
- Honeywell — an initial splash, but follow-through needed
- Rigetti — losing steam?
- IonQ — some initial progress, but waiting for follow-through
- IBM — lots of machines, but still too limited
- Other machine vendors
- How many qubits does a production system need?
- Subsidiary technologies
- Need a critical mass of algorithms and applications
- Need a critical mass of algorithmic building blocks
- Need a critical mass of design patterns
- Need a critical mass of application frameworks
- Is NISQ an obstacle?
- Is quantum error correction needed?
- What if quantum error correction is required?
- Gate fidelity is important
- What algorithm advances are needed?
- Quantum advantage
- Need benchmarks for quantum advantage
- Quantum advantage is mandatory
- There’s no point to quantum computing without quantum advantage
- Quantum supremacy
- Didn’t Google achieve quantum supremacy?
- Which application category will be first to achieve quantum advantage for a production-scale application?
- When will a practical algorithm be implemented for more than 32 qubits?
- Quantum advantage today: true random number generation
- Need for higher performance quantum simulators
- Need for a new model for design of scalable algorithms
- Need to move beyond the lunatic fringe of early adopters
- How scalable is your quantum algorithm or application?
- Do we need a universal quantum computer?
- Quantum computer as a coprocessor
- Tools and support software are essential
- Need for Principles of Operation documentation and specifications
- Need for detailed personas, use cases, access patterns
- How are companies using quantum computing today?
- Isn’t Monte Carlo simulation good enough for most applications?
- Quantum-inspired algorithms
- What about D-Wave Systems?
- Is money a significant issue at all?
- Is more venture capital needed?
- Limited talent pool
- Repurpose existing technical talent
- Obsession over Grover search algorithm even though not exponential advantage
- Shor’s algorithm is still cited and treated as if it was already implemented even though very impractical
- Can we expect quantum computing to cure cancer, hunger, poverty, and inequality?
- Never underestimate the power of human cleverness and intuition
- Would Rip Van Winkle miss much if he slept for the next 2 years? 5 years?
- Will two or three years be enough? Very unlikely
- Some say three to five years, but I don’t see it
- Five years? Outside possibility, but still unlikely
- Seven years? Maybe, if researchers finally get their acts together
- Ten years? One would hope, but on the verge of being a zombie technology
- Fifteen years? Seems like a slam dunk, but you never know
- Twenty years? If not by then, maybe never?
- Prospect of a quantum winter?
- Mixed messages from the National Science Foundation (NSF)
- Ethical considerations
- Regulatory considerations
- What’s next?
What is a laboratory curiosity?
What is a laboratory curiosity? I wrote another informal paper to define and explore the concept:
Quoting from that paper, I offered a simple definition:
- A laboratory curiosity is a scientific discovery or engineering creation which has not yet found practical application in the real world.
That’s correct, but the vague language “practical application in the real world” doesn’t definitively tell us whether quantum computing is or isn’t a laboratory curiosity. How practical is practical? People have written quantum circuits to compute solutions to a variety of practical problems, but only to a limited degree, at a very small scale.
That paper offers a more nuanced definition as well:
- A laboratory curiosity is a scientific discovery or engineering creation which has not yet been effectively transformed into a product or service which economically delivers substantial real-world value and which can be used outside of the laboratory. It still requires the careful attention of the research technical staff for its use, and faces significant ongoing research and development. It promises to deliver fantastic benefits, but has not yet done so, and doesn’t yet have a very short-term path to doing so. It is not yet ready for prime time — for production-scale real-world applications. A new technology needs to offer clear, substantial, and compelling benefits of some sort over existing technology, whether they be new functions and features, performance, less-demanding resource requirements, or economic or operational benefits. There may well be papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups focused on the technology and its potential applications, but they may focus more on academic topics and evaluation and experimentation — proofs of concept and prototypes — rather than focusing on actual delivery of substantial real-world value — they are necessary but not sufficient to advance beyond mere laboratory curiosity.
The key operative phrase in there is:
- delivers substantial real-world value
That’s where we want and need to be to be able to credibly claim that quantum computing is clearly no longer a mere laboratory curiosity.
That’s the core criterion for no longer being a laboratory curiosity.
Criteria from the definition of laboratory curiosity
From that longer definition we now have some more concrete criteria, such as:
- product or service
- economically delivers
- delivers substantial real-world value
- can be used outside of the laboratory
- requires the careful attention of the research technical staff for its use
- faces significant ongoing research and development
- promises to deliver fantastic benefits, but has not yet done so
- … doesn’t yet have a very short-term path to doing so
- not yet ready for prime time
- … for production-scale real-world applications
- offers clear, substantial, and compelling benefits over existing technology
- … new functions
- … new features
- … performance
- … resource requirements
- … economic benefits
- … operational benefits.
- papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups — but not focusing on actual delivery of substantial real-world value.
So how does quantum computing stack up against these criteria?
Summary of how quantum computing stacks up relative to these criteria
Quantum computing does satisfy some of those criteria, to some extent:
- Rigetti and IBM are offering remote access over the Internet, but the machines themselves remain in laboratory environments.
- Almost anybody can use the systems, remotely, although no mere mortal outside the laboratories can operate or maintain the systems themselves.
- Product or service? Depends how you want to define that. Is remote, shared access sufficient? As a service, yes, as a product, no.
- Economically delivered? Well, it’s essentially free right now since the vendors are giving it away, but for production-scale use we have no hint as to what it might cost. I have no expectation that quantum computing at a production scale will be free. Essentially, the hardware vendors are currently eating 100% of the costs — I wouldn’t call that economic delivery.
- Can it be used outside the laboratory? Well, indirectly, using remote access, but the machines themselves remain closeted in the laboratories.
- Quantum computers do deliver one small function which classical Turing machines can’t even theoretically offer: true random number generation — it’s inherent in the probabilistic nature of quantum mechanics and quantum computers. Classical computers can generate pseudo-random numbers, but not true random numbers, although special, non-digital hardware can be used to collect entropy from the environment to generate true random numbers. This function is available today, even on the simplest of quantum computers.
- Plenty of papers, conferences, conventions, trade shows, seminars, and meetups, but focused more on academic topics and evaluation and experimentation, such as proofs of concept and prototypes, and setting speculative expectations for speculative future use, rather than delivery of substantial real-world value in the present.
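The contrast drawn above between pseudo-random numbers, environmental entropy, and inherent quantum randomness can be sketched in pure Python. This is a toy illustration, not any vendor's quantum API; the quantum part is just the arithmetic of a Hadamard gate on |0>:

```python
import math
import random
import secrets

# Pseudo-random (classical, deterministic): the same seed always
# reproduces the exact same "random" bit stream.
prng = random.Random(2024)
stream_a = [prng.getrandbits(1) for _ in range(16)]
prng = random.Random(2024)
stream_b = [prng.getrandbits(1) for _ in range(16)]
assert stream_a == stream_b  # reproducible, hence not truly random

# Environmental entropy (classical): the OS collects noise from
# non-digital hardware sources to approximate true randomness.
os_bits = [secrets.randbits(1) for _ in range(16)]

# Quantum (inherent): H|0> = (|0> + |1>)/sqrt(2), and measuring that
# state yields 0 or 1, each with probability |1/sqrt(2)|^2 = 0.5.
amp = 1 / math.sqrt(2)
p0, p1 = abs(amp) ** 2, abs(amp) ** 2
assert abs(p0 - 0.5) < 1e-12 and abs(p1 - 0.5) < 1e-12
```

The point of the sketch: the classical pseudo-random stream is fully determined by its seed, while each quantum measurement is an honest coin flip by the nature of the physics.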
The more problematic aspects include:
- Quantum advantage is mandatory, but not yet achieved. There is only one benefit that quantum computing promises to offer — dramatically greater performance than even the best classical computers — known as quantum advantage. Without quantum advantage, quantum computers have no inherent advantage over classical computing. Quantum supremacy is a key promise of quantum computing as well — performance so incredible that computations become possible which were not possible on classical computers at all, even given years, decades, or centuries of running time — though it is not mandatory in the early stages. Are we there yet? No, not even close, for either quantum advantage or quantum supremacy.
- A number of the announced hardware entrants in the sector have not yet fielded working systems, even in the laboratory. (Intel, Microsoft, Xanadu?)
- Some of the machines up and running are not yet available to the outside world, either at all or generally other than via special arrangements. (Google, IonQ, Honeywell)
- Robust documentation and detailed specifications are not generally available.
- Not yet delivering substantial real-world value, and no clear pathway to that end.
- Facing significant ongoing research and development. Still only in the early stages, far short of the hardware needed for production-scale applications.
- Promises to deliver fantastic benefits, but has not yet done so
- … doesn’t yet have a very short-term path to doing so
- not yet ready for prime time — not even close
- … for production-scale real-world applications
At a more detailed level, the more problematic aspects include:
- Hardware — not enough qubits.
- Hardware — poor fidelity — coherence, gate errors, measurement errors.
- Hardware — no clear sense of whether quantum error correction is essential or whether NISQ will be good enough.
- Hardware — minimal circuit depth.
- Hardware — quite a few of the announced machines are not yet available.
- No interesting level of algorithmic building blocks for building applications.
- Little in the way of design patterns.
- Need for application frameworks. Minimize reinvention of the wheel by each application.
- Few examples of realistic algorithms (quantum circuits). Mostly proof of concept, not production-scale.
- Proof of concept and prototype stage. People are attempting to develop algorithms and applications for real-world use cases, but they are still at the proof of concept and prototyping stage — very limited input data, limited function, and no clear path for scaling up to production-scale real-world use cases.
- No reasonable high-level programming model. Forced to work at the level of the raw physics — Bloch sphere rotations and unitary matrices.
- No easy way to transform classical applications or algorithms to quantum computing.
- Quantum advantage — still no meaningful examples of quantum algorithms for a practical real-world application actually outperforming classical solutions in a truly dramatic manner.
That’s not meant to be an exhaustive list, but should illustrate the depth of the problem.
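To make the "raw physics" point in that list concrete, here is a minimal sketch, in plain Python, of what programming at the level of unitary matrices actually looks like. This is a toy single-qubit simulator, purely illustrative, not any real quantum SDK:

```python
import math

# A qubit state is a 2-vector of complex amplitudes; |0> = (1, 0).
state = [1 + 0j, 0 + 0j]

# A gate is a 2x2 unitary matrix. The Hadamard gate puts |0> into an
# equal superposition of |0> and |1>.
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def apply(gate, psi):
    """Matrix-vector multiply: one gate application to one qubit."""
    return [gate[0][0] * psi[0] + gate[0][1] * psi[1],
            gate[1][0] * psi[0] + gate[1][1] * psi[1]]

state = apply(H, state)

# Measurement probabilities are squared amplitude magnitudes.
probs = [abs(a) ** 2 for a in state]  # roughly [0.5, 0.5]

# H is its own inverse: a second Hadamard returns the state to |0>.
back = apply(H, state)
```

Every operation on today's gate-model machines bottoms out in matrix algebra like this; there is nothing resembling the variables, loops, and conditionals of even the simplest classical programming language.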
Overall, I would say that the major impediment is twofold:
- No clear picture of where we’re really going — what will a quantum application look like when quantum computing is ready for prime-time, production-scale real-world applications? Quantum error correction? Post-NISQ? Algorithmic building blocks? Design patterns? Application frameworks? High-level programming model? Quantum-specific programming language(s)?
- In what time frame can we expect any or all of that? Five to ten years? Sooner? Much sooner? Later? Much later? What sort of roadmap?
How exactly is any real-world organization supposed to plan for such a massive degree of uncertainty? Other than maybe simply saying: let’s just wait a few more years and check back to see if things have settled down. Or, buy into experimentation and evaluation alone, but accept that development and deployment of production-scale practical real-world quantum applications is not in the cards for the next few years, if not substantially longer.
Short answer: Not soon
Despite all of the hype, attention, interest, and enthusiasm, quantum computing is not even close to being ready to advance beyond being a mere laboratory curiosity.
The rest of the paper will discuss all of the relevant issues and obstacles.
Decent progress, but…
Yes, we’re seeing a lot of progress — both hardware and software.
We’ve come a long way. No question about that.
But we still have a long way to go. A very long way to go, in fact. Much research is needed, both basic research and applied research.
Incremental progress alone doesn’t advance you beyond status as a mere laboratory curiosity. You need to get to the stage of delivering substantial real-world value. We’re not there yet.
Evaluation and experimentation, proofs of concept, and prototypes are not the real-world value that is sought. Most of these efforts are internal processes that have internal value but don’t deliver commercial, business, or organizational value.
Quantum computing won’t advance definitively beyond being a mere laboratory curiosity until deployment of actual production-scale real-world applications.
Meanwhile, a lot more progress is needed.
Even so, even a lot more progress will not necessarily guarantee success at advancing beyond the status of mere laboratory curiosity.
Much research is needed
Much research has been done over the past 25 years, but much more is needed.
- Theoretical research. I’m not persuaded that all of the needed theory has been fully elaborated, especially when it comes to programming models and algorithmic building blocks, as well as scaling beyond a few dozen qubits to hundreds, thousands, and possibly even millions. Yes, there’s a lot of applied research needed, but it should rest on a much firmer bedrock of theory than I perceive at present. One example that concerns me is the granularity of phase: quite a few algorithms treat phase as if it were an infinitely fine continuous value. Both theory and basic research are needed to determine what assumptions algorithm designers can safely make about the granularity of phase.
- Basic research. Additional quantum phenomena which can be exploited for qubits. Controlling noise, errors, coherence, and environmental interference. Algorithm research at a basic level.
- Applied research. Large ensembles of qubits and their connectivity. Engineering as well as science — how to actually build working qubits, ensembles of qubits, and entire quantum computers. Development of algorithmic building blocks, higher-level programming models, and design patterns and application frameworks to serve as a foundation for algorithm designers and application developers.
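The phase-granularity concern raised above can be illustrated with a toy model. Assume (and this is purely an assumption for illustration, not a statement about any real hardware) that a machine can only realize phase angles in steps of 2π/2^b for some number of bits b. Then a phase gate with a "continuous" angle gets snapped to the nearest representable step, introducing error:

```python
import cmath
import math

def phase_gate(theta, psi):
    """Apply a phase rotation: multiply the |1> amplitude by e^(i*theta)."""
    return [psi[0], psi[1] * cmath.exp(1j * theta)]

def quantize(theta, bits):
    """Snap theta to the nearest multiple of 2*pi / 2**bits -- a toy
    model of hardware with only `bits` bits of phase resolution."""
    step = 2 * math.pi / (2 ** bits)
    return round(theta / step) * step

# Equal superposition, then a phase rotation by 1 radian -- an angle
# many algorithm designs implicitly assume is exactly realizable.
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]
theta = 1.0

exact = phase_gate(theta, psi)
coarse = phase_gate(quantize(theta, 4), psi)   # only 16 distinct phases
fine = phase_gate(quantize(theta, 20), psi)    # ~6e-6 radian resolution

err_coarse = abs(exact[1] - coarse[1])
err_fine = abs(exact[1] - fine[1])
assert err_fine < err_coarse  # more phase bits, closer to the ideal gate
```

Whether real qubits behave like the fine case, the coarse case, or something else entirely is exactly the kind of question that needs both theory and basic research.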
How close are we to having all of the research questions answered so that all that is left is mere engineering? Not very close at all, it seems. There is no significant evidence that we are close.
Another 5 years of research? 10 years? 15 years? 20 years? Nobody really knows, which is the point and a tell that the technology is still a mere laboratory curiosity.
More lab time is needed
This is mostly related to the need for more research in general, but simply to highlight the point that many areas of quantum computing need to spend a lot more time in the lab before being ready to be considered for application to real-world problems and release to use in the real world.
Sometimes solutions to problems and issues have been found, but experimentation in a controlled (lab) environment is needed to validate the solutions with reasonable care and to iterate on refined solutions in a reasonably methodical manner.
Some problems need more than simply a list of issues to be addressed; they need the passage of sufficient elapsed time in the lab to shake out even issues of which we may not yet be aware.
Quantum computing definitely needs a lot more time in the lab. Years, for sure. Hopefully not many decades, but it’s too difficult to know for sure right now whether more than a decade of additional research may be needed. That’s how little visibility we have into how far we are from the light at the end of the tunnel.
When Will Quantum Computing Have Its ENIAC Moment?
When can we expect quantum computing to advance to a stage comparable to the public unveiling of ENIAC in 1946, when the future finally arrives and becomes now? That is, when will a quantum computer finally be capable of solving a substantial, nontrivial, real-world computing problem with nontrivial amounts of data, rather than offering merely yet another promise and hint of a future to come, some day, but not real soon?
To be clear, the ENIAC moment for quantum computing might not yet mark the moment when quantum computing necessarily advances from being a mere laboratory curiosity, but it is a necessary milestone along the way.
A simple definition:
- ENIAC moment. The stage at which a nascent technology is finally able to demonstrate that it is capable of solving a significant real-world problem — actually solving a problem and delivering substantial real-world value, in a manner which is a significant improvement over existing technologies. The moment when promises have been fulfilled.
This question is discussed in greater depth in this paper:
Most of the key issues from that paper are discussed or at least mentioned elsewhere in this paper. The key issues are:
- Sufficient hardware capabilities.
- Sufficient algorithm sophistication to solve a real-world problem.
- Sufficient algorithm sophistication to solve the problem in a way that is dramatically superior to classical solutions.
- Sufficient application development sophistication to put the whole application together.
In that paper I discuss timing and suggest five to seven years for the ENIAC moment for quantum computing.
In short, that’s the best estimate I can give as to the earliest time when quantum computing might begin to advance beyond mere laboratory curiosity.
It is possible that the ENIAC moment might mark the transition beyond mere laboratory curiosity, but there is only a modest chance of that at best. It is likely that even at the ENIAC moment, quantum computing will still not be sufficiently capable for most true production-scale applications. ENIAC itself showcased one application, but at great expense and with only the most elite talent, and even then the hardware had rather severe limitations. The ENIAC moment will be the starting line, but more progress will still be needed to enable wider adoption.
Even if the ENIAC moment does turn out to be the official moment of the advance, from a practical perspective it may take a bit more time for the idea that quantum computing is no longer a mere laboratory curiosity to really sink in.
Technically, ENIAC was still in its lab when it had its ENIAC moment. It took five years before UNIVAC I became the first commercial computer — designed by the designers of ENIAC. And another year before IBM introduced the IBM 701 computer, IBM’s first commercial computer. There were a number of other computers developed in laboratories in the interim.
In short, the ENIAC moment is essential and necessary, but may not be sufficient to put the final nail in the coffin of the notion that quantum computing is a mere laboratory curiosity.
It is well worth noting that a very elite team of scientists, engineers, and application developers will be needed to accomplish the feat of the ENIAC moment for quantum computing. The technology at that stage will still fall well short of being usable by mere-mortal more-average application developers.
Maybe an ENIAC moment for each application category
The ENIAC moment for a technology technically requires only a single application, but for a technology which applies to multiple application categories, each category deserves its own ENIAC moment.
That’s possible, but it may also be true that when it rains it pours, so that a single application category having its ENIAC moment can very rapidly lead to multiple application categories following suit in rapid succession. There may be a lot of technology, including algorithmic building blocks, design patterns, and algorithms which are common between application categories. The first application out of the gate may be a guide for those who follow. Even if not directly shared, there may possibly be significant knowledge and expertise which can be gleaned by examining the technology used in another application category.
For quantum computing applications we might want to see:
- Characterization of a complex molecule.
- Characterization of a complex chemical reaction.
- Design of a new material.
- Design of a new drug.
- Optimization of a business process.
- A finance application.
- A dramatic advance in machine learning.
Any one of those would be sufficient for the overall ENIAC moment, but until we see solutions in all, most, many, or at least a few of those application categories, it will still feel as if the technology isn’t quite ready to advance beyond being a mere laboratory curiosity.
When Will Quantum Computing Have Its FORTRAN Moment?
If the ENIAC moment for quantum computing establishes the raw technical feasibility of developing quantum applications which can deliver substantial real-world value, the FORTRAN moment will signify the moment when more widespread use of the technology is practical, and ultimately it is that widespread use which signifies that the technology is no longer a mere laboratory curiosity.
The reference to FORTRAN is not intended to be literal as in actually using FORTRAN, but as an abstract metaphor for some collection of software technology which offers the kind of intellectual leverage that the real FORTRAN offered for programmers on early classical computers.
The nature of the FORTRAN moment for quantum computing is discussed in more detail in this paper:
I think it’s fairly safe to say that the FORTRAN moment is the moment most worthy of our attention and focus when contemplating quantum computing as more than a mere laboratory curiosity.
And when might this FORTRAN moment for quantum computing occur? As the paper indicates, that’s very, very unclear. Maybe not for seven to twelve years; call it 9–10 years. That’s depressing, but it is what it is. Those are just very rough estimates, so anything goes, and a lot hinges on whether we get a bunch of dramatic breakthroughs in fairly short order or mostly long stretches of painfully slow incremental progress, only occasionally sprinkled with moderate advances and very rare big leaps.
ENIAC was still a laboratory curiosity
As dramatic an advance as ENIAC was in 1946, it never really made it out of the laboratory. It did indeed perform a number of useful calculations for military applications, but it was eclipsed by a rapid succession of superior computer systems, including commercial products over the next five years.
Technically, I would still classify ENIAC as a laboratory curiosity. Then again, it did deliver significant real-world value for the military until 1955, after it was physically moved out of the laboratory, so you could conclude that it was no longer a true laboratory curiosity. That puts it in the gray area of satisfying a unique and special government need.
Still, ENIAC was indeed great progress and really got the ball rolling. Its creators went on to found a commercial company. And IBM came out with the first in its own long line of computers.
Quantum computing is still not at the stage of an ENIAC moment. Conceptually, we could see a situation where some government agency commissions a project to develop a quantum computer tailored to solving a particular need of that agency, but no such prospect has publicly surfaced to date.
Possibly between the ENIAC and FORTRAN moments
The FORTRAN moment would much more clearly usher in widespread adoption and use of quantum applications, but that’s not the absolute requirement for advancing beyond being a mere laboratory curiosity. Some intermediate stage between the ENIAC moment and the FORTRAN moment might actually be the sweet spot, where leading edge developers are actually able to build and deploy production-scale quantum applications which deliver substantial real-world value even if many would-be developers are still left out in the cold.
As noted earlier, while the ENIAC moment itself might be sufficient to advance beyond being still a mere laboratory curiosity, more progress may still be needed. It may or may not be necessary to progress all the way to the FORTRAN moment before application development and deployment becomes easier and more routine.
More likely it will be some stage between the ENIAC moment and the FORTRAN moment. When might that moment be? I can’t say for sure now, but I’m sure that we’ll know it when we see it.
The proof point for quantum computing
As mentioned in the earlier paper, What Makes a Technology a Mere Laboratory Curiosity?, there will likely be a very clear turning point when it suddenly becomes crystal clear that the technology has been proven to work in some significant fashion and that it is only a matter of time before the remaining pieces of the puzzle fall into place. This would be the so-called proof point.
The ENIAC moment is a great candidate for the proof point. A great moment of clarity.
It’s possible that the proof point comes even earlier than the full ENIAC moment.
It’s not likely that we would have to wait for the full FORTRAN moment to get our proof point, but it’s possible.
More likely the proof point will come somewhere between the ENIAC moment and the FORTRAN moment. Whether it is exactly halfway between those two milestones, or closer to one than the other is completely unknown at this stage.
The proof point will mark the advance of quantum computing beyond mere laboratory curiosity in a technical sense — the technology is ready. The market, users, and businesses may still need a little more time for this sea change to sink in, but for advanced and leading-edge users it’s full steam ahead once the proof point is reached.
But if users and organizations really need FORTRAN-moment technology to grease the skids for them, there might be a gap between the proof point of the advanced users and adoption by more average users. Or not, if the proof point comes at or only shortly before the FORTRAN moment.
A few good applications
One possibility is that instead of immediate widespread commencement of application development from scratch, maybe a lot of organizations wait until a number of relatively generalized or standardized applications have been developed, so that they can copy, mimic, or even directly use quantum applications developed by other organizations.
A distinct scenario is that a few killer applications seed the market, so that the average organization can quickly jump into quantum computing without the need to grow their own elite application development group. Maybe they hire a consultant to adapt, configure, test, and deploy some existing quantum application. No muss, no fuss.
For example, maybe there will be a few generalized optimization applications. Or a few generalized quantum chemistry or material design applications.
Some examples of applications and application categories for which quantum computers may be appropriate can be found in this paper:
What can’t a quantum computer compute?
For all of the great applications of quantum computing, is there anything it can’t do? Well, quite a bit. Quantum computers are appropriate for any application where quantum parallelism can be exploited, but generally that means no complex logic. You must reduce your computation to the raw physics supported by qubits. Very little of what you can do in a classical programming language, even BASIC, can be readily transformed into a quantum program.
For some examples, read this paper:
No, not all compute-intensive applications are appropriate for quantum computing
As noted in the preceding section and the linked paper, exploitation of quantum computing — quantum parallelism — requires a relatively simple algorithm which performs a relatively simple computation over a very large solution space. Any complex logic is out of the question. Many (most?) of the compute-intensive applications currently running on classical computers have relatively complex logic which cannot be readily transformed into the simple raw physics operations of qubits.
That is not to say that more sophisticated analysis of compute-intensive classical applications couldn’t identify new and novel methods to map the problem space into a solution space which is much more compatible with the raw physics of qubits, but that is a nontrivial task and there is no guarantee that such new and novel methods can be identified for all or most classical compute-intensive problems.
The level of difficulty of this problem dramatically reduces the number of candidate applications which could be developed to exploit quantum computing, presenting a challenging impediment in the path to advancing quantum computing from being a mere laboratory curiosity.
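As a toy illustration of the kind of computation that does fit this mold (a simple, uniform operation applied across an exponentially large solution space), here is a pure-Python sketch of an oracle-style "marking" step over a uniform superposition, the pattern at the heart of Grover-style search. Note that the classical simulation must store all 2^n amplitudes explicitly, which is itself a hint of where any quantum advantage would have to come from:

```python
import math

def uniform_superposition(n):
    """Statevector after a Hadamard on each of n qubits: every one of
    the 2**n basis states gets amplitude 1/sqrt(2**n)."""
    amp = 1 / math.sqrt(2 ** n)
    return [amp] * (2 ** n)

def mark(state, predicate):
    """Oracle-style step: flip the sign (phase) of every basis state
    satisfying a simple predicate -- one uniform test per state, with
    no complex branching logic."""
    return [-a if predicate(i) else a for i, a in enumerate(state)]

psi = uniform_superposition(10)          # 1,024 amplitudes from 10 qubits
psi = mark(psi, lambda i: i % 7 == 0)    # one simple test per basis state

assert len(psi) == 1024
assert abs(sum(a * a for a in psi) - 1.0) < 1e-9  # still normalized
```

The predicate here must be simple enough to express as quantum gates; an application whose core loop is riddled with data-dependent branching and heterogeneous logic has no obvious mapping into this form, which is precisely why most classical compute-intensive applications are not candidates.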
Intellectual property (IP) — boon or bane?
Intellectual property (IP) such as patents can cut both ways. The prospect of proprietary advantage is a fantastic incentive. But open source can be a huge advantage as well. If too much of the key technologies of quantum computing are locked up due to IP protections, innovation and adoption can be stifled or delayed.
There’s no problem with IP at present, and no hint of an imminent problem, but it is a potential issue to keep an eye on.
Open source is essential
Open source technology, both software and hardware, is the antithesis of private intellectual property. Access to source code and designs for algorithms, applications, tools and support software, and hardware can greatly accelerate progress for quantum computing. Researchers and product and application developers can rapidly build on the work of others.
Without access to open source technology, progress will be impeded or delayed.
Not yet a candidate for release from the lab
A two-step process is needed before committing to release a technology from the lab:
- Raise the prospect of release to begin considering whether the technology is ready for release from the lab.
- Go through a vetting process to determine if that preliminary decision is worthy of being finalized.
Personally, I don’t feel that quantum computing is even remotely close enough to consider raising the prospect of it being a candidate for release from the lab.
Yes, quantum computing remains a mere laboratory curiosity
Despite all of the progress over the past 25 years, quantum computing remains a mere laboratory curiosity. The many technological advances and many small proofs of concept and prototypes simply haven’t reached the critical mass to deliver substantial real-world value for production-scale real-world applications.
No, quantum computing is not ready for prime-time production-scale applications
Despite great progress over the past five years, quantum computing is not even close to being ready for prime-time production-scale applications.
Quantum computing is still very much a mere laboratory curiosity.
It is indeed possible to do some limited experimentation, limited small-scale proofs of concept and prototypes, but nothing that would be suitable for practical application development and deployment.
Will quantum computing be ready for prime-time production-scale applications any time soon? No, absolutely not. Not this year. Not next year, or the year after. Not three or four years either. And most likely not even in five years, although that’s getting too far out to even forecast with any sense of reality. Simply no time soon that would be relevant for the planning horizons for most organizations.
Hedge: Maybe some narrow niche applications
Although I’m confident that quantum computing is not yet ready for general production-scale deployment, it may turn out that there might be some very narrow niche applications for which even present-day (or near future) NISQ computers in the lab are sufficient for those particular niches.
I am unaware of any such niches today, other than generating true random numbers, but it’s not beyond the bounds of reason to speculate that some may come into existence while we await the true moment when quantum computing really does develop the capabilities required for delivery of production-scale real-world applications.
But just because I am open to that possibility does not suggest that anyone should hold their breath in anticipation of that outcome.
All of this could change with just a few key breakthroughs
Much of what is said in this paper is based on progress to date on quantum computing. Granted, it’s always risky to project the future based on the past, but it’s equally risky to project the future on mere speculation. And it’s definitely risky to project the future based on over-rosy optimism.
That said, given that my personal risk is that I may be projecting a somewhat pessimistic view of the short to medium-term future of quantum computing, I feel obligated to hedge and acknowledge that all it might take is a few key breakthroughs to completely change the picture.
What key breakthroughs? Unknown and unknowable. I’m simply allowing for the possibility that there could be dramatic breakthroughs. Although you could easily come up with a substantial list of desirable breakthroughs simply by reading through this paper, such as a lot more qubits with a lot higher fidelity and a much higher-level programming model, for starters. Or some radically new technology for fabricating qubits which changes everything.
Moment of truth — imminent deployment
The ultimate moment people are waiting for is the moment of truth, the moment when a quantum computer is ready for imminent deployment and is about to be switched on or placed online and the intended application can commence. Will it work as hoped and planned, or stumble badly, or sort-of work but in a mediocre manner, or function properly but fail to deliver dramatic quantum advantage?
If significant problems are encountered, it may be back to the drawing board. Or time to hunt for a quick fix.
This is almost the moment when the rubber hits the road and it becomes crystal clear whether quantum computing has indeed advanced beyond being a mere laboratory curiosity.
Even if this moment is a failure for a particular quantum computer, there should be other quantum computers which can follow and maybe one of them will be more successful.
But such a moment of truth has not yet occurred, is not imminent, is not in the relatively near future, and nobody really knows how many years or even decades it could take.
Actual deployment vs. mere intent
Imminent deployment is certainly a fantastic milestone to achieve, but it is actual deployment which is the true milestone, the true culmination of the development process for a quantum computer and applications. The true moment when the rubber actually does hit the road and it becomes crystal clear whether the technology has indeed advanced beyond being a mere laboratory curiosity.
Evaluation of deployment
Even then, once deployment has occurred, there is still more work to do. Deployment simply means that the quantum computer and applications are available for use. There is still the need to see actual evidence of use by actual customers and users in meaningful and relevant use cases. Does it work as hoped and planned, or stumble badly, or sort-of work but in a mediocre manner? Is the anticipated quantum advantage demonstrated as expected?
Once you’ve performed an evaluation of the deployment, only then can you pass judgment on whether quantum computing and the chosen applications have indeed advanced beyond being a mere laboratory curiosity.
Okay, but When?
I’ve covered so many of the issues and obstacles, but the headline question remains unanswered: When? When will quantum computing finally advance from being a mere laboratory curiosity?
Besides the simple fact that I do not know (and neither does anybody else), what meaningful statements can we make about timing?
- A lot of years of research are needed. 5? 7? 10? 12? 15? 20? Take your pick.
- Much basic hardware research is needed. How to build a better qubit. How to build a large number of qubits. How to connect them all.
- Much more basic research in quantum algorithms.
- Much more basic research in analyzing real-world problems and transforming them into a form that is amenable to the programming model of quantum circuits.
- Much more engineering research into building and controlling machines with large numbers of qubits.
Okay, so… When?
I’d dearly like to say that we could see the ENIAC moment for quantum computing in two years and the FORTRAN moment in five years, but… at present both goals seem out of reach.
Five to seven years for the ENIAC moment? Even that might be a stretch and overly optimistic, but believable, at least.
Another two to five years beyond the ENIAC moment to reach the FORTRAN moment — seven to twelve years total? Call it ten years, give or take.
Will the ENIAC moment actually be enough to advance far enough from pure laboratory curiosity to achieving delivery of real-world value? Maybe not. Probably not.
By definition, the FORTRAN moment will be a clear indicator that delivery of real-world value has arrived.
But there may be a middle stage between the ENIAC and FORTRAN moments where there are some niche applications which do in fact deliver substantial real-world value.
- ENIAC moment in 5–7 years.
- Another 2–5 years to reach FORTRAN moment.
- 7–12 years total.
- Call it 10 years to have a round number.
- Maybe 5–7 years if we catch a bunch of lucky breaks.
- Maybe 12–15 years if we run into too many walls.
- Bottom line: 5 years as a most optimistic estimate. But don’t hold me to that!
Milestones from today to post-laboratory curiosity
It feels premature to suggest a detailed and credible sequence of milestones — let alone dates — for how to advance from today to the moment of no longer being a mere laboratory curiosity, but some possible milestones might include:
- Numbers of qubits. 32, 48, 64, 72, 80, 96, 100, 128, 192, 256, 512, 1024, 2048, 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1M, 2M, 4M, 16M.
- Algorithm improvements. Well beyond today.
- Advanced, high-level programming model.
- Sophisticated algorithmic building blocks.
- Design patterns. Some general. Some category or domain-specific.
- Application frameworks. Some general. Some category or domain-specific. But how much of this is needed for the ENIAC moment?
- Reached the ENIAC moment. A first credible production-scale real-world quantum application.
- High-level quantum programming language. Conceived and under development with preliminary experimentation and evaluation.
- Reached the FORTRAN moment. Widespread use, although applications under development may not yet have been deployed.
- Done. Quantum computing is no longer considered a laboratory curiosity. Widespread use and a significant number of production-scale real-world quantum applications.
At that stage, quantum computing will have arrived and be ready for mainstream computing. Classical computing will still be quite relevant for most applications, but quantum computing will have enabled a whole new level of computing.
Moore’s law for qubits
How many years may it take to advance to a desired number of qubits? I have my own variation of Moore’s Law which basically says that qubit capacity of quantum computers will double every one to two years. Put simply:
- Qubit count of general-purpose quantum computers will double roughly every one to two years, or roughly every 18 months on average.
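Under that rule of thumb, a quick back-of-the-envelope projection is possible. The starting point of 53 qubits and the 18-month doubling period in the sketch below are assumptions for illustration, not forecasts:

```python
import math

DOUBLING_MONTHS = 18   # assumption: the rule-of-thumb doubling period above
START_QUBITS = 53      # assumption: roughly the largest current machines

# Years needed to grow from START_QUBITS to each target at that doubling rate.
for target in (128, 256, 1024, 1_000_000):
    doublings = math.log2(target / START_QUBITS)
    years = doublings * DOUBLING_MONTHS / 12
    print(f"{target:>9,} qubits: ~{years:.1f} years")
```

Notably, this simple model puts 1K qubits roughly six to seven years out, consistent with the five-to-seven-year ENIAC-moment estimate above, while a million qubits would be two decades away.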
Read the details in this paper:
Quantum Ready
There is this marketing notion of Quantum Ready, that the technology of quantum computing may not be ready today or even in the next few years, but that due to the steep learning curve organizations must endeavor to get started now with education, training, experimentation, and proofs of concept and prototyping so that they will be up to speed and ready to hit the ground running when the required technology finally does become available.
As IBM put it almost three years ago (Getting the World Quantum Ready):
- We are currently in a period of history when we can prepare for a future where quantum computers offer a clear computational advantage for solving important problems that are currently intractable. This is the “quantum ready” phase.
- Think of it this way: What if everyone in the 1960s had a decade to prepare for PCs, from hardware to programming over the cloud, while they were still prototypes? In hindsight, we can all see that jumping in early would have been the right call. That’s where we are with quantum computing today. Now is the time to begin exploring what we can do with quantum computers, across a variety of potential applications. Those who wait until fault-tolerance might risk losing out on much nearer-term opportunities.
Even accepting IBM’s overly-buoyant optimism, they’re suggesting seven more years until quantum computing is really here and now and ready to use for production applications.
I would simply suggest that being Quantum Ready does not mean that quantum computing technology has indeed advanced beyond being a mere laboratory curiosity.
In fact, I would go further and assert that the very appeal of the Quantum Ready concept is fairly compelling evidence that the technology is nowhere near ready to deliver substantial real-world value, and hence should most definitely be considered a mere laboratory curiosity, for now and the indefinite future.
Quantum insurance
Quantum insurance is essentially the same concept as Quantum Ready but putting emphasis on the risk and potential cost and lost revenue and competitive disadvantage from being left behind if quantum computing somehow manages to surge ahead without you noticing or paying attention.
The open question is really what level of resources should be deployed with no sense of whether or when those resources will pay for themselves.
You have six basic choices:
- Build out a large team, paying top dollar for elite technical staff, year after year after year, with no visibility on timing of payoff. Pray that your bosses are okay with such extreme spending.
- Dedicate a small team to simply keep an eye on the emerging sector, raising the flag when the technology is finally on the verge of being ready.
- Assign a fairly small number of senior technical staff to monitor the emerging sector on a part-time basis. Minimal cost, but more of a distraction.
- Hire a consulting firm to brief you on the technology, at intervals.
- Hire a consulting firm to outsource development of a small number of exploratory research projects to determine if the technology is close to being ready for use.
- Do a little light reading (or attending seminars) periodically to monitor the field, but don’t even think about expending significant resources on the other five options until the technology finally does seem on the verge of practical application.
This is not that different from Quantum Ready, but with a more intensive focus on concern over being left behind.
Part of the basis for quantum insurance is the belief that the technology is likely going to be ready in three to five years. Many organizations can buy into that as a reasonable planning horizon for new advanced technologies.
But if you believe as I do that quantum computing won’t be ready for practical application until well after that three to five year window, it just doesn’t make a lot of sense to expend such significant resources, other than options #3 (part-time staff to monitor) or #6 (light monitoring).
In short, quantum insurance is further evidence that quantum computing hasn’t advanced beyond being a mere laboratory curiosity and is not likely to do so any time soon.
By all means, check up on the sector periodically, but there is no need to commit too heavily to it (besides basic research) any time soon.
Setting expectations
One way or another, people will develop expectations for a new technology. Better to be proactive and set expectations for them. But the challenge is to set expectations in a realistic manner. That’s a huge challenge for quantum computing.
There are two main pitfalls in setting expectations, namely setting them:
- Too low or not at all. There’s no ready and enthusiastic audience or market to take the technology and run with it when it is ready. The technology may end up fizzling and dying off. Not a problem for quantum computing at this stage.
- Too high. Disappointment and outright disenchantment can set in. People may simply walk away in frustration when the technology doesn’t meet expectations and perform as expected. This is a real and looming problem for quantum computing at this stage, not in the sense of the technology failing, but simply that the technology isn’t close to being ready.
It’s a real balancing act.
The good news is that there’s no risk that current expectations for quantum computing are too low. The bad news is that current expectations may be too high, way too high.
The Quantum Ready efforts of IBM and others have assured that expectations are not too low.
Unfortunately, it seems that many people expect that quantum computing will be ready for prime-time practical applications very soon, if not already.
My fear at this stage is not that people won’t be ready once the hardware and algorithms are ready, but that sometime over the next five years many organizations will be increasingly asking — demanding — So, where is it?!! And itching to pull the plug on this developing money pit.
The flip side is that we really do need to accelerate and deploy more resources on research for both hardware and algorithms and applications. So the knock-on risk is that as disenchantment with lack of available hardware (and algorithms) grows, commitment to research and advanced development might wane as well.
In any case, all of this merely confirms that quantum computing is not even close to being ready to advance from being a mere laboratory curiosity.
Papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups
Papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups are all expected for a commercially successful product or service, but although they may be necessary, they are not per se sufficient to prove that a technology is no longer a mere laboratory curiosity. This has already proven true for quantum computing.
These accessories to commercial success may well enable:
- Academic research.
- Proofs of concept.
- Interest in the technology.
- Discussions and interactions among potential users.
But until and unless any of that translates into actually delivering substantial real-world value, the transition from laboratory curiosity has not been accomplished for quantum computing.
The good news is that quantum computing has already generated enough interest to develop a robust cottage industry for papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups.
The bad news is that even with all of this support, the underlying technology of quantum computing still isn’t ready to enable quantum computing to advance from being a mere laboratory curiosity.
All of that said, we should continue to welcome, encourage, and support the further development and flourishing of this nascent cottage industry of papers, books, conferences, conventions, trade shows, seminars, online communities, and meetups.
Quantum volume
Another marketing concept for quantum computers is quantum volume, which provides a somewhat vague and general notion of how powerful a quantum computer is, permitting a rough comparison of two or more quantum computers. Unfortunately, although it allows people to make statements such as “quantum computer ABC is twice as powerful as quantum computer XYZ”, it doesn’t provide any specific, actionable information to either the designers of quantum algorithms or the developers of quantum applications. And for the purposes of this paper, a quantum volume metric tells us little if anything about how powerful a quantum computer will be needed to finally get quantum computing out of the lab and into delivering production-scale real-world value.
The current notion of quantum volume is actually only designed for a modest number of qubits, up to around 50, so it’s not clear that it will be a valid or useful metric once we get to 128, 256, 512, and 1024 and more qubits.
Even if we knew how many qubits and what qubit fidelity were required, there is no easy formulaic method to estimate the quantum volume an algorithm needs. Worse, even if there were, there’s no way to take a quantum volume number, separate from any particular quantum computer, and determine what capabilities that number implies.
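For the arithmetic behind the metric: IBM defines quantum volume as QV = 2^m, where m is the size of the largest “square” circuit (m qubits at depth m) the machine can execute reliably. The toy model below is a deliberate simplification (the real measurement protocol uses randomized model circuits and a heavy-output test), but it shows why a few high-fidelity qubits can out-score a larger but noisier machine:

```python
def quantum_volume(num_qubits: int, max_reliable_depth: int) -> int:
    """Simplified model of QV = 2^m: the largest square circuit is
    capped by both qubit count and the depth the error rates allow."""
    m = min(num_qubits, max_reliable_depth)
    return 2 ** m

# Six qubits with fidelity good enough for depth >= 6 (Honeywell-style claim).
print(quantum_volume(6, 6))    # → 64
# A hypothetical 28-qubit machine whose noise limits it to depth ~5.
print(quantum_volume(28, 5))   # → 32
```

The depth-limit numbers here are illustrative assumptions, but the asymmetry is the point: quantum volume rewards fidelity as much as qubit count, which is why it says so little about raw capacity.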
In short, the notion of quantum volume, as currently conceived, won’t help us in our quest to understand when quantum computing will advance beyond being a mere laboratory curiosity.
What we know for sure right now is that we will need quantum computers with a lot more qubits, a lot higher qubit fidelity, greater connectivity, and smaller gate and measurement errors.
Beyond success of consultants
Although the technology may not be ready, consultants are always ready. Consultants can make money at any stage of development. Being a mere laboratory curiosity may only enhance the need for knowledgeable consultants.
The opportunities and demand for consultants won’t provide a reliable indicator of whether quantum computing has successfully advanced from being a mere laboratory curiosity.
Critical mass of interest, but…
It’s quite clear that there is a substantial groundswell of interest in quantum computing. A veritable critical mass of interest has been reached, but… the actual technology just isn’t ready yet. All dressed up but no place to go. That won’t be sustainable for too long. How quickly will technology catch up?
Need a critical mass of technology
There are simply too many technological deficits at present to do anything of practical value with quantum computing today. There simply isn’t a critical mass of technology in place today.
It’s simply not possible to achieve a critical mass of technology if there are so many technological deficits. As mentioned earlier, some of the technological deficits are:
- Hardware — not enough qubits.
- Hardware — poor fidelity.
- Hardware — no clear sense of whether quantum error correction is essential or whether NISQ will be good enough.
- Hardware — only minimal circuit depth.
- Hardware — quite a few of the announced machines are not yet actually available.
- No interesting level of algorithmic building blocks for building applications.
- Little in the way of design patterns.
- Need for application frameworks. Minimize reinvention of the wheel by each application.
The greatest challenges for quantum computing are hardware and algorithms
For much more depth on technological deficits, read this paper:
Not clear what the ideal qubit technology will be
The single biggest technological deficit on the hardware front may be that we don’t yet have a handle on what the ideal qubit technology might be. I’m not so optimistic about current superconducting transmon qubit technology, and even trapped-ion qubits may not be enough to get us to the stage where quantum computing is ready to leave the lab and no longer be a mere laboratory curiosity.
The ideal qubit technology has not been invented yet
My personal gut feeling is that the ideal qubit technology has not been invented yet. I could be wrong, but I’m currently not prepared to bet that what we have today will be sufficient for much more than a mere laboratory curiosity.
Granted, there have been a number of interesting ideas floated for new and novel phenomena for fabricating qubits, but that’s part of what I’m talking about — new technologies that may not even be up and running in a lab yet.
And even technologies or phenomena which have not yet even been discovered or invented.
Personally, I’d be partial to using photons at room temperature. Make qubits as small and as undemanding as possible. There is a lot of interest in photonic computing, but this is all basic research, further evidence that quantum computing is nowhere near being ready to transition from being a mere laboratory curiosity.
Hybrid applications — how best to blend quantum and classical computing
Generally, any application utilizing a quantum computer will be a hybrid application, since a quantum computer doesn’t have the capabilities for most application operations, such as I/O, database access, network access, and user interface.
But even once you’ve factored out all of those operations that flat-out cannot be done by a quantum computer, there remains the issue of how to partition the core computations of the application (all of the code which could conceivably be executed on a quantum computer) into classical and quantum parts and then manage the transitions between those parts.
Generally, anything which can trivially and efficiently be done on a classical computer should be.
Generally, nothing should be done on a quantum computer unless there is a clear and compelling quantum advantage for doing so.
But the task of partitioning core computations for an application will rarely be so black and white — there will be lots of shades of gray.
There may be computations which superficially look ideal for a quantum computer, but upon closer inspection have complex logic or require too much data to be technically feasible to execute directly on a quantum computer. There are techniques for dealing with such situations, but they will tend to involve complex tradeoffs with unsatisfying consequences.
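The dominant shape of such partitioning today is the variational loop used by algorithms in the VQE/QAOA family: a classical optimizer runs the control flow, and only a small parameterized cost evaluation is delegated to the quantum processor. The sketch below shows that control flow only; the quantum step is a classical stand-in (the cost function is a made-up placeholder), since the point here is the partitioning, not the physics.

```python
def run_quantum_circuit(theta):
    """Placeholder for the quantum part: prepare a parameterized state,
    measure, and return an estimated cost. Stubbed classically here
    with an assumed toy cost landscape minimized at theta = 1.2."""
    return (theta - 1.2) ** 2

def classical_optimizer(evaluate, theta=0.0, step=0.1, iterations=200):
    """Crude finite-difference gradient descent on the classical side.
    Each `evaluate` call is one round trip to the quantum processor."""
    for _ in range(iterations):
        grad = (evaluate(theta + 1e-4) - evaluate(theta - 1e-4)) / 2e-4
        theta -= step * grad
    return theta

best = classical_optimizer(run_quantum_circuit)
print(round(best, 2))  # converges near the minimum at 1.2
```

Even in this trivial sketch the partitioning question is visible: every evaluation is a costly classical-to-quantum round trip, so deciding what stays classical dominates the design.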
All of this leads to the situation that developing quantum algorithms and quantum applications is not as cut and dried as you might hope and imagine, making it very difficult to effectively use a quantum computer even if you have all of the hardware you need. As a result, even where applications seem obvious, they will tend not to happen as quickly as desired, which means that quantum computing will remain in the lab as a laboratory curiosity for longer than expected.
Google — no commercial machine yet
Google has certainly succeeded in producing a quantum computer in the laboratory, but they haven’t managed to bring that technology to market, nor have they announced any commercial plans or a roadmap.
Google had promised a 72-qubit system a couple of years ago, but only managed to produce a 53-qubit system last year.
So, the Google machine is clearly still a mere laboratory curiosity, although it must be noted that their efforts for their experimental system were quite notable and quite a major milestone.
Now the question for Google is what will they do for an encore machine.
Microsoft and Intel — no machines yet
Both Microsoft and Intel have announced intentions to design and build quantum computers, but so far they haven’t yet produced a machine in a laboratory.
So, neither Microsoft nor Intel is even at the laboratory curiosity stage yet, let alone fielding commercial products.
Honeywell — an initial splash, but follow-through needed
Honeywell has announced ambitious intentions and appears to have a small machine in the lab with ambitious plans to rapidly evolve to larger machines, but so far all they have is a machine in a lab, albeit with remote access.
Their splash included a claim of quantum volume of 64, which sounds great and is double that of IBM’s machines, but they have only six qubits vs. IBM with 28 qubits. They get a much higher quantum volume because their qubit fidelity is much better. The latter is great, but it doesn’t help you if you need more than six qubits.
Honeywell has promised significantly larger qubit capacities in future machines, but that’s the future, not now. Evaluation of a machine or vendor relative to being a mere laboratory curiosity must be about the here and now, not the unspecified future.
So, even Honeywell is still at the laboratory curiosity stage.
Rigetti — losing steam?
Besides IBM, Rigetti Computing was one of the early leaders getting a sequence of machines up and running (in the lab), but lately their momentum seems to have dwindled. They announced intentions to produce a 128-qubit machine “over the next year”, but that was two years ago, and now their most capable machine has 31 qubits.
We’re waiting for their encore.
IonQ — some initial progress, but waiting for follow-through
Before Honeywell’s recent announcement of a trapped-ion quantum computing system, IonQ was the only hardware vendor pursuing this alternative to superconducting transmon qubits. They have some interesting hardware in their labs, but they need to demonstrate some serious follow-through, real soon, lest they begin to lose momentum.
In short, they are still at the laboratory curiosity stage.
IBM — lots of machines, but still too limited
IBM has been working on quantum computing for over 25 years. They currently have, at last count, 18 quantum computers running in their labs. And they are certainly in the lead, but that’s not saying that much in a sector where everybody is plagued with dramatic technological deficits that need to be overcome. IBM continues to make progress, but their machines are still too limited relative to what might be needed to deliver substantial real-world value for production-scale practical real-world applications.
IBM does claim to have a 53-qubit system, although there are no publicly-available specs for it yet. Their most recent announcement was for a 28-qubit system.
A lot of people have experimented with the IBM systems using their API in the cloud, but all of this is mere experimentation or proof of concept work, not serious development of applications for production deployment today or in the very near future.
As such, all of the IBM machines remain mere laboratory curiosities.
Other machine vendors
There are a number of stealth vendors and some relatively new, smaller vendors of quantum computers who claim, or are reported, to be working on new and exotic machine architectures, but the bottom line is that their machines are either still laboratory curiosities or not even laboratory curiosities yet.
How many qubits does a production system need?
At this stage we have no visibility as to how many qubits might be needed to achieve the level of performance, capacity, and quantum advantage needed to demonstrate production-scale real-world applications comparable to achieving the ENIAC or FORTRAN moments of classical computing.
- 128 qubits.
- 256 qubits.
- 512 qubits.
- 1K qubits.
- 2K qubits.
- 4K qubits.
- 8K qubits.
- 16K qubits.
- 32K qubits.
- 64K qubits.
It all depends on so many factors.
Every application and application category may have a different profile for resource requirements, not to mention being dependent on the precise input data.
My default choice is 1K qubits to achieve the ENIAC moment. We’ll have to wait to see about that.
Whether additional qubits would be needed to achieve the FORTRAN moment is unclear. It may depend more on other factors such as coherence time and circuit depth, and the richness and ease of use of algorithmic building blocks and design patterns.
This paper generally couches quantum computing as a singular, monolithic entity, but in reality it is an umbrella concept covering a variety of subsidiary technologies, which may come in three forms:
- Independent technologies for designing and fabricating a quantum computer (e.g., a particular qubit technology.)
- Components which come together to produce a quantum computer. Including software.
- Components which are shared between distinct approaches to designing and fabricating a quantum computer.
In any case, each subsidiary technology may have its own arc and trajectory from concept to final product. Some of the subsidiary technologies may indeed advance beyond being mere laboratory curiosities sooner, even as others remain stuck in the lab a little longer or even a lot longer, or maybe never make it out of the lab at all.
Generally, all of the component technologies of a given quantum computer must come together for the quantum computer to function at all.
Generally, each distinct technology for designing and fabricating a quantum computer (e.g., a qubit technology) can proceed to advance beyond mere laboratory curiosity even if other technologies for designing and fabricating quantum computers remain stuck in the lab. So, for example, superconducting transmon qubit technology can be on a distinct arc separate from trapped-ion qubit technology.
Generally, shared components must come together regardless of the overall technology of the quantum computer.
With quantum computing we have hardware as well as algorithms and applications. Each is a subsidiary technology under the overall umbrella of quantum computing. Each particular quantum computer, each particular algorithm, and each particular application is a distinct subsidiary technology.
Need a critical mass of algorithms and applications
Even once the raw technological deficits have been identified and addressed, there is a need for a critical mass of algorithms and applications which use those algorithms.
Most of the algorithms available today are too primitive, mostly just trying to show that the limited hardware available today can be used to do something, anything useful. Actually having real practical value is well beyond the capabilities of both the hardware and the limited algorithmic building blocks available today. Some — make that many — dramatic advances are needed on the algorithm front.
And even when we reach a critical mass of algorithmic building blocks, design patterns, and algorithms, we still need to see a critical mass of full applications which utilize those algorithms, as well as a critical mass of application frameworks on which to build those applications.
Need a critical mass of algorithmic building blocks
We can’t achieve a critical mass of algorithms and applications without first achieving a critical mass of algorithmic building blocks.
All we have today are raw, primitive quantum logic gates and a random assortment of relatively primitive and hard-coded algorithms.
We need a much richer semantics of algorithmic building blocks from which higher-level algorithms can be composed.
What should these algorithmic building blocks look like? That’s an open research question.
But without a critical mass of algorithmic building blocks quantum computing is likely to remain a mere laboratory curiosity.
Need a critical mass of design patterns
Even with a critical mass of hardware and algorithmic building blocks, we need a critical mass of design patterns for best practice for assembling the building blocks into sophisticated algorithms.
What should these design patterns look like? That’s another open research question.
But without a critical mass of design patterns quantum computing is likely to remain a mere laboratory curiosity.
Need a critical mass of application frameworks
Progress will be slow if every application developer must develop each application from scratch. There will likely be a fair amount of common functions and features in many applications. Much of that common logic can be factored out of each application and transformed into an application framework where each application developer can start by standing on the shoulders of the framework developers and then focus on the unique portions of their own application.
I doubt that there would ever be a single application framework that does everything. There might be different frameworks for different application categories. There might be some generic frameworks as well, such as for variational methods. And there will likely be significant common components shared among the frameworks.
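As a rough illustration of the factoring idea, here is a minimal sketch in Python of what a generic framework for variational-style applications might look like: the framework owns the common optimization loop, and the application developer supplies only a cost function. All class and function names here are hypothetical, and a trivial random search stands in for a real optimizer (which, in a real hybrid application, would be evaluating a parameterized quantum circuit).

```python
import random

# Hypothetical sketch of a generic "variational" application framework:
# the framework factors out the common optimize-and-evaluate loop, and
# each application supplies only its unique cost function. These names
# are illustrative, not from any real quantum SDK.

class VariationalFramework:
    """Common logic factored out of individual applications."""

    def __init__(self, num_params, iterations=500, seed=42):
        self.num_params = num_params
        self.iterations = iterations
        self.rng = random.Random(seed)

    def optimize(self, cost_function):
        """Simple random-search loop standing in for a real optimizer."""
        best_params = [self.rng.uniform(-1, 1) for _ in range(self.num_params)]
        best_cost = cost_function(best_params)
        for _ in range(self.iterations):
            # propose a small random perturbation of the best parameters
            candidate = [p + self.rng.gauss(0, 0.1) for p in best_params]
            cost = cost_function(candidate)
            if cost < best_cost:
                best_params, best_cost = candidate, cost
        return best_params, best_cost

# The application developer supplies only the unique portion:
def my_cost(params):
    # toy cost function; a real application would evaluate a quantum
    # circuit here and derive a cost from the measurement results
    return sum((p - 0.5) ** 2 for p in params)

framework = VariationalFramework(num_params=3)
params, cost = framework.optimize(my_cost)
print(round(cost, 4))
```

The division of labor is the point: the loop, the bookkeeping, and the optimizer all live in the framework, so the application developer writes only `my_cost`.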
What should these application frameworks look like? That’s another open research question.
But without a critical mass of application frameworks quantum computing is likely to remain a mere laboratory curiosity.
Is NISQ an obstacle?
Reliable qubits and gate operations would certainly help a lot, but are noise, errors, limited coherence, and environmental interference the key technical obstacles holding back quantum computing and preventing it from advancing beyond being a mere laboratory curiosity? Sure, to at least some degree, but it’s more complicated than that.
Based on their limited capacity and noisiness, current machines are known as NISQ devices — Noisy Intermediate-Scale Quantum devices.
Will a post-NISQ quantum computer automatically no longer be a mere laboratory curiosity? Not necessarily. Hardware is half the equation, with algorithms being the other half.
So, I don’t see NISQ as an absolute obstacle, but I do see it as a significant impediment to be overcome.
Is quantum error correction needed?
It’s unclear whether quantum error correction (QEC) might be required before quantum computing can successfully make the leap from a mere laboratory curiosity to a commercial product delivering production-scale real-world value.
It does seem clear to me that NISQ machines, as they exist today and for the next year or two or more, most definitely are not sufficient in terms of qubit fidelity.
That said, a significant part of me believes that a few major iterations and many small iterations of improvements to qubit technology might get the hardware to the point where it is the lion’s share of the way to where QEC would get us anyway. Relatively high-fidelity qubits may do the trick.
Let’s not let pursuit of the perfect cause us to lose sight of the value of the good.
We’ll see in a few years. If we can’t get to relatively high-fidelity qubits in 5–7 years, QEC may turn out to be the only viable solution to relatively low-fidelity qubits.
What if quantum error correction is required?
Personally, I suspect that many quantum applications will be able to get by without full-blown quantum error correction (QEC) as qubit and gate fidelity incrementally improve, but I may be wrong. Designers of quantum algorithms and quantum applications, and the companies, laboratories, organizations, and agencies which are committing to utilizing them, need to ask themselves this fundamental critical question:
- What if quantum error correction is required?
Because if the answer is that QEC is indeed required, the implication is that you may have to wait an additional 5–7 years compared to a NISQ solution. That may indeed be the truth for your algorithm or application, but it’s a very hard pill to swallow. Most organizations are looking for solutions now or at least in the near future, and QEC fits neither time frame.
And there’s a lot of uncertainty as to the actual time frame when QEC can be expected.
In any case, in the meantime, just be aware of and transparent about what it is that you and your organization are biting off and committing to, including the uncertainty.
Gate fidelity is important
Even if or when quantum error correction or relatively high-fidelity qubits become available, it will be for naught if the hardware and firmware are unable to make similar improvements in gate fidelity.
Even when an algorithm or quantum circuit explicitly declares what operation is to be performed on a qubit, the current state of affairs is that the hardware and firmware are unable to reliably assure that the operation will be performed flawlessly as requested. If not reliably performed, a quantum logic gate results in what is known as a gate error.
Much improvement is needed and we aren’t even close to being ready to advance from being a mere laboratory curiosity.
What algorithm advances are needed?
Beyond better hardware, algorithms hold the key, but algorithms are not free, cheap, or easy. We need:
- More advanced algorithms.
- More refined algorithmic building blocks.
- Richer programming model.
- Higher-level programming model.
- Design patterns.
- Application frameworks.
- Richer example applications.
These advances would go a good distance towards moving quantum computing beyond being a mere laboratory curiosity. But they may not be enough.
Quantum computing needs:
- Much better hardware.
- Richer support for algorithms.
- Better algorithms.
- Applications based on those algorithms.
- Skill at translating application requirements into applications using quantum algorithms.
So even if we have the first three, it may still take significant time to develop the algorithms and applications needed to deliver substantial real-world value so that quantum computing can finally advance beyond being a mere laboratory curiosity.
Right now, people are struggling just to get quantum computers to function at all for anything beyond the most trivial algorithms, let alone tackle meaningful applications. But the goal, the whole purpose for quantum computing is to achieve quantum advantage — performance that is so dramatic that it far exceeds anything that even the fastest classical supercomputers can achieve. Alas, at present, we still have no meaningful examples of quantum algorithms for practical real-world applications which actually outperform classical solutions in a truly dramatic manner.
Until we have even a single example of a meaningful, real-world quantum algorithm which dramatically outperforms classical algorithms, there’s no need to release quantum computers from the laboratory.
Even once we achieve quantum advantage in the laboratory, it may be some time before the technology is actually ready for prime time. The technology will remain a mere laboratory curiosity for at least that long.
Need benchmarks for quantum advantage
We aren’t even close to being able to decide which algorithms or applications to consider as benchmarks for concluding that a lab-based quantum computer is finally ready to be released into the real world, other than for the types of evaluation and experimentation that we are currently doing.
Quantum advantage is mandatory
The bottom line is that quantum advantage is a mandatory requirement for quantum computing to advance beyond being a mere laboratory curiosity.
To prove that quantum advantage has been achieved, every quantum algorithm and quantum application needs to be able to answer this question:
- Does it deliver a dramatic quantum advantage over the best classical solution?
There’s no point to quantum computing without quantum advantage
Just to hammer the point home, quantum advantage isn’t just mandatory because it’s nice or beneficial, but because it’s the only reason why it’s worth pursuing quantum computing.
If your quantum algorithm or quantum application doesn’t offer dramatic quantum advantage, you’ve got nothing.
Quantum advantage and quantum supremacy are related and sometimes used as synonyms. The key difference is that quantum advantage means exactly that — a quantum solution dramatically outperforms a classical solution to the same problem — while quantum supremacy means that quantum computing can offer a solution where classical computing cannot offer any solution at all, since the computational complexity is too high (exponential) and the computation might take many centuries even if it were attempted.
I wouldn’t say that true, full quantum supremacy is a requirement for advancing quantum computing from being a mere laboratory curiosity. Quantum advantage is definitely required — otherwise there is no benefit to bothering with quantum computing at all. Quantum supremacy would be a distinct bonus. Further, quantum supremacy alone would not be sufficient to advance quantum computing from being a mere laboratory curiosity — actually delivering substantial real-world value is the true goal, even if quantum supremacy might be achievable in some specialized niche areas but with limitations or qualifications which limit its real-world application.
Didn’t Google achieve quantum supremacy?
Yes, technically Google did achieve quantum supremacy, but within a narrow technical niche which at present doesn’t seem to apply to any of the broad categories of applications for which quantum computing is seen as a potential solution. As such, Google’s achievement doesn’t result in quantum computing advancing from being a mere laboratory curiosity. In fact, Google’s achievement itself is the epitome of a laboratory curiosity.
Google’s achievement is indeed a respectable and even notable degree of progress for quantum computing — it simply doesn’t mean that the game is over.
For more on Google’s achievement of quantum supremacy, read this paper:
Which application category will be first to achieve quantum advantage for a production-scale application?
It is completely unknown and unpredictable which application category might be the first to achieve quantum advantage for a production-scale application which delivers substantial real-world value.
That said, optimization is a fair bet, but don’t hold me to it.
When will a practical algorithm be implemented for more than 32 qubits?
Most of the currently published algorithms use a relatively small number of qubits, rarely more than a dozen and very rarely over twenty. It’s simply not possible to get a dramatic quantum advantage using such a small number of qubits. I just saw a paper which uses 28 qubits — that’s the most I’ve seen to date, but still not that large. We need to start seeing quantum algorithms using well more than 32 qubits on a regular basis before we can even begin expecting to see something resembling quantum advantage.
When will this happen? Well, it won’t happen until we have hardware with more than 32 qubits or quantum simulators which can simulate more than 32 qubits.
Google and IBM have both announced 53-qubit quantum computers. Google’s machine is still in the basic research lab and not publicly available. IBM’s machine is nominally available, but I’ve seen no published specifications nor published papers using more than 32 qubits. Google’s quantum supremacy experiment used more than 32 qubits, but this was for a randomly-generated circuit, not for a practical real-world problem.
It may be too soon to see quantum simulators which support efficient simulation of quantum circuits in excess of 32 qubits. In time, yes, but not so soon. One simulator supports 40 qubits, but that’s not the norm today.
Maybe in two to four years we might start seeing algorithms using 32–48 qubits on a regular basis.
Alas, even algorithms utilizing 32 or even 48 qubits would still not be sufficient to advance quantum computing beyond being a mere laboratory curiosity, but it is a necessary stepping stone. A definite and needed milestone.
Quantum advantage today: true random number generation
There is one useful function that a classical Turing machine can’t compute, even theoretically: true random number generation. By definition, Turing machines calculate deterministic results — 2 plus 2 is always 4. Programmers must resort to a variety of clever contortions to even approximate random number generation, at best producing so-called pseudo-random numbers.
Quantum computers, on the other hand, are probabilistic rather than deterministic by nature, able to generate true random numbers trivially.
Granted, special non-digital hardware can be attached to a Turing machine to allow a classical computer program to collect entropy (randomness) from the external physical environment to form true random numbers. This does work, but once again you can’t even come close using solely a classical Turing machine.
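The determinism of classical pseudo-random number generation is easy to demonstrate: seeding a generator with the same value reproduces the identical "random" sequence, which is exactly why such numbers are only pseudo-random. A minimal sketch using Python's standard library:

```python
import random

# A classical pseudo-random generator is fully deterministic:
# reseeding with the same value reproduces the identical sequence.
rng1 = random.Random(12345)
rng2 = random.Random(12345)

seq1 = [rng1.randint(0, 99) for _ in range(5)]
seq2 = [rng2.randint(0, 99) for _ in range(5)]

print(seq1 == seq2)  # prints True: the "random" sequences are identical
```

A quantum measurement, by contrast, has no seed to replay; its outcome is probabilistic by nature.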
This issue is discussed in greater detail in this paper:
Need for higher performance quantum simulators
It’s fairly easy to simulate a small quantum computer, but once the number of qubits gets larger it gets increasingly harder — exponentially harder — since the number of quantum states rises exponentially with the number of qubits — 2^n. Even a mere 20 qubits require 2²⁰ or a million quantum states. 32 qubits would require 2³² or four billion quantum states. 40 qubits would require 2⁴⁰ or a trillion quantum states. That’s approaching the practical limits for today’s quantum simulators. 64 qubits would require 2⁶⁴ or millions of trillions of quantum states, far beyond current systems. And that’s just getting started for getting to the number of qubits needed for practical applications, whether that’s 72, 96, 128, 256, 512, 1024, or even more.
Maybe we could hope to simulate 45 to 55 qubits, and maybe even 64 qubits at a great stretch with some really clever system design.
Also, my suspicion is that 2^n is really the worst case of quantum states assuming all n qubits are constantly in use, but in any realistic algorithm a much smaller number of quantum states might be used. That doesn’t mean that we could always simulate all algorithms for an n-qubit machine, but possibly enough common cases to have reasonable utility.
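The exponential memory cost described above can be tallied directly. A sketch, assuming a full statevector simulator which stores 2^n complex amplitudes at 16 bytes each (two 64-bit floats, a common but not universal representation):

```python
# Memory needed for a full statevector simulation: 2^n complex
# amplitudes, assuming 16 bytes per amplitude (two 64-bit floats).
def statevector_bytes(num_qubits, bytes_per_amplitude=16):
    return (2 ** num_qubits) * bytes_per_amplitude

for n in (20, 32, 40, 64):
    print(f"{n} qubits: {statevector_bytes(n):.3e} bytes")
```

Under these assumptions, 32 qubits already demand on the order of 64 GiB, 40 qubits roughly 16 TiB, and 64 qubits far more memory than any existing system.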
Need for a new model for design of scalable algorithms
One of the things that is desperately needed is a much more robust model for quantum algorithms which would in fact allow an algorithm to be tested and simulated on a smaller number of qubits with the knowledge and expectation that the algorithm can be reliably scaled up to a significantly higher number of qubits. For example, test and simulate with eight to 32 qubits with an expectation of scaling up to 64, 128, 256, or even 1024 or more qubits without running into scaling issues on real hardware. Today this is not practical or even theoretically feasible with current hardware or algorithm technology.
Such an advance in algorithm scaling would be good, but simulating medium-size quantum computers would be desirable as well since validation of scaling is still needed. Granted, it may take many thousands of classical processors to simulate even a 50 to 55-qubit quantum computer, but without such validation we would be taking a big technological risk, rolling the dice. But, again, the starting point is to enable the design of scalable algorithms, with intelligent analysis tools which can detect and report on scaling issues for an algorithm even before it is run on either a simulated or real machine.
Need to move beyond the lunatic fringe of early adopters
Every technology needs early adopters, but the earliest adopters commonly won’t be representative of the main audience for the technology. Commonly the earliest adopters are really merely the lunatic fringe, the elite individuals and elite organizations which are able to accept and work with a new technology in its crudest and least-developed form, well before it is ready to be used and exploited fully by mere-mortal users.
I discuss this topic in greater depth in this paper:
The main point here is that until quantum computing really is ready to move beyond the lunatic fringe, it will remain a mere laboratory curiosity.
How scalable is your quantum algorithm or application?
I have my doubts about the scalability of most current quantum algorithms and quantum applications. Doubt is putting it charitably. It’s clear to me that virtually none of the current algorithms will scale reasonably from their current state to hundreds or thousands of qubits.
That said, it behooves every designer of quantum algorithms and every developer of quantum applications to ask and answer this critical question about their handiwork:
- How scalable is your quantum algorithm or application?
Then again, as noted elsewhere in this paper, people are simply too busy getting anything to work in the crazy new world of quantum computing to worry too much about the distant future, or even next year or the year after.
If you think your algorithm or application really is scalable, simply try to answer basic questions about shot count and circuit repetitions as your input data scales dramatically in size. Hint: read up on the issues in this paper:
As discussed in that paper, it’s rare that shot count is discussed in any great detail even in academic papers. It may be mentioned in passing or with an informal justification, but usually not in a robust and formal manner. No robust formulas for calculating or estimating shot count, or even informal rules of thumb. And the issue of how shot count would scale as the input grows is not usually discussed either. All of this only confirms that quantum computing is still a laboratory curiosity and not even close to supporting production-scale practical real-world applications.
Do we need a universal quantum computer?
It would be great to have a true universal quantum computer, one which combines all of the features of a classical computer with all of the features of a quantum computer with no latency or delay between classical and quantum operations, but I believe that is more of a long-term aspirational goal than a requirement for the simpler goal of advancing quantum computing beyond being a mere laboratory curiosity.
For now and the indefinite future, it should be sufficient to use a quantum computer as simply a coprocessor for a classical computer.
Quantum computer as a coprocessor
At present, a quantum computer is simply a coprocessor for a classical computer — classical code prepares a quantum circuit, input data, and parameters for a quantum algorithm, hands the prepared circuit off to an attached or remote quantum processor for execution, and then post-processes the results — measured qubits — from the quantum algorithm using more classical code.
This is not an ideal state of affairs, but should be sufficient to achieve quantum advantage to at least some reasonable degree.
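To make the coprocessor pattern concrete, here is a toy sketch in Python: classical code chooses a circuit and shot count, a stand-in "quantum processor" executes the circuit and returns measured bits, and classical code post-processes the counts. The stand-in hardcodes the statevector of a simple Bell-state circuit; none of these names come from any real quantum SDK.

```python
import random
from collections import Counter

# Toy sketch of the coprocessor pattern: classical pre-processing,
# hand-off to a (simulated) quantum processor, classical post-processing.

def run_bell_circuit(shots, seed=7):
    """Stand-in QPU: H on qubit 0, then CNOT, then measure both qubits."""
    # After H and CNOT the state is (|00> + |11>)/sqrt(2), so each
    # measurement yields '00' or '11' with equal probability.
    rng = random.Random(seed)
    amplitudes = {"00": 2 ** -0.5, "11": 2 ** -0.5}
    outcomes = list(amplitudes)
    weights = [abs(a) ** 2 for a in amplitudes.values()]
    return [rng.choices(outcomes, weights)[0] for _ in range(shots)]

# Classical pre-processing: choose the circuit and shot count.
shots = 1000
# Hand off to the (simulated) quantum coprocessor for execution.
results = run_bell_circuit(shots)
# Classical post-processing: tally the measurement counts.
counts = Counter(results)
print(dict(counts))
```

The essential shape is the same on real hardware: the quantum processor only ever returns measured bits, and everything before and after is classical code.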
Tools and support software are essential
Software tools and support software are critical for effective use of quantum computing.
Sometimes tools and support software merely make it easier to accomplish tasks, but sometimes the magnitude of the complexity can become so daunting that only the most elite and motivated technical staff can utilize the underlying quantum technology without the support of higher-level tools.
That said, I don’t believe that lack of tools and support software will be a significant blocking factor for advancing the use of quantum computing, if for no other reason than that we have great experience developing tools and support software and most of that technology is reasonably straightforward and can be conceptualized, planned, and implemented without much of the uncertainty that surrounds the raw quantum technology.
Tools and support software might well make the difference between the ENIAC moment and the FORTRAN moment for quantum computing. The elite and motivated technical staff pursuing the ENIAC moment will likely be of such a high caliber that lack of tools and support software will not be a major impediment — if they need a tool, they’ll create that tool. Whereas the FORTRAN moment is predicated on a more average level of skill, so any needed tools or support software will need to be developed and put into place before the FORTRAN moment can occur.
Need for Principles of Operation documentation and specifications
Documentation for current quantum computers is mediocre and spotty at best. It simply illustrates the degree to which current systems are still stuck in the lab and haven’t undergone a full and complete product development engineering process.
Developers need to have full and complete documentation which provides them with all of the information they need to fully exploit the power of each quantum computer. I have a proposal on the kind of detail needed in that documentation:
In addition, developers should have full and complete technical specifications for each quantum computer they are using so they can be aware of any limitations, performance optimization opportunities, and any nuances which could have some impact on the execution of their algorithms and applications.
Need for detailed personas, use cases, access patterns
The quantum computing sector needs a rich, comprehensive, complete, clear, concise, and detailed elaboration of personas, use cases, and access patterns:
- Personas. The many roles of individuals who will be involved in any way in the development and deployment of quantum computers and quantum applications.
- Use cases. The many specific applications of quantum computing. Specific real-world problems to be solved.
- Access patterns. How specifically quantum computing is used. Including design patterns, application frameworks, variational methods, hybrid quantum/classical applications, in-house hardware, remote and cloud access, simulators, etc.
It’s on my own list to look at.
The lack of such detailed information is a clear sign that quantum computing is not prepared to transition from being a mere laboratory curiosity.
How are companies using quantum computing today?
Quantum computing is definitely in the news a lot these days and a lot of companies are talking a lot about it, but what are companies actually doing with quantum computing — other than the vendors who are developing and providing access to machines in their laboratories? For the most part they are getting ready for quantum computing:
- Learning about the technology. Reading. Training. Attending conferences and seminars.
- Experimenting with the technology. Primitive hardware available today. Limited quantum simulators available today.
- Proofs of concept. At a very small scale.
- Prototypes. At a very small scale. Really just proofs of concept.
- Using quantum simulators. Easier to use and more configurable than real quantum computers. Can run some algorithms which don’t yet work on limited real hardware.
- Evaluation. Assessing whether the technology has value relative to the particular needs of a particular organization. Some of this comes before experimentation — looking at the experiences of others, and the rest comes after experimentation, proofs of concept, and prototypes — evaluating how well the results demonstrate delivering real-world value to the organization.
- Speaking at conferences and seminars. Relating their experimental results to date and elaborating on their expectations for future applications.
To be clear, there are no companies which have developed or deployed production-scale practical applications which deliver substantial real-world value. We’ll definitely hear about it when they do.
To get an idea what some companies are anticipating doing with quantum computing once it has advanced beyond being a mere laboratory curiosity, read this paper:
I’m not at all critical of what companies are doing — it’s all completely consistent with using a laboratory curiosity — but I am critical of anybody who talks as if quantum computing is going to be ready for production-scale practical applications in the relatively near future, like the next couple of years.
As far as any company or organization should be concerned, quantum computing is not ready to deliver substantial real-world value for production-scale practical real-world applications. Except maybe for consultants charging high rates to help companies and organizations understand the new technology, as well as component vendors who manufacture the components needed to build quantum computers in the labs.
Isn’t Monte Carlo simulation good enough for most applications?
A traditional approach to problems with a combinatorial explosion of possible solutions is to take a statistical approach such as Monte Carlo simulation (MCS). The results may not be optimal, but with careful attention to heuristics and a little patience it is not uncommon to get results which are good enough for the immediate need for many applications.
It’s certainly true that quantum computing with its quantum parallelism promises to deliver the optimal result, but we’re not close to being able to achieve that promised potential.
At a minimum, Monte Carlo simulation could be used as a benchmark to compare a quantum solution to — does the quantum algorithm and application deliver:
- The optimal solution, or at least a better solution than the MCS solution.
- An acceptable result in much less time and with fewer resources — quantum advantage.
Unless the quantum solution delivers both a better solution and a much more rapid solution, it may remain true that an MCS solution is… good enough.
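To illustrate the kind of classical baseline a quantum solution must beat, here is a toy Monte Carlo search, with a made-up objective function, that simply samples candidates at random and keeps the best one found:

```python
import random

# A toy Monte Carlo baseline of the kind a quantum solution would have
# to outperform: sample candidate solutions at random, keep the best.
# The objective function here is arbitrary, purely for illustration.
def f(x):
    return x ** 4 - 3 * x ** 3 + 2

rng = random.Random(0)
best_x = None
best_value = float("inf")
for _ in range(100_000):
    x = rng.uniform(-2.0, 4.0)
    value = f(x)
    if value < best_value:
        best_x, best_value = x, value

# The true minimum on this interval is at x = 2.25, f(x) ≈ -6.54;
# plain Monte Carlo sampling lands "good enough" close to it.
print(round(best_x, 2), round(best_value, 2))
```

This is exactly the bar referred to above: a quantum solution that merely matches such a cheaply obtained result has not demonstrated any advantage.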
The discovery or development of a quantum algorithm tends to require such out-of-the-box thinking that the process might uncover or suggest an alternative classical approach, a so-called quantum-inspired algorithm, which performs much better than a more direct classical solution.
Since by definition a quantum-inspired algorithm runs exclusively on a classical computer, it might by itself represent an advance beyond being a mere laboratory curiosity, even if the original quantum algorithm remains in the lab due to lack of sufficient real quantum hardware.
What about D-Wave Systems?
D-Wave Systems has had a succession of commercial quantum computing products. Shouldn’t they count as commercial quantum computers? Possibly. Probably. Maybe.
The key distinction of the D-Wave systems from the entire rest of the quantum computing sector is that D-Wave has a hardwired algorithm — quantum annealing — for optimization problems, but lacks the more general-purpose universal gate-oriented model upon which all other quantum computers are based. So, even if you come to a conclusion about D-Wave for certain optimization problems, it won’t necessarily apply to other quantum computers or to other applications which cannot be readily transformed into the application model supported by D-Wave.
Even though D-Wave does have a commercial offering and the systems are installed at the customer’s site, it is worth noting:
- They have very few commercial customers. Granted, they now have a cloud-based remote access solution which does not require purchase of a complete system, but at least at this stage there is no evidence of any truly widespread usage.
- Even their 2000Q system with 2048 qubits is roughly a 45 x 45 grid, so it still can handle only fairly small problems.
- Even their upcoming Pegasus system with roughly 5000 qubits would support only roughly a 70 by 70 grid, still supporting only fairly small problems.
- The system supports a very constrained optimization algorithm. That may work well for a niche class of problems, but lacks the generality of universal gate-based quantum computers.
Even if you were to conclude that D-Wave systems are no longer mere laboratory curiosities, that wouldn’t justify concluding that universal gate-based quantum computers are no longer mere laboratory curiosities.
My conclusion is that D-Wave is interesting, but its potential and applicability is too limited to consider it as anything other than a mere laboratory curiosity. The company may be shipping systems and supporting cloud-based access, but the customers and users are not yet using the systems in a way that is clearly delivering substantial real-world value for production-scale practical real-world applications.
That conclusion may evolve as D-Wave evolves their systems, but we have to judge them by their current product offerings.
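For a rough sense of the problem model that quantum annealers target, here is a toy QUBO (quadratic unconstrained binary optimization) instance solved with classical simulated annealing; the Q matrix and all parameters are made up for illustration and have nothing to do with D-Wave's actual hardware or API:

```python
import math
import random

# Illustrative only: a tiny QUBO of the kind quantum annealers target,
# minimized here with classical simulated annealing. The Q matrix is
# made up; diagonal entries are linear terms, off-diagonal are couplings.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
    (0, 1): 2.0, (1, 2): 2.0,
}

def energy(bits):
    return sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())

rng = random.Random(3)
bits = [rng.randint(0, 1) for _ in range(3)]
temperature = 2.0
for step in range(2000):
    i = rng.randrange(3)
    candidate = bits[:]
    candidate[i] ^= 1  # flip one bit
    delta = energy(candidate) - energy(bits)
    # accept improvements always, worsenings with a cooling probability
    if delta <= 0 or rng.random() < math.exp(-delta / temperature):
        bits = candidate
    temperature = max(0.01, temperature * 0.995)

print(bits, energy(bits))
```

A quantum annealer attacks the same mathematical form in hardware; the point of the sketch is only that the model is a narrow one, quite unlike the general circuits of gate-based machines.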
Is money a significant issue at all?
Money is always an issue, but is it the gating factor at this stage of quantum computing? No and yes — I don’t think money alone is the reason quantum computing is still a mere laboratory curiosity, but a lot more money focused on the right areas could indeed make a difference, even if not within the next few years.
Potential areas for additional funding:
- Research. Basic research and applied research. And theory as well.
- Product engineering? I don’t think that’s a gating factor right now.
- Marketing? Ditto.
- Training? Ditto.
- Education? Some expansion of the talent pool is needed, especially for basic research, but it seems premature to puff up actual application development, deployment, and operation.
- Venture capital? Seems premature to me. Much more basic research is needed. It’s inappropriate to use venture capital to fund basic research. Venture capital should be reserved for developing products and services using off-the-shelf technology — technology that is no longer a laboratory curiosity, or a laboratory curiosity which is in fact ready for product development without further research.
- Strategic investment and joint ventures. Mostly too early, especially for applications with a short-term focus. Focus on applied research and algorithm research could be of significant value.
Is more venture capital needed?
As just mentioned, no, availability of venture capital is not a gating factor at this stage of development of the quantum computing sector. Much more research is needed. Once research for hardware and algorithms has advanced through some indeterminate number of additional milestones, only then will it be appropriate for venture capital to pay attention to quantum computing.
The mere fact that the technology is not ready for a serious and massive venture capital infusion is a major tell that it is not even close to being ready to advance beyond being a mere laboratory curiosity.
Limited talent pool
Progress at any stage of research, development, and uptake of a new technology can be severely constrained by a limited talent pool of technical staff needed to work on and utilize the new technology, especially for very advanced technologies such as quantum computing which require the expertise of elite scientific and technical disciplines.
Lack of sufficient technical talent may mean that even though people and organizations know how to advance a technology and the theory is just sitting on the shelf, they simply cannot get enough of the right people to do so, leaving the technology stuck in the lab as talent slowly becomes available.
Talent shortages may be for the staff needed:
- In the lab itself. For research.
- In product engineering. To develop products and services.
- In the field. For development and deployment of applications of the technology.
Maybe even in all three areas. Different areas might have needs and shortages at different stages of the development of the technology.
A core challenge is that you can’t simply develop talent at a moment’s notice. It can take years, even many years. So even if application development talent isn’t needed during the research stage, that may in fact be the stage when the development of talent needs to commence.
The major challenge for quantum computing is that given the lack of clarity around how much more research may be required, it’s difficult to assess when it will be appropriate to surge spending on product engineering and then spending on field staff for development and deployment of applications.
It’s clear that a lot more technical staff will be needed, both to continue research, and to prepare for product engineering and application development and later deployment. Without this additional staff it will not be possible to advance quantum computing from being a mere laboratory curiosity.
At present, there is a fairly severe talent shortage for the sector. The talent pool is simply far too limited to satisfy demand — even for major, deep-pocket vendors such as Google.
On the flip side, my great fear is that some significant number of people and organizations will get sucked into prematurely focusing on quantum computing, and then as the years go by without adequate hardware and algorithms, disenchantment will set in and a lot of those organizations may pull back on spending, possibly leaving a lot of new, quantum-savvy technical staff high and dry or out on the street, scrambling to find non-quantum work.
My advice to anyone considering entering the field is to have a backup plan. Students should either treat quantum computing as a minor, or have some other aspect of computing as their minor. Researchers should have some other aspect of physics, chemistry, computer science, or mathematics as at least a side interest. Even if employment is available and hot for the next two years, demand may wane in a few years once it becomes clear that quantum hardware and algorithms which can exploit it are simply not yet available at that time.
It is worth keeping in mind that quantum information science includes other areas besides quantum computing, including quantum communication, quantum networking, quantum sensing, and quantum measurement, so even if quantum computing itself sees delays, researchers in the quantum field will remain in high demand.
In any case, the degree of uncertainty about the timing of when product engineering and field staff should be surged is a robust tell that quantum computing is not close to being ready to advance from being a mere laboratory curiosity.
Repurpose existing technical talent
My advice to enterprises seeking talent in quantum computing for the purpose of investigating applications is to repurpose a small fraction of your existing in-house technical talent on a part-time basis. Unless you’re a very large organization with very deep pockets or a niche tech firm focused on quantum computing, it just won’t make sense to attempt to build a full quantum-only technical team. Instead, select a few of your best technical staff, send them to training, assign them to do some reading, give them time and resources to experiment with and evaluate the technology — on a part-time basis.
That’s the best thing to do right now.
Wait another two years or so before making a deeper commitment.
Oh, and don’t expect great results from such efforts. It’s more in line with the notion of quantum insurance — simply being ready if some major breakthrough does occur in the next two to five years.
The fact that this advice is the most aggressive I can honestly offer at this stage is further evidence that quantum computing really is still at the stage of being a mere laboratory curiosity.
Obsession over the Grover search algorithm even though it offers no exponential advantage
As noted earlier, we don’t have a reasonable and robust collection of basic algorithms or algorithmic building blocks to serve as the foundation for development of quantum applications. One of the early algorithms developed by researchers was the Grover search algorithm. It’s somewhat interesting, but it’s not a great algorithm on which to base the foundation of quantum computing since it provides only a quadratic speedup, not the exponential speedup which is supposed to be the hallmark of quantum computing.
This is fairly clear evidence that quantum computing — premised on exponential speedup — is not close to being ready to advance from being a mere laboratory curiosity. We need example algorithms of significant complexity which indeed have been fully implemented — and have actually achieved exponential advantage.
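To make the quadratic-versus-exponential distinction concrete, here is a minimal Python sketch (my own illustration, not drawn from any quantum toolkit) comparing the expected oracle-query counts for classical unstructured search and for Grover’s algorithm, which needs roughly (π/4)·√N iterations:

```python
import math

def classical_queries(n):
    # Unstructured classical search checks items one by one;
    # on average it examines half of the n candidates.
    return n // 2

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(n) iterations,
    # a quadratic speedup -- not an exponential one.
    return math.floor((math.pi / 4) * math.sqrt(n))

n = 2 ** 40  # roughly a trillion unsorted items
print(classical_queries(n))  # 549755813888 expected classical queries
print(grover_queries(n))     # 823549 Grover iterations
```

A roughly 670,000-fold reduction is nothing to sneeze at, but the advantage grows only as √N: doubling the search space improves the ratio by a factor of just √2, nothing like the exponential advantage that is supposed to be the hallmark of quantum computing.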
Shor’s algorithm is still cited and treated as if it were already implemented, even though it is very impractical
In theory, or at least as claimed, Shor’s algorithm should be able to factor very large semiprime numbers, such as 4096-bit public encryption keys. But its hardware requirements are so extreme, and there are so many questions about its feasibility, that it is not a credible example of practical quantum computing today, in the next few years, in five years, or even longer. Nonetheless, even recent academic papers continue to cite Shor’s algorithm as if it were practical and, in fact, as if it had already been implemented in full, which it has not.
This is further evidence that quantum computing is still not close to being ready to advance from being a mere laboratory curiosity. We need example algorithms of significant complexity which indeed have been fully implemented.
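For readers unfamiliar with why Shor’s algorithm is so celebrated, the sketch below is a purely classical Python illustration (mine, not a quantum implementation) of Shor’s reduction from factoring to order finding. The find_order step is the part a quantum computer would perform with period finding; done classically, as here, it takes time exponential in the bit length of n, which is why only toy numbers like 15 are feasible:

```python
import math
import random

def find_order(a, n):
    # Brute-force the multiplicative order r of a mod n (smallest r
    # with a^r = 1 mod n). This is the step Shor's algorithm performs
    # with quantum period finding; classically it is exponential.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, attempts=50):
    # Classical sketch of Shor's reduction: pick a random a, find its
    # order r, and use gcd(a^(r/2) - 1, n) to extract a factor.
    for _ in range(attempts):
        a = random.randrange(2, n - 1)
        g = math.gcd(a, n)
        if g > 1:
            return g  # lucky draw: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:  # require a^(r/2) != -1 (mod n)
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f
    return None  # all attempts failed (vanishingly unlikely for small n)

print(shor_factor(15))  # prints 3 or 5
```

The reduction itself is textbook Shor; what no one has demonstrated is the quantum period-finding step at the scale of a 2048- or 4096-bit key, which is precisely the gap between the algorithm as cited and the algorithm as implemented.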
Can we expect quantum computing to cure cancer, hunger, poverty, and inequality?
Seriously, so many people are talking as if quantum computing will be able to solve many hard problems which classical computers are unable to address. In fact, the current presumption seems to be that any problem which can’t be solved by a classical computer can be solved using a quantum computer.
We’re told that quantum computers can:
- Discover new drugs.
- Develop new materials.
- Discover more efficient batteries.
- Optimize even the most difficult business problems.
Given that, it doesn’t seem too much of a stretch to extrapolate that quantum computers would enable:
- Discovery of new drugs for treating — and curing — cancer.
- Discovery of new food crops to boost food production even in areas with poor soil.
- Optimization of economic and financial systems to more equitably distribute money and wealth.
So, maybe we can expect quantum computing to cure cancer, hunger, poverty, and inequality!
My point is not to set that expectation, but simply to highlight that many people are setting such high expectations for quantum computing that disappointment and disenchantment will be inevitable as reality sinks in.
Or more to the point, people are already acting as if quantum computing were no longer a mere laboratory curiosity when in fact it very much still is.
So, don’t expect a cure for cancer, hunger, poverty, and inequality any time soon. Not with a mere laboratory curiosity, at least.
Never underestimate the power of human cleverness and intuition
I’m a big fan of being methodical and disciplined, but sometimes plain old simple human cleverness or intuition can achieve results which far surpass the most methodical and disciplined efforts of even diligent professionals.
Alas, you can’t always just snap your fingers and make cleverness or intuition work its magic, but when all hope seems lost or too far beyond our reach despite our best methodical and disciplined efforts, cleverness or intuition can make a surprise appearance and save the day, provided that you have an open mind.
So even if I write that something can’t or is unlikely to be done or done in some time frame, we always need to keep in mind that cleverness or intuition are always an option. We can’t depend on them, but only a fool would count them out.
Would Rip Van Winkle miss much if he slept for the next 2 years? 5 years?
How much might a modern-day Rip Van Winkle miss if he drank too much fermented quantum Kool-Aid and fell asleep for the next two (or three or five) years? I’d say not much.
He’d definitely miss a lot of hype, hyperbole, press releases, frothy news articles, conferences, academic papers, etc., but other than that he probably wouldn’t miss much.
Of course he’d have to take a refresher course to cover the current state of the art for quantum information science, but all technologies and products which came and went or were superseded during his slumber could safely be ignored. The shelf life for a lot of hardware and software advances is less than two years, in my opinion. Any quantum hardware or software that’s more than a year old can generally safely be ignored.
The important point is that Rip wouldn’t have been able to design, develop, and deploy production-scale practical quantum algorithms and applications which deliver substantial real-world value during those two years anyway.
Is the same advice true for a slumber of five years? I think so. Everything I just said is equally true for a five year period. The only difference is that I would expect quite a few significant advances — but still not enough to enable quantum computing to advance beyond being a mere laboratory curiosity.
The downside is that Rip would have missed out on two or five years of decent income while he spun his wheels, bounced around from technology to technology and vendor to vendor, and wrote report after report, fruitlessly seeking that holy grail that should have enabled him to deliver substantial real world value — assuming that he was very adept at making creative excuses to his bosses for why he remained unable to deliver substantial real-world value during those two or five years.
Will two or three years be enough? Very unlikely
I honestly don’t see any promising path to advancing quantum computing beyond being a mere laboratory curiosity over the next two years.
How about three years? A lot can be done in a year, but quantum computing has so far to go that even a full year simply doesn’t move the needle noticeably.
Some say three to five years, but I don’t see it
Some people are aggressive optimists, forecasting that quantum computing will be ready for practical applications in three to five years. I don’t see it.
Sure, the technology will have progressed dramatically by that time frame, but it will still have a very long way to go, even for basic research.
I could be wrong, but this is the way it looks at this stage. Quantum computing will most likely remain a mere laboratory curiosity in the three to five-year time frame.
And to their credit, some of the people citing a three to five year time frame are calling that a minimum or “on the inside”, although they aren’t indicating what the outside might be.
Five years? Outside possibility, but still unlikely
I can imagine some pathways to dramatic advances in quantum computing over the next five years, but once again the technology has a very long way to go, so I see quantum computing still as a laboratory curiosity in five years.
Seven years? Maybe, if researchers finally get their acts together
Maybe another two years will do the trick. It’s very possible, but I still rate it as not a slam dunk.
If somebody showed me a roadmap that gets us to production-scale real-world applications in seven years or five to seven years, I could see it as a possibility, but I’d want to hedge reasonably strongly since there are simply too many variables and to me it just doesn’t feel as if the sector is firing on all cylinders yet.
I seriously want to bet on seven years, or even five to seven years, but it still feels like a roll of the dice.
Ten years? One would hope, but on the verge of being a zombie technology
Ten years feels like a much more credible time frame for finally seeing at least a few production-scale practical real-world applications which actually deliver substantial real-world value, signifying that quantum computing has finally advanced from being a mere laboratory curiosity.
Even then, no guarantee.
If there is no palpable feeling that deployment of a production-scale practical real-world application is imminent, people will really be grumbling that quantum computing has become a zombie technology, like nuclear fusion power, which has boundless potential and endless promises, but never seems to get even remotely close to the finish line.
Fifteen years? Seems like a slam dunk, but you never know
I’d be very surprised if quantum computing wasn’t a mainstream technology in fifteen years, but… you never know.
Twenty years? If not by then, maybe never?
Personally, I’d be expecting a universal quantum computer in twenty years, one which fully merges classical and quantum computing and incorporates photonic computing and room-temperature operation. Actually, I’d hope for and expect a universal quantum computer sooner, like 12–15 years.
In theory, quantum computing should be very mainstream in twenty years. If not, it’s difficult to believe that many people wouldn’t consider it a zombie technology — still alive, but still consuming tons of money for research, and not seeming to come close enough to practical applications. The great promises remain, but they remain unfulfilled as well.
If a time traveler told me (or I pulled a Rip Van Winkle and saw it with my own eyes) that quantum computing still hadn’t fulfilled its promises in twenty years, I’d be disappointed, but not completely shocked. After all, I have memories from when I was a child in the 1960s of hearing of the great promise of nuclear fusion power, which remains unfulfilled to this day.
If we don’t dramatically ramp up basic and theoretical research for quantum computing soon, we run the very real risk of disappointing real-world Rip Van Winkles who expect that if they go to sleep today and wake up in twenty years that quantum computers will be everywhere rather than still stuck in the lab as mere laboratory curiosities.
Prospect of a quantum winter?
The notion of a technology going through a winter, such as an AI winter, could become a very real concern for quantum computing. What defines a winter for a technology?
- A technology winter is the period of disappointment, disillusionment, and loss of momentum which follows a period of intense hype and frenzy of frothy activity as grandiose promises fail to materialize in a fairly prompt manner. The winter is marked by dramatically lower activity, slower progress, and reduced funding for projects. The winter will persist until something changes, typically one or more key technological breakthroughs, emergence of enabling technologies, or a change in mindset which then initiates a renewed technology spring. The winter could last for years or even decades. A technology could go through any number of these cycles of euphoria and despair.
I personally don’t think quantum computing is in imminent danger of entering a quantum winter, but if a lot of the hype doesn’t get transformed to reality in the next couple of years, it is a very real prospect.
Almost by definition, a quantum winter would be a clear confirmation that quantum computing remains a mere laboratory curiosity.
For the sake of discussion, one scenario is that it is possible that we see the hype and frenzy reach a fever pitch over the next two years, but then two to four years from now organizations suddenly find themselves under great pressure as they are unable to deliver promised solutions in the near-term (2–4 years), beginning the descent down the slippery slope to a quantum winter that could last for two to five or even ten years as we (im)patiently await the arrival of the technological advances needed to begin delivering quantum solutions.
I would rate that scenario a 50/50 proposition — a coin flip. Are you feeling lucky?
Mixed messages from the National Science Foundation (NSF)
You don’t have to take my word for the need for a lot more research or for the claim that commercial viability of quantum computing is not imminent. I’ve extracted a long list of key phrases from a description which the National Science Foundation (NSF) recently posted on its website. Their rhetoric is somewhat confusing and conflicting, sending mixed messages, buoyant with hope, promises, and anticipation, but, to their credit, loaded with many caveats which match a lot of my own sentiments.
There are two key points to bear in mind about NSF:
- NSF is focused on research. That’s a very good thing.
- If NSF is involved, you better believe that there is a significant level of research required before commercial viability can be achieved.
The post in question, Bringing you the quantum future — faster, is dated August 7, 2020.
I didn’t bother with most of the hope, hype, and promises, but I’ve highlighted the many caveats which echo a lot of my own sentiments:
- You’re going to live in a quantum future. Sooner than we may once have imagined…
- But just how distant is this future?
- The shift to a quantum world won’t happen overnight.
- Today, we are on the cusp of a similar revolution…
- We can expect innovative applications of quantum principles to emerge at an accelerated pace…
- … over the next few decades.
- Making the quantum future a reality is a goal that researchers around the globe have long been working toward.
- Quantum is still an emerging area of science
- building technologies that harness its potential will require extensive, fundamental research to better understand the principles that drive it.
- The U.S. also needs a significantly larger, quantum-educated science and engineering workforce ready to develop, operate and maintain the quantum technologies of the future.
- For decades, the U.S. National Science Foundation has led strategic investments in foundational research and development that have jumpstarted the quantum revolution.
- NSF is working to address key scientific and technological challenges that must be overcome to unleash its full potential.
- At NSF, we’re working to bring you into that quantum future — faster.
- The quantum future grows nearer
- While still in the early phases of development
- One day, they will do more than simply function as faster and better computers
- QIS researchers have ambitious goals; and at every step of the way, they’re encountering new challenges that require resources and radical thinking to address.
- QIS has the potential to fundamentally revolutionize society, but only after some overarching challenges are addressed.
- Quantum computers are in development, but getting them to the point of commercial viability requires making them more reliable.
- a quantum computer has to be reliable to truly reach its potential.
- It needs quantum networks. And every component that goes into those networks faces scientific questions just as difficult as those that face quantum computers.
- NSF is tackling some of the big questions through its new Quantum Leap Challenge Institutes working to make sure that we’re ready to use quantum computers once they become more viable.
- We are on the cusp of a new quantum revolution
- NSF has been funding quantum research and education since the 1980s
- There are many more obstacles that we know about between us and the quantum future — and even more we don’t know about and will encounter along the way.
- But by identifying these roadblocks and giving researchers the resources they need to remove them, NSF is accelerating the quantum revolution.
So, if I were to judge the status of quantum computing solely by these characterizations by the NSF, I would certainly conclude that quantum computing is definitely neither ready nor soon to be ready for commercial application, and that as of now it remains a mere laboratory curiosity.
Ethical considerations
As noted in the underlying What Makes a Technology a Mere Laboratory Curiosity? paper, there will always be ethical considerations for any technology which is going to be introduced into the real world. Quantum computing is not immune from such considerations. Ethical considerations should of course be weighed before transitioning quantum computing from being a mere laboratory curiosity to being accessible in the real world.
Whether there might be any ethical considerations which constitute gating factors, precluding a release outside the lab into the real world, is unknown at this time. There are none that I personally know of. But if any gating factors do arise, quantum computing will remain stuck in the lab and remain a mere laboratory curiosity until all gating factors have been addressed.
This paper does not explore ethical considerations for quantum computing. That is not to downplay their significance; indeed, the ethical considerations of quantum computing are worthy of a full treatment on their own.
It is worth noting that it is not the raw quantum computations on a quantum computer that might raise ethical considerations, but whether applications which happen to utilize a quantum computer for core computations might raise ethical considerations. For example, much more powerful computations might enable applications with ethical considerations that were never raised for classical computing, given its much more limited processing power.
Regulatory requirements
Generally, any technology advancing out of the laboratory will need to be evaluated with respect to government regulatory requirements, if any apply. The point here is that it may not be possible for a technology to advance from being a mere laboratory curiosity until it successfully complies with all relevant regulations and gains any required regulatory approval.
But I am aware of no regulatory requirements that would apply to quantum computing, other than the same regulations concerning electrical and electronic devices which apply to classical computers.
Whether there might be any export control regulations now or in the future is an open question.
Personally, I really do see quantum computing as still being a mere laboratory curiosity.
Sometimes it almost seems a little more than that, but not by much. There are still far too many important pieces of the puzzle missing or deficient in some significant way.
When will quantum computing finally advance beyond mere laboratory curiosity?
- This year? No way.
- Next year? Ditto.
- 2–3 years? Very unlikely.
- 5 years? Outside possibility, but high risk.
- 7 years? Maybe. If researchers finally get their acts together.
- 10 years? One would hope! On the verge of being a zombie technology.
- 20 years? If not by then, maybe never.
But predicting the future is always a risky business. There’s no telling when dramatic breakthroughs might occur. And there’s no telling when presumably simple matters somehow turn out not to be so simple. Promising paths can turn into dead ends, and simple human cleverness can sometimes enable leaps over the most insurmountable obstacles.
In any case, if you go away or sleep for two to five years you may miss a lot of noise, hype, and drama, but not miss the graduation of quantum computing from being a mere laboratory curiosity.
Milestones from today to post-laboratory curiosity
- Waiting for the next dramatic breakthrough. Will it be sufficient to finally break out from being a mere laboratory curiosity? How many more dramatic breakthroughs will be required?
- Watching the endless stream of incremental progress. Like watching grass grow or paint dry. Rarely seems to amount to a significant breakthrough.
- Watching the flow of money, resources, and attention to basic research.
- Watching the progress on the algorithm front.
- Waiting for the ENIAC moment. A first substantial real app — with quantum advantage.
- Waiting for the FORTRAN moment. Making it easy to develop real apps — with quantum advantage.
- Waiting for evidence of quantum advantage.
- Waiting for evidence of true quantum supremacy for a real-world application with a non-trivial amount of data.
- Wondering which application and application category will be the first to finally make it clear that quantum computing is no longer a mere laboratory curiosity.
- Growing tired of people insisting that quantum computing is no longer a mere laboratory curiosity when it clearly is.
For more of my writing: List of My Papers on Quantum Computing.