When Will Quantum Computing Be Ready to Move Beyond the Lunatic Fringe?
Elite organizations and elite developers will do some truly amazing things with quantum computers over the next couple of years, but the real question is when quantum computing will be ready for use and mastery by mere-normal, mainstream customers and developers. It may not be possible to even roughly predict when quantum computing will move beyond the realm of the lunatic fringe and finally be ready for mainstream, primetime usage by mere mortals, but this informal paper will explore the factors, obstacles, and milestones likely to be encountered along the way — not in full, gory detail, but from a high-level perspective.
The lunatic fringe
The term lunatic fringe refers to those customers and individuals who are able to do amazing things with even the most primitive and difficult-to-work-with technologies, with minimal support from vendors, stuff that’s well beyond the full comprehension of mere mortals, the mainstream staff at mainstream organizations. The topic of the lunatic fringe in the context of technology is explored more deeply in the What Is the Lunatic Fringe (of Technology)? paper.
The ENIAC and FORTRAN moments as key milestones
In previous papers I have explored two key milestones for adoption of quantum computing — the ENIAC moment and the FORTRAN moment, when the hardware and at least one significant application come together, followed by the moment when a high-level programming model and language become available. The lunatic fringe remains involved before and after each of those two key milestones, but at each milestone an additional level of mere-mortal staff and organizations can begin to work more facilely with the technology as it matures and becomes more usable.
It would be good to have a firm grasp of the conceptual content of the two papers I have written on those topics before continuing:
- When Will Quantum Computing Have Its ENIAC Moment?
- When Will Quantum Computing Have Its FORTRAN Moment?
The danger of the Adatran trap
A significant risk, even with all sorts of wonderful breakthroughs and great technical progress, is the Adatran trap, where improperly trained and improperly supervised developers misguidedly attempt to use (misuse) the new technology as if it were the old technology (classical algorithms and classical programming models and languages) with which they were much more familiar. Adatran originally referred specifically to attempts to use the more modern Ada programming language by individuals skilled with the older, less-sophisticated FORTRAN programming language. These risks, and strategies for mitigating them, are explored in a third paper.
Stages of the journey to escape the dependence on the lunatic fringe
With the backdrop of those three papers, the stages of progress for quantum computing are likely to be:
- Long before the ENIAC moment. Only the truly hard-core lunatic fringe are able to master the technology, especially with very limited hardware. Others, mere mortals, can experiment (i.e., play) with the technology, but not in a meaningfully productive manner.
- Shortly before the ENIAC moment. Hardware is getting more capable, but algorithms are still really hard. Still primarily the realm of the hard-core lunatic fringe, but now they can assist the non-fringe to at least explore the technology in an almost-meaningful manner. Aside from much more modest, “toy” applications, any substantial application is still no more than a pipe-dream promise, with some increasingly tantalizing hints and fragments of actual implementation. Fulfilling the promise must wait for the ENIAC moment itself.
- The ENIAC moment itself. Finally, a successful implementation of a real, substantial application that the non-fringe can see and relate to. The hardware is finally starting to look halfway decent, but algorithms are still very hard, especially when using a lot more qubits. The lunatic fringe will be required to master both the technology and the application to achieve this moment. Quantum advantage is finally achieved, which is a real victory, but still a Pyrrhic victory since the level of dependence on the lunatic fringe is too great to be either sustainable or achievable for most mainstream organizations.
- Shortly after the ENIAC moment. Still the realm of the lunatic fringe, but now they are beginning to work with more mainstream developers and organizations to develop applications that real people can relate to. Tools are getting a bit more capable, but complex algorithms are still really hard and still mostly the realm of the lunatic fringe.
- Moderately after the ENIAC moment. Real applications are becoming more common, but still at an unacceptably high cost. Organizational management begins to realize that quantum is here for real, but is not so sure how to deal with it due to this dependence on the lunatic fringe — applications are promised and are in fact delivered, but on a frustratingly long timeline.
- Long after the ENIAC moment. Still the realm of the lunatic fringe. More developers and managers are beginning to use the technology, but the initial honeymoon period is over — developers and managers are really beginning to complain about how long it takes and how much it costs to develop nontrivial applications. Early tools and even languages and rudimentary programming models are accumulating, but with no clear winner and rather mediocre productivity. Algorithms are still really hard, but there are enough off-the-shelf existing algorithms that a lot of projects are simply reusing the work of past projects. In short, a very mixed bag, with rising disenchantment despite the great progress — reality is sinking in as the hype dissipates.
- Shortly before the FORTRAN moment. The word is out, people know it’s coming, and they are simultaneously thrilled, anxious, and frustrated that the long-rumored FORTRAN moment is not yet here. The lunatic fringe are still in charge, running the game — and people are getting tired of it. The existing rudimentary tools, languages, and programming models are just not cutting it, but people have no choice. Staff and management have increasingly long lists of real applications for quantum computers, but the lunatic fringe is just stretched too thin to satisfy the demand — and this is a misuse of the lunatic fringe anyway since their purpose is to explore uncharted territory, not to hold the hands of the masses. Most applications will simply be reuse and minor adaptation of existing algorithms and applications — so-called copy-and-paste programming — while the lunatic fringe will still be required to go much further afield.
- Finally the FORTRAN moment itself. The long-promised high-level programming language and programming model are here, but… there’s a learning curve and the systems still have plenty of kinks to be worked out, so the lunatic fringe is still required and, unfortunately, in charge. There’s a real risk that many initial efforts may fall into the Adatran trap — using the new programming model and language in a superficial sense, but still designing algorithms in the classical mindset rather than the true quantum mindset, thus missing out on the full potential of quantum computing. On the other hand, with the lunatic fringe still heavily involved, they may be successful at guiding developers to the new quantum mindset. Hybrid applications — part classical and part quantum — will be common, but developers will struggle with how to balance the two realms — only the lunatic fringe will manage to get it right.
- Shortly after the FORTRAN moment. Non-fringe staff is incrementally crawling up the learning curve and actually starting to get real work done without the constant assistance and supervision of the lunatic fringe. A small minority of mere mortals will begin embarking on exotic new projects without waiting for the lunatic fringe to blaze each and every single new trail, but the technology will still be too new, untested, unproven, and frequently not quite working as promised for more than a small minority to be doing well. There may be some backlash as people wonder why the long-promised technology is not fully delivering on its many promises. The risk of the Adatran trap is still very present, but kept in check by the intense scrutiny of the lunatic fringe, and the fact that in these early stages only the more sophisticated developers will be working on quantum, even with a great new programming model and language. Developers will continue to struggle with balancing classical and quantum approaches in hybrid applications, and generally only the lunatic fringe will get the balance right. Still way too soon to declare mission accomplished.
- Moderately after the FORTRAN moment. The initial kinks have been worked out, the word is getting out and spreading wide, and usage is beginning to spread like wildfire as leading edge adopters report success after success after success. Management is finally able to kick off application projects without the need for a single lunatic fringe person. The lunatic fringe may still get called in on occasion for some episodic consulting on thorny problems, but that is increasingly the exception rather than the norm. Finally, more advanced developers and customers — but still far short of the level of the lunatic fringe — will be routinely blazing new trails without the need for the lunatic fringe. People are starting to talk about how quantum computing is on the verge of becoming a mature technology. The risk of the Adatran trap rises dramatically as less experienced developers try their hand at this new quantum thing — without sufficient training, direction, or supervision, but with so many projects going on, the lunatic fringe will be unable to scrutinize and police even a small fraction of them. A lot of projects will fail due to unimpressive gains caused by the Adatran trap, but enough, a critical mass, will succeed to keep the momentum growing. Developers will struggle less with balancing classical and quantum approaches in hybrid applications, but it will remain an ongoing challenge — the need for the guidance of the lunatic fringe will incrementally decline as successful projects accumulate.
- Long after the FORTRAN moment. Adoption is now truly widespread. The technology has proven itself. The lunatic fringe is now a distant and fading memory — they’ve moved on to the next big thing. Quantum computing applications are now fully mainstream and fully ready for primetime deployment. People now accept that quantum computing is a mature technology. The risk of the Adatran trap will have peaked and then declined as the majority of organizations and developers finally get the message about leaving classical algorithms behind and focusing on new quantum approaches to algorithms. The risk of new applications falling into the Adatran trap will be minimal, but there may be plenty of existing quantum applications still caught in the trap — not fully exploiting all of the best features of quantum computing, but embedded in applications which are too complicated for any mere mortal to untangle and redesign, with no lunatic fringe available, or economical, to do the necessary untangling. Hybrid applications — part classical, part quantum — will abound, and developers will finally be doing a much better job of balancing designs to capture the best of both worlds. So, finally, it is now safe to declare mission accomplished.
Key technical hurdles
Embedded in all of that are several key technical issues:
- Basic hardware. Qubits that fully function and have the required coherence and connectivity. Including a rich and complete universal gate set.
- Scalable hardware. Enough qubits, enough connectivity, large enough quantum programs.
- Sufficiently rich programming model.
- Sufficiently rich library of algorithmic building blocks.
- Sufficiently rich library of application building blocks and subsystems.
- Reasonably high-level programming languages that simultaneously allow the programmer to speak in terms that make sense for conceptualizing applications and can be readily and automatically transformed into optimal use of the raw quantum hardware.
- The combination of hardware and algorithms finally results in an actual quantum advantage over classical computing and eventually actual quantum supremacy for a fairly wide range of applications.
- Sufficient and readily accessible specifications, documentation, and training for use of the hardware, systems, tools, and application modeling — by mainstream staff, not just the lunatic fringe.
- Avoiding the Adatran trap — the risk that, even with an advanced programming model and language (the FORTRAN moment), developers may continue to think more in classical terms than in quantum terms as they design algorithms. A dramatic change in mindset is needed, but that’s a very hard thing to do and can take a lot of time, energy, resources, and commitment by both technical staff and management.
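To make the “universal gate set” and “quantum mindset” hurdles a bit more concrete, here is a minimal, purely illustrative sketch — plain Python, not any vendor’s SDK, with all function names my own invention — of how gate-based quantum computation composes a couple of standard gates (Hadamard, then CNOT) to produce an entangled Bell state, something with no classical analog. Real hardware must implement such gates with high fidelity on physical qubits:

```python
import math

# State vector for two qubits, amplitudes ordered |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]

def apply_hadamard_q0(s):
    """Apply a Hadamard gate to qubit 0 (the left bit of the label)."""
    h = 1 / math.sqrt(2)
    # Qubit 0 pairs up amplitudes (|00>,|10>) and (|01>,|11>).
    return [h * (s[0] + s[2]),
            h * (s[1] + s[3]),
            h * (s[0] - s[2]),
            h * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control: flip qubit 1 when qubit 0 is 1."""
    return [s[0], s[1], s[3], s[2]]

state = apply_cnot(apply_hadamard_q0(state))
probs = [abs(a) ** 2 for a in state]
# Bell state: 50% |00>, 50% |11>, nothing else -- the qubits are entangled.
print({label: round(p, 3) for label, p in zip(["00", "01", "10", "11"], probs)})
```

The point of the sketch is the mindset gap: the classical reflex is to think of two bits holding one value, while the quantum design works with a superposition of all four basis states at once, which is exactly what the Adatran trap causes developers to miss.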
For some deeper exploration of hardware and algorithm challenges, read my The Greatest Challenges for Quantum Computing Are Hardware and Algorithms paper.
Breakthroughs, winters, and setbacks
Progress in any technological field rarely proceeds in a strictly linear trajectory. Sure, sometimes progress is in fact fairly linear, but all too often linear progress is interrupted by stunning breakthroughs that accelerate progress, depressingly slow periods (called winters) when it feels as if little if any progress is being made, and even occasional setbacks where efforts must be abandoned and new paths found. There’s certainly no predictability as to how many such disruptions may occur before a desirable end-state is finally achieved.
Quantum computing has certainly been on a roll over the past few years, but it simply isn’t possible to predict whether the recent pace will continue, accelerate, or decelerate. Or how many times or in what sequence it will do so.
- Breakthroughs. Unexpected sudden discoveries, achievements, or changes which rapidly accelerate progress.
- Winters. Extended periods of time (many months or even years) when the pace of progress slows to a crawl or even stops. Sometimes only a breakthrough can restore the pace of progress.
- Setbacks. Insurmountable obstacles or technical dead ends are reached. The current approach must be abandoned, and effort must either be expended to find a new approach or to revise the current one, or be redirected to known alternative approaches.
How many breakthroughs, winters, and setbacks will be experienced before quantum computing finally breaks free from the clutches of the lunatic fringe is anybody’s guess.
D-Wave Systems — a specialized quantum computer
This paper is concerned with general purpose quantum computers which support a universal gate set capable of arbitrary computations, but there may be any number of specialized or single-function quantum computers which are developed as well, especially until general purpose quantum computers become powerful and usable enough for many applications.
One such special-purpose quantum computer is from D-Wave Systems. It has 2,048 qubits, which is quite impressive (with a 5,000-qubit machine on the way), but it is functionally limited to quantum annealing using Ising and QUBO (Quadratic Unconstrained Binary Optimization) models for discrete optimization. It does address an interesting application, but it doesn’t have even the very limited programming features of ENIAC, let alone a universal gate set, common to virtually all other quantum computers.
This is indeed a powerful and useful machine, but personally I consider it to be more of a quantum coprocessor. Granted, it is a very sophisticated, powerful, and very useful coprocessor, but a coprocessor nonetheless rather than a full-fledged, general purpose computer.
Also, in my view, it has more in common with analog computers than with digital computers.
And, worse, it still requires the close attention of the lunatic fringe to use it effectively.
Still, as more organizations get more experience with the specific application niche addressed by this and similar special-purpose quantum computers, and more users can simply reuse existing applications with only minimal effort to adapt to their specific needs, some day it may be possible for mere mortals, normal technical staff, without deep quantum background, to utilize these machines without such a great need to be supported by the lunatic fringe. But, that “some day” is not today, tomorrow, next week, next month, or next year. Still, it could happen within a couple of years. And an approximation of it even a little sooner, for some even narrower niches of applications.
In short, these machines don’t help to address the larger issue of the dependence on the lunatic fringe for general-purpose quantum computing, but they do constitute a distinct and somewhat separate branch off of the main body of quantum computing.
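To give non-fringe readers a flavor of what programming in the QUBO model means, here is a tiny illustrative sketch — plain Python, brute force on a classical CPU, not D-Wave’s actual API, with the toy matrix and function names entirely my own — of a QUBO problem: minimize a quadratic function of binary variables. An annealer searches for the same minimizing bit assignment, just via quantum annealing rather than enumeration:

```python
from itertools import product

# A toy QUBO: minimize sum over i<=j of Q[i][j] * x[i] * x[j], x[i] in {0,1}.
# This Q encodes "pick exactly one of three options" as (x0 + x1 + x2 - 1)^2
# with the constant dropped: -1 on the diagonal, +2 off the diagonal.
Q = [[-1, 2, 2],
     [0, -1, 2],
     [0, 0, -1]]

def qubo_energy(x, Q):
    """Energy of a binary assignment x under upper-triangular matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

# Brute force is feasible only for small n; an annealer instead samples
# low-energy states of the equivalent Ising model on physical hardware.
best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # a single-one assignment, energy -1
```

The programming burden on the lunatic fringe is evident even in this toy: the whole art is in constructing a Q matrix whose minimum-energy states are exactly the answers to your real-world problem.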
Resource requirements
Resource requirements will also proceed in stages:
- Early stages. Minimal requirements. Maybe only a single person part-time. The hardware vendors are bearing the full expense of the hardware for the early quantum computers, with free or low cost remote access.
- Early expansion. One or more full-time staff. Some training and conference expenses. Some equipment, maybe, or just personal computers and Internet access to quantum computing as a service in the cloud.
- Real expansion. More full-time staff. Management. Some real equipment. Possibly even an actual quantum computer, but maybe simply a higher volume of remote access to a shared quantum computer.
- Further expansion. More projects. More applications. More full-time staff. More managers. More support staff. Quality assurance. Performance testing. Documentation. Training development. Seminars to spread the word. Higher cost for access to actual quantum computers for extended amounts of time. Possibly writing, publishing, and presenting technical papers and presentations at industry conferences and publications.
- More permanent service organization. A real, substantial budget. A real, substantial headcount. Although, application costs may shift out to business units who are sponsoring the applications, including their respective quantum computing costs. Some early development costs can ramp down as significant knowledge becomes off-the-shelf rather than work in progress, but there will always be new developments and new technologies coming along at an unpredictable pace.
How long will it take?
It’s very hard to tell when the early dependence on the lunatic fringe will switch over to reliance on mainstream staff. Progress could accelerate consistently from where we are today, or we could see one or more “quantum winters” — long stretches of time with only minimal technical progress — before mainstream, primetime adoption is achieved.
- Within one year? No, get real.
- Within two years? Remote possibility, but rather unlikely.
- Two to three years? It could realistically happen, but still rather unlikely.
- Three to four years? A decent likelihood, reasonably likely, but certainly not a slam dunk.
- Four to five years? Higher probability, and I’d almost bet on it, but nothing is certain in this business.
- Five to seven years? I’d hope it would have happened by then, and I’d be rather disappointed if it hasn’t, but things can happen or not happen that scramble even the best-laid plans.
- Seven to ten years? I’d be rather surprised and disappointed if it doesn’t happen by then.
- Ten to fifteen years? No good excuse for it to take this long, but who knows.
I’ll try to simplify and suggest that two to three years after the FORTRAN moment is probably the best bet at this stage…
Oops… in my FORTRAN moment paper I settled on a timeframe of three to four years for the FORTRAN moment, so add two to three years on to that and you get five to seven years. Sure, it would be disappointing if it does indeed take that long, but unless people start moving more aggressively and some real breakthroughs occur, that may be the best assumption to make at this stage. But I’ll be optimistic and call it three to seven years.
Three to seven years. That’s my best estimate at this stage for how long it will take to get to the point where the lunatic fringe is no longer the main gating factor in real progress for quantum computing.
And if you want to be conservative, stick with five to seven years. And expect to recalibrate that estimate every one to two years between now and then.
And that will be a spectrum — still fairly heavy demand for the lunatic fringe at the front end of that range, with a minority of projects succeeding without the lunatic fringe, a 50/50 balance of lunatic fringe and mainstream staff in the middle, and finally primarily mainstream staff and a minority of lunatic fringe towards the end of that range.
Actually, that would be the stage where the lunatic fringe are no longer needed for most projects, so the stage when some to a significant number (but not necessarily most) of projects can proceed without the need for the lunatic fringe might be closer to three to five years. Or four to six years if you want to be more conservative.
Take your pick as to what mix you personally want to use as the threshold for demarcating a successful transition away from the lunatic fringe.
Yeah, sure, we certainly would really like to do better than that, but quantum computing is actually really, really hard, and algorithms that deeply exploit quantum parallelism are really, really, REALLY hard, so we should get used to settling in for a protracted reliance on the lunatic fringe for at least another three to four solid years, and even that is being really optimistic.
There’s no conclusion at this stage, only the beginnings of a long, hard slog into the future.
Progress depends on a lot of advances yet to occur:
- Quantum hardware.
- Algorithmic building blocks, metaphors, design patterns, and libraries.
- High-level programming model.
- High-level language.
- More than a few substantial handcrafted applications that blaze the trails and guide the projects to follow.
The lunatic fringe certainly has their work cut out for them.
Once we see the lion’s share of that work achieved, then we can finally break out and finally move beyond the limited capacity of the lunatic fringe.