# My Journey into Quantum Computing

How did I first become aware of quantum computing? How did I get started? How did I get to where I am now? What was the arc of my trajectory? This informal paper chronicles the major milestones — and obstacles — along my journey into the world of quantum computing, quantum mechanics, and quantum information science in general. Maybe something in my own trajectory might benefit others as they consider their own entry and path into this new field of study and sector of technology and commerce.

There is a lot of material here. After reading the first few sections for background, you might want to skip ahead to the central sections:

- My quantum journey timeline
- My watershed moment — November 2017
- Reflections from my journey

Topics to be covered in this paper:

- The impetus for this account of my quantum journey
- The goal of my journey into quantum
- Focus on understanding and truth, not popularity or getting a job
- Non-goals of this paper
- Is quantum computing a field or a sector?
- My pre-quantum history and career as a software developer
- My work and career status: I’m semi-retired
- I’m a technologist
- No interest in specialization
- Jack of all trades
- My motivation and interest
- Addressing the hype — what is real and what is not
- I’m an idea guy at heart
- Hands-on was quite appealing when I was younger
- My interest lies in more than what I can accomplish by myself with my own hands
- My mission
- Grandiose claims
- I have no hands-on involvement in production or application of quantum computing
- Career in quantum computing?
- My quantum journey timeline
- My watershed moment — November 2017
- Reflections from my journey
- No formal education in quantum computing or quantum mechanics
- Thinking of getting a college degree focused on quantum?
- Is quantum computing real?
- Is practical quantum computing imminent?
- Is the hype warranted?
- Everything feels… premature
- D-Wave is somewhat inscrutable
- Not really ready for Quantum Ready
- Ambiguity of quantum ready
- Quantum volume has no real technical value
- Quantum ready and quantum volume are basically marketing scams
- Quantum Summer of Love — but might a Quantum Winter be coming?
- All it takes is a single massive breakthrough or two or three to break out and we’re off to the races
- But… my journey is far from complete
- Ask me again in two years
- Much more basic research is needed
- General reflections
- Many exciting advances have been going on in classical computing
- The awesome and unparalleled intellectual power of classical computers
- Seminal role of Feynman
- Quantum computer as a coprocessor
- Must map problems to solutions using the raw physics of quantum mechanics
- Lack of confidence that many current quantum algorithms will scale
- Tedium of a low-level programming model
- Need a high-level programming model
- Potential for quantum-inspired algorithms and quantum-inspired computing
- Odd, cryptic, and poorly-defined terminology and heavy reliance on Greek symbols
- Low-level programming model
- Need a high-level programming model
- Lack of a quantum simulator in the early years
- Probabilistic results, statistical aggregation, and approximate determinism
- Shot count and circuit repetitions
- Exponential speedup — isn’t free, easy, and automatic
- Much more algorithmic building blocks, design patterns, and application frameworks are needed
- Need credible, real-world example algorithms and applications
- Twin qubit challenges: isolation and maintaining entanglement
- We know too little about the granularity of phase
- RISC vs. CISC issues for firmware
- Emphasis on Shor’s algorithm seems unwarranted
- Emphasis on Grover’s algorithm seems unwarranted
- DiVincenzo’s five criteria (requirements) for a quantum computer
- Boundless cleverness of algorithm designers
- My sources of information
- Wikipedia articles related to quantum computing
- IBM Qiskit
- IBM Qiskit Text Book
- MIT online quantum courses
- MIT xPro Quantum Computing Fundamentals — beyond my budget
- My budget in general: $0
- Books
- Blogs
- Medium
- Videos
- Lecture notes
- My writing on quantum computing
- What is Quantum Computing?
- My social media presence
- In hindsight, what path would I take if I was starting from scratch in 2020?
- What path should a newcomer take in 2020?
- Maybe I should take a two to five-year Rip van Winkle-like slumber
- Monitoring for advances and breakthroughs
- My endgame?
- What else?
- Conclusions
- What’s next?

# The impetus for this account of my quantum journey

The impetus for writing this account is that I was reading someone’s post which mentioned how they got started in quantum computing — “*I caught the quantum computing bug in the spring of 2018*” — and I realized that was roughly when I got started, but not quite, so I went back to check my old notes: it was the fall of 2017 for me. I decided that I should collect all of these personal milestones in one place, both for my own benefit — to review my own progress — and possibly to benefit others as they consider their own entry and path into this new field of study and sector of technology and commerce.

If nothing else, maybe somebody can avoid mistakes which I’ve made!

# The goal of my journey into quantum

To put it as simply as possible, the *goal of my journey* into quantum has been:

**To understand quantum computing as deeply as possible — to understand *how real it is*, its capabilities, limitations, and issues.**

# Focus on understanding and truth, not popularity or getting a job

I haven’t been trying to win any awards for popularity or win an offer for a dream job. My focus is on ferreting out the truth and understanding this emerging sector of technology and commerce — let the chips fall where they may. That may mean stepping on some toes and offending some who seek to paint a rosier picture of the current state of quantum computing than I think is warranted, but so be it.

# Non-goals of this paper

My intended focus for this paper is how I got started and my early progress, so I have:

- No intention to detail the full history of quantum computing itself. See the Wikipedia *Timeline of quantum computing* article if that’s what you’re looking for.
- No intention to detail the more recent developments of my progress. The intention is to focus more on how I got started and really got going. Sure, I’ll cover some of my more recent efforts, but not in great depth here.
- No intention to focus on where I am now or where I might go next — simply how I got started and my early progress. Sure, I’ll cover some of where I am now and what might be next, but not in great depth here.

# Is quantum computing a field or a sector?

Yes, quantum computing is **both** a *field* and a *sector*:

- Quantum computing is a *field* or *field of study* from a science, research, or academic perspective.
- Quantum computing is a *sector* from a business, commercial, industrial, or technology perspective.

Personally, I will usually refer to quantum computing as a *sector* since my main interest is practical applications which deliver substantial real-world value. But I am also very interested in research, where it makes more sense to refer to it as a *field* or *field of study*.

# My pre-quantum history and career as a software developer

I’m not going to go into all of the gory details of how I spent my entire life before I shifted my focus to quantum, but just some highlights.

If anyone is curious about what kinds of work I’ve done in my career as a software developer, you can look at my LinkedIn profile:

Some highlights from school:

- Got heavily into (classical) computers in the 10th grade in high school through the computer club run by a math teacher — 1970. (Ah, that’s actually 50 years ago.)
- I attended two summer computer programs while in high school — 1970 and 1971 — at a time when there was no formal computer education in high school. One was a five-week residential program at a college, where I learned assembly language programming for the PDP-10, which was a really big deal in those days.
- Actually got paid to do data entry and even some application programming in the summer after graduating from high school and before starting college — 1972.
- Worked in the computer center at college all four years as a systems programmer focused on programming languages and compilers — 1972–1976.
- Worked as an applications programmer at the computer center of a community college for a summer and a semester when I dropped out of college for a semester.
- Continued work as a systems programmer at the college computer center for a semester even though I had dropped out for a semester — as staff rather than as a student.
- Earned a master’s degree in computer science simultaneously with my bachelor of science degree.

Some highlights as a professional software developer:

- Worked at a couple of computer companies — 1976–1981.
- Worked at a bunch of tech startups, including one developing both hardware and applications and system software.
- Focus on development of programming languages, compilers, and development tools.
- Worked closely with hardware engineers for graphics hardware and firmware and instruction set design.
- Worked on graphics subsystems and graphics engines.
- Worked on development of electronic CAD and CAE systems.
- Worked on database systems and database engines.
- Worked on development of a video-based interactive computer-aided instruction system.
- Worked on search engines.
- I’ve always been heavily into sophisticated data structures. Get the data structures right and the code is a lot simpler.
- And data modeling in more recent years as I got deeper into database systems.
- Did a lot of technical writing, including one online book.

Some areas that I didn’t focus on:

- Applications in general.
- User interface and user experience.
- Data processing applications. (Actually, I did a little at that summer work at the community college.)
- Scientific computing. Physics. Chemistry. Biology. Astronomy.
- Engineering computing.
- Material research.
- Energy research.
- Drug discovery.
- Optimization.

Granted, a number of those areas outside of my historical focus are key potential applications for quantum computing, but I’m more of a technologist interested in hardware, system software, tools, and libraries and frameworks than an application developer.

# My work and career status: I’m semi-retired

I am late in my technical career, still a little too soon to officially and formally retire, but no longer interested in any hands-on work. I refer to myself as *semi-retired*.

I remain passionately interested in technology, especially advanced, bleeding-edge technology such as quantum computing and advanced artificial general intelligence, but I’m just not interested in actually using and applying it on a hands-on basis.

I’d consider a consulting engagement if it was a senior, high-level, advisory role — and it really interested me. I’m not at all interested in doing any work which doesn’t interest me just for the money.

# I’m a technologist

I consider myself a *technologist*, meaning that there are four things that interest me about any particular technology or technology in general:

- **Capabilities.** What can the technology do? Functions and features. What problems can it solve?
- **Limitations.** What *can’t* the technology do? What are the technical limits, boundaries, and performance characteristics?
- **Issues.** What impediments, obstacles, or missed opportunities prevent the technology from achieving its full potential or prevent users from fully exploiting it? Understanding the extent of hype.
- **Communicating.** Documenting all of the above. Helping people understand all of the above. But my goal is to share and communicate, not persuade per se.

I wrote an entire informal paper on those interests for quantum computing in particular:

# No interest in specialization

When I was younger I was more specialized — focused on programming languages, compilers, and developer tools — but gradually I was exposed to other areas. These days I have little interest in specialization per se. I’m interested in whatever interests me at the moment.

I’m far more interested in focusing broadly than focusing narrowly.

# Jack of all trades

My traditional focus has been as a software developer, but gradually I’ve been exposed to a variety of other roles. In fact, I no longer have any interest in actually writing code — not even the slightest interest. Indeed, I have no interest whatsoever in any *hands-on* roles.

Additional roles I’ve been involved in to varying degrees:

- Product planning.
- Solutions architect.
- Consultant.
- Technical writing.
- Technology strategy.
- Product architecture.
- Software architecture.
- Hardware/software architecture.
- Technical team management.
- Quality assurance.
- Performance modeling, measurement, and characterization.
- Product support.
- Pre-sales and post-sales support.
- Marketing.
- Sales.

None of these roles interests me as an exclusive focus.

# My motivation and interest

I imagine that most people entering the quantum computing field or sector are doing so because either:

- They need the power of a quantum computer to solve real-world problems.
- They seek employment or business opportunities in a hot new field.

I personally have no application for a quantum computer. I’m a technologist, not an application developer. I’ve worked for a couple of computer companies, focused on system software and working closely with hardware engineers, but that’s not my interest these days either.

I have two motivations:

- Understanding the technology, as a technologist — capabilities, limitations, and issues.
- Addressing excessive hype. Helping people understand what is real and what is not.

My motivation has also been to understand *quantum mechanics* — the physics which enables quantum computing — so that I can more fully evaluate what’s feasible. This should help with both motivations: understanding what features quantum mechanics enables, and understanding the limits of what quantum mechanics can enable, as well as what might be practical in the near term, and what might not be practical even in the long term.

# Addressing the hype — what is real and what is not

A major part of my motivation is that I hear a lot of hype that just doesn’t sit right with me. To my mind, public communication about any technology should fairly closely align with the actual capabilities and limitations of the technology itself.

Sure, quantum computing has had a lot of hype for many years — decades now — but the hype has escalated dramatically and is now asserting that quantum is “ready” and “here now” or at least getting very close, when that doesn’t seem to be the case.

So, my motivation has been to get a handle on whether any of that hype is really true — to dig down and get at the facts of what it can and can’t do, today and for the next year or two. Longer than that just doesn’t matter as much right now. Longer-term futures are of interest too, but the hype shouldn’t speak as if the distant future were here now.

# I’m an idea guy at heart

The most dispiriting thing I’ve ever heard during my career is that “*Ideas are a dime a dozen — they’re not worth anything.*” Yeah, maybe, but still… *ideas are all that interest me*. Sure, you have to *implement* ideas to get paid, but if foregoing pay is the price for being able to focus on ideas to the exclusion of implementing them, then that’s a price that I’d like to be able to afford. As it is, I’m close enough to real, formal, official retirement — less than four years away, when I hit age 70 and my Social Security retirement benefits reach their limit — that I can squeak by living off savings and private retirement funds, provided that I maintain a fairly tight budget, which I have been doing for five years now.

Put simply, ideas:

- Excite me.
- Satisfy me.
- Intrigue me.
- Challenge me.
- Help me learn.
- Help me grow.

# Hands-on was quite appealing when I was younger

When I first got started with (classical) computing in high school in 1970, theory and ideas were of no real interest to me. The whole appeal of *writing a computer program* was that you were actually *creating* something — not a living creature, but you could cause a dumb machine to behave in a manner that you could control, even if it simply read in a few numbers, did a little math, and printed the results. It was awesome! Being hands-on was a real thrill.

Being hands-on gave me a sense of *accomplishing something*.

That initial intensity lasted about five years for me. The thrill itself lasted much longer, but not with the same intensity.

The thrill waxed and waned over the years. There were times when it was super-intense, times when it was completely absent, and everything in between.

# My interest lies in more than what I can accomplish by myself with my own hands

I have no idea how many computer programs, applications, tools, or lines of code I developed over the decades of my career. Wow, it’s been fifty years now!

But somewhere along the way I lost interest in “*just coding*.” Seeing another program run was no longer an exciting moment for me. The only remaining thrill was checking the box that I had accomplished the task (and hopefully getting paid for it), and moving on.

I’m not sure how I was able to last as long as I did!

But my passion for ideas, working with ideas, has not diminished even the slightest over recent decades and years. If anything, that passion and excitement has only intensified. Ideas are everything to me.

In fact, I find myself pursuing philosophical and non-technical real-world issues precisely because they raise *big idea* issues that I don’t generally run into with computing.

I’ve also found myself much more interested in *physics* than when I was younger. That dovetails nicely with quantum computing.

To me, ideas are big. Granted, some ideas are actually small or modest in size, but those that aren’t big just don’t interest me.

I may not have the ability (or interest) to actually implement many of the big ideas that come into play with quantum computing or artificial general intelligence, but simply *working with the ideas* is all that I am after. Let somebody else — the young kids, like I was in high school, college, and my early career — roll up their sleeves and *implement* the ideas.

Working with big ideas is what excites, challenges, and satisfies me. That’s what I really want to be doing, even if it means I can’t get paid for my time and effort.

# My mission

It didn’t take me long to realize that one of the foremost issues confronting quantum computing is *rampant hype*.

It’s not a significant exaggeration to summarize the hype as claiming:

**Quantum computing can solve every computing problem which classical computing cannot solve today.**

It would be excusable to have a knee-jerk reaction and push back that that claim is clearly *categorically false*, but given my technology background, my stronger inclination is to take an analytical approach and *assess* the claim on a technical basis.

So the strategic objectives of my assessment are:

- Confirm whether quantum computing is real or just hype — which parts are more real, and which parts are more hype.
- Assess how close the more realistic claims are to being true.
- Identify the qualities, functions, and features of the more real aspects of quantum computing.
- Pin down what exactly makes the hype impractical.
- Determine what *real* really means. Such as timing — today, next year, two years, five years, or… when? Will it perform as advertised? Will it be readily and widely available or only to a chosen few?
- Judge whether it will really be worth the wait. What will the net *quantum advantage* amount to?
- Determine whether quantum computing will apply to a very broad swath of applications and application categories or just to relatively narrow niche use cases.
- Share and communicate my findings and chronicle my journey, primarily through my writing.

# Grandiose claims

Quantum computing is plagued by *grandiose claims*, what we call hype. I won’t try to catalog all of the many grandiose claims, but a few merit immediate attention:

- Quantum computing can solve every computing problem which classical computing cannot solve today.
- Quantum algorithms deliver an exponential advantage over classical algorithms.
- Quantum computers are here today.
- Quantum computers will be ready for production applications very soon.
- Quantum computers will be ready to deploy production applications within two years.
- Classical computing has hit a wall, and only quantum computing can solve many hard problems.
- It’s easy to program a quantum computer — anybody can do it.
- Quantum computing is poised to fundamentally transform the way businesses solve critical problems, leading to new efficiencies and profound business value in industries like transportation, finance, pharmaceuticals and much more.
- Quantum computing is poised to change everything.
- Quantum computing is poised to change our digital world.
- Quantum computing is poised to take a quantum leap.
- Quantum computing is poised to transform our lives.
- Quantum computing is poised to fundamentally disrupt almost every aspect of our daily lives.
- Quantum computing is poised to transform the global computing market.
- Quantum computing is poised to upend entire industries from telecommunications and cybersecurity to advanced manufacturing, finance, medicine, and beyond.
- Quantum computing is poised to change artificial intelligence and machine learning — forever.
- Quantum computing is poised to create a paradigm shift in flight physics.
- Eight things software engineers need to know about how quantum computing is poised to change the world.
- Quantum computing is poised to impact many significant markets.
- Quantum computing is poised to be a driver of innovation in the next decade.
- Quantum computing is poised to make classical computers look like sloths — and launch new, never-imagined processes.
- Quantum computing is poised to disrupt traditional computing methods.
- Quantum computing is poised for takeoff in industries from medicine to finance.

Sad to say it, but most of those claims are literally taken from Google search results when I searched for “*Quantum computing is poised*”.

It’s not that quantum computing won’t in fact do some or even many of these things, but that it is being claimed *with certainty* that it will do all of them.

Part of my mission is to dispel a lot of the hype, especially for such grandiose claims.

Oh dear, literally, just as I finished writing this section I saw this on Facebook:

- “*Fact: Quantum computing will make you more fun at parties.*”

Okay, that Facebook post did continue on… “*Ok, fact is a strong word. STILL, you could build your own applications with access to our quantum cloud. That’s hot.*” That’s where the sector is as of August 30, 2020.

# I have no hands-on involvement in production or application of quantum computing

As I’ve already noted, I am merely a technologist and an idea guy, so I don’t have a job, position, or role in the quantum computing field or sector. In particular, I don’t:

- Develop quantum computers. Not in theory, laboratory research, or product engineering.
- Develop applications for quantum computers.
- Use applications of quantum computers.
- Have any involvement in marketing, sales, or support of quantum computers or tools or applications for them.

# Career in quantum computing?

I began my quantum journey focused only on deeply understanding the technology, not having any intention to actually pursue a career in quantum computing.

Occasionally during my journey I would ponder the possibility of pursuing some sort of income-producing opportunity in the sector, but nothing ever stood out as appealing or appropriate for my interests, background, ability, and skills. I was really solely interested as a technologist and idea guy. Quantum computing became an area of *fascination* for me, not a true career.

To some extent my interest in quantum computing from primarily the perspective of an idea guy is a bit of a *luxury* — I’m free to do what I want and focus on what I want without having to answer to anyone, no boss, no managers, no shareholders, no customers. The downside — no paycheck or stock options.

# My quantum journey timeline

Now for the gory details of my journey. I wanted to highlight all of the interesting little milestones which marked my journey into the world of quantum computing. I don’t include the general milestones of quantum computing itself, except where I have some personal angle to note. Subsequent sections will explore some of these areas in a little more detail, but the thumbnail summaries here should suffice:

**Before 1990**

- I have no formal schooling in quantum computing — or quantum mechanics.
- I have no quantum experience — quantum computing or quantum mechanics — or even exposure to speak of either in high school or even in college.
- Actually, I briefly considered taking a graduate math course in linear algebra in college, but I didn’t see it as particularly relevant to my main interests in computer science — and I was put off by odd terms such as *eigenvector*, *eigenvalue*, and *Fourier transform*. I wasn’t even aware that it was essential for quantum mechanics, which wasn’t on my radar back then anyway.
- I did take a few courses in calculus and differential equations in college, but none of that interested me at the time.
- I did take a few courses in probability and statistics in college. They gave me some background, but the material didn’t really interest me at the time — and I never had any use for it in the vast bulk of my career as a software developer.
- I was familiar with matrix math and complex numbers from high school, but I had no reason to use them in my pre-quantum career — with the exception of coordinate transform matrices for graphics rendering.
- Incidentally, my main focus in college was programming languages, compilers, operating systems, system software, and computer engineering. I did take several physics courses (but nothing related to quantum) as an undergraduate, but none of that interested me at the time.
- No quantum experience or even any significant exposure to speak of during most of my career — until 2017.
- A few vague recollections of *hearing about* quantum computing in the 1980’s and 1990’s.
- After my failed attempt at Feynman, I pretty much pushed quantum computing aside as being a vague future, unworthy of my attention.
- Odd, cryptic, and poorly-defined terminology and heavy reliance on Greek symbols did nothing to make quantum computing more appealing. If not for these key deficiencies, I might have gotten more deeply into quantum computing at an earlier stage. Especially if interactive simulators had been available much sooner as well — lack of real hardware was not the critical stumbling block.
- And there was so much exciting progress in classical computing in the 1970’s and 1980’s, both hardware and software, so quantum computing was unnecessary overkill for most applications, at the time.

**1990’s**

- Vague awareness of occasional technical media coverage of research advances in the 1995 to 2005 period. But it was all still a vague future. Plenty of theory and basic research, but still no real machines — or imminent prospects. Too many little bits and pieces but no strong hint of a full and usable quantum computer any time soon.
- Besides, there were too many exciting advances going on in classical computing. See a section on that later in this paper.
- Back in the 1990’s, every time I read something about quantum computing, I wondered about things like whether quantum computers supported floating point numbers and what would happen if you tried to compute 1/3 (an infinite repeating decimal), pi (infinite non-repeating digits), or the square root of 2 (also infinite non-repeating digits). There were no ready answers back then.
- Back in the 1990’s it was never clear what gave quantum computing its advantage.
- It was supremely odd to me how people would make a really big deal of even a few qubits — and they did an extremely poor job of explaining how just a few qubits could be used to compute solutions which might take hundreds or thousands of bytes of classical code, and be much faster than the classical code as well.
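With hindsight, the missing explanation is easy to state, even if its consequences are subtle: a full classical description of an *n*-qubit register requires 2^n complex amplitudes, so the state space grows exponentially with each added qubit. A minimal sketch of that arithmetic (plain Python, no quantum libraries; the function name is my own, purely illustrative):

```python
# Illustrative only: a full classical description of an n-qubit quantum
# state is a vector of 2**n complex amplitudes (whose squared magnitudes
# sum to 1). This is why "just a few qubits" is less trivial than it sounds.

def amplitudes_needed(n_qubits):
    """Number of complex amplitudes in the state vector of n qubits."""
    return 2 ** n_qubits

for n in (5, 16, 50):
    print(f"{n} qubits -> {amplitudes_needed(n):,} amplitudes")
```

A 50-qubit state already involves over a quadrillion amplitudes, which is part of why physicists got so excited over qubit counts that looked laughably small to a software developer.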

**2000–2005**

- Sometime in the late 1990’s to early 2000’s I read something attributed to Feynman — that it is very hard for us to simulate *n-body problems* with traditional computers, even 3-body problems, but that nature is able to do so instantly, and this was his rationale for promoting quantum computing. That was finally some insight that stuck with me, from that moment on, through today. Unfortunately we can’t actually do that with a quantum computer — as they are envisioned today — but the sentiment was appealing. I haven’t been able to find that quote on the Internet, try as I may.
- I vaguely recall the 5-qubit NMR-based quantum computing experiment in 2000/2001 or so (*Experimental Realization of an Order-Finding Algorithm with an NMR Quantum Computer* and *Quantum Computing and Nuclear Magnetic Resonance*), but, seriously, who could get terribly excited by 5 bits — other than a physicist?

**2006–2015**

- I vaguely recall D-Wave Systems announcing a 16-qubit quantum computer in 2007, but there was skepticism about whether it was a true quantum computer (skepticism which persists through today!). But the bottom line was that 16 bits meant nothing to me.
- I can’t recall for sure whether I read the press coverage of Google using the D-Wave system to do image recognition. I just found this Google blog post dated December 2009: *Machine Learning with Quantum Algorithms*. It was a big advance — to 128 qubits — but once again it meant nothing to me. Yes, it was progress, but too little to matter — to me.
- D-Wave has had further improvements — 1024, 2048, and now 5000 qubits — but they were and remain dogged by questions about whether their machine is merely a special-purpose computer rather than a true general-purpose quantum computer. In fact, I wonder whether it is more of an analog computer than a digital computer.
- I recall trying to read a little about quantum computing on a few occasions, but I always found it very cryptic and opaque so that it made no sense to me. Superposition of 0 and 1 and entanglement of qubits didn’t really give a deep sense of the nature of the power of quantum computing.
- I recall reading mention of the Grover “database” search algorithm, which certainly sounded intriguing, but it was too vague and there was no real machine to run it on anyway. I recall one news article opining that Google could replace its entire search engine with Grover’s algorithm — yeah, right.
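For what it’s worth, the intrigue around Grover’s algorithm boils down to query counts: unstructured search over N items needs about N/2 lookups classically on average, versus roughly (π/4)·√N quantum iterations. A back-of-the-envelope comparison (plain Python; the function names are my own, purely illustrative):

```python
import math

# Illustrative only: expected query counts for finding one marked item
# among n, classically versus with Grover's algorithm.

def classical_expected_queries(n):
    """Average classical lookups to find one marked item among n."""
    return n / 2

def grover_iterations(n):
    """Approximate optimal number of Grover iterations: (pi/4) * sqrt(n)."""
    return math.floor((math.pi / 4) * math.sqrt(n))

n = 1_000_000
print(f"classical: ~{classical_expected_queries(n):,.0f} queries")
print(f"grover:    ~{grover_iterations(n):,} iterations")
```

That’s a quadratic speedup, not an exponential one — impressive, but a long way from replacing a search engine, which isn’t an unstructured search problem anyway.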

**2016**

- I saw mention of Rigetti in 2015 or 2016 and was quite surprised that they were actually building a real, functional quantum computer, but I basically just skipped over those reports since it sounded as if they weren’t far enough along to have a big commercial success any time soon. It still seemed like more of a research project. Four years later, they still aren’t much closer, despite having made a lot of progress.
- When IBM announced their *Quantum Experience* service in May 2016, it suddenly seemed a little more noteworthy — but only a little. It was still only five qubits, so it wasn’t enough to draw me in. But it did seem more professional — like when IBM introduced their personal computer — less like a toy or novelty. It actually seemed real, although it was still clearly a *laboratory setup*, not a commercial product. The service seemed more like a commercial product, but the quantum computer itself just looked like it belonged in a lab, not a data center. The IBM *Q System One*, introduced in 2019, actually looked like a professional commercial product, but that’s not where IBM was in 2016.

**2017**

- In May 2017 IBM announced 16-qubit and 17-qubit processors. I noticed that, but again it just wasn’t noteworthy enough to capture my attention.
- Then, finally, in November 2017 IBM announced both 20-qubit and 50-qubit processors. That was still not enough to grab my full attention, but 50 qubits was at least tantalizing. In fact, that was the moment when I decided to try to focus *some* of my attention on quantum computing and try to figure out what it was really all about. **This was my watershed moment, so to speak.**
- Contrary to previous press releases, which I quickly scanned and then moved on from, I carefully scrutinized every section, every paragraph, every sentence, every clause, every phrase, and literally every term and word of this latest IBM press release. I began keeping a list of all of the technical terms which were unfamiliar to me, which eventually became my online glossary, which now has over 3,500 terms.
- Once again I tried to do a little reading, but found the material very cryptic and vague, and not insightful in the least. Very frustrating.
- My first stop was the Wikipedia article on Quantum computing, but I found it virtually useless for getting a crisp handle on quantum computing. I understood a moderate amount of what it was saying, but it just wasn’t very satisfying. It didn’t provide me with an “Aha!” moment, just plenty of “So what?” moments.
- Even with IBM’s November 2017 announcement, I wasn’t yet deeply committed to quantum computing. Maybe part of that was that I was still more focused on artificial intelligence at that time.
- So for the remainder of 2017 I kept a closer eye on quantum computing, but I wasn’t fully committed, yet.
- Early on in my journey I stumbled across the *Quantum Algorithm Zoo*, an interesting catalog of quantum algorithms. It sure seemed comprehensive, and I personally used it some in my early quantum days, but it’s very oriented towards algorithms from academic publications, and not very oriented towards solving real-world problems today.
- I had been aware of and intrigued by Microsoft’s announcement of the Q# (“Q Sharp”) programming language tailored to quantum computing in September 2017, delivered in December 2017, but the documentation I briefly perused at the time didn’t persuade me that it was going to make a dramatic difference, at least to me. I don’t recall exactly when I first looked at it, but I did look a little closer in early 2018, and my opinion didn’t change. Even today, in 2020, I’m not persuaded that it adds enough value beyond what people can do with Python libraries, which is what everybody besides Microsoft is using.
- I’m not sure exactly when I first heard the term *Quantum Ready*, but I do recall that it was associated with IBM, such as this IBM blog post in December 2017 — *Getting the World Quantum Ready*. I do recall feeling that it seemed a bit odd, since even in late 2017 quantum computing certainly felt a long way from being even remotely *ready* for development and deployment of production-scale real-world applications. The technology wasn’t ready, wasn’t close to being ready, and it wasn’t even close to time for software developers to even think about getting ready to use a future version of the technology — which doesn’t exist even today in 2020, nor is it likely to be ready in the next couple of years.
- I closed out 2017 resolved to dig deeply into the theory, the reality, and the promise of quantum computing in 2018.

**2018**

- I started from the beginning of 2018 with a focus on digging deep into quantum computing.
- I started poking into virtually everything I could find on the topic of quantum computing.
- After careful thought in early 2018, I concluded that the only way I could really understand what quantum computing was all about, and how much of the hype was real, was to start by building myself a foundation of understanding by diving first into *quantum mechanics*, the underlying physics. I poked around online and found several free online undergraduate courses at MIT. I plodded through the videos and lecture notes for the first half of 2018. My original intention was to just go through one course to get the basics, but I ended up going beyond that.
- My budget for training and education in quantum? $0. Absolutely zero. There was clearly a lot of online information available for free. If I were an employee at a Fortune 500 company, sure, then I would have considered some expensive courses or professional training and a shelf full of expensive books, but I’m not, so I didn’t. For example, MIT xPro Quantum Computing Fundamentals — for only $2,149. Others might not follow my path.
- In fact, my budget in general for quantum was and remains $0. Absolutely zero. That includes courses, seminars, conferences, travel, books, journals, online archives, subscriptions, etc.
- I did intend to hold off on digging deep into quantum computing itself until I first got a handle on quantum mechanics. For the most part I stuck to that approach.
- I would on occasion take a peek at Wikipedia articles on aspects of quantum computing. But just a peek — the dense and cryptic nature of the material always pushed me away. I really needed the quantum mechanics first.
- I personally learn better by bits and pieces, collecting puzzle pieces and gradually fitting them together. Linear reading, lectures, and videos aren’t optimal for me. Sometimes it’s advantageous to go deep early, while other times it works better to stay shallow and only go deep once a sufficient breadth of material has been covered, to get a perspective on how the many pieces of the puzzle fit together. There is no one-size-fits-all approach.
- Even at this early stage in early 2018 my real interests in quantum computing were clear to me: as a technologist, I sought to deeply comprehend its capabilities, limitations, and issues, although I didn’t explicitly and publicly articulate those interests until June 2020: *My Interests in Quantum Computing: Its Capabilities, Limitations, and Issues*.
- One of the things I decided to do in the winter and spring of 2018 was to go back and review the early history of modern electronic digital computers (especially in the 1940s and 1950s) to try to identify factors and trends which could be abstracted and might offer some guidance for evaluating how quantum computing might progress. I posted my notes as *Knowledge Needed to Deeply Comprehend Digital Computing* (February 2018), *What Knowledge Is Needed to Deeply Comprehend Quantum Computing?* (March 2018), and *Criteria for Judging Progress of the Development of Quantum Computing* (March 2018).
- Occasionally during 2018 I would ponder the possibility of pursuing some sort of income-producing opportunity in the emerging sector of quantum computing, but nothing ever stood out as appealing or appropriate for my interests, background, ability, and skills. I was really just interested as a technologist and an idea guy — not as a hands-on practitioner. Quantum computing became an area of *fascination* for me, not a true career.
- To some extent my interest in quantum computing primarily from the perspective of a technologist and an idea guy is a bit of a *luxury* — I’m free to do what I want and focus on what I want without having to answer to anyone: no boss, no managers, no shareholders, no customers. It certainly gives me a lot more flexibility than individuals with specific roles have, letting me go in a wide variety of directions at my own discretion.
- I finished viewing two of the three undergraduate MIT courses on quantum mechanics (8.04 Quantum Physics I (Spring 2013) and 8.04 Quantum Physics I (Spring 2016), which had additional material) by the middle of spring 2018. I had originally intended to go through only the first course since I really only wanted the basics, but the material was much harder than I expected, so I felt that I needed the extra course. I was able to pick up enough *linear algebra* from these courses that I didn’t need to take a full course on linear algebra or read a book on it.
- Probability and uncertainty always felt natural to me — not a foreign concept — even before I ever heard about quantum anything, so once I grasped that these two concepts were fundamental to quantum mechanics and quantum computing, I felt more comfortable with the quantum world.
- The lack of strict determinism in quantum computing may put off a lot of people, but it actually felt natural to me. Even with classical computing there are limits to determinism. Even classical computing has adopted a variety of statistical methods to cope with real-world phenomena which are more statistical than deterministic. Monte Carlo simulation is a popular approach for solving very complex problems using statistical sampling. Besides, statistical approximations are good enough for many applications.
- Having at least a partial understanding of quantum mechanics in the Spring of 2018 gave me the confidence to start reading up more on quantum computing — whatever I could find online.
- Sometime in 2018 I became aware of Prof. Scott Aaronson’s blog, which focuses on quantum computing. I’ve occasionally found it interesting and useful, mostly for specific technical issues, but overall I don’t read it on a regular basis.
- Overall, I haven’t been reliant on blogs to any significant degree. On occasion something interesting and useful will show up on a Google search or a link on a LinkedIn post, but that’s the exception rather than the rule, at least for me.
- Occasionally I have found something interesting on quantum computing on Medium.com, but not very often. I do post all of my own writing on Medium.
- I haven’t found most of the online videos on quantum computing or quantum mechanics — other than those of the MIT quantum mechanics courses — to be of any significant value, to me, at least. Maybe I’ve just gotten enough from online text not to need videos. Besides, I currently find academic papers far more interesting and informative.
- A key conceptual breakthrough for me in 2018 was grasping the notion of the Hadamard transform, which places a qubit into an equal-probability superposition of 0 and 1, enabling quantum parallelism with 2^n quantum states for n qubits. Finally, a hint as to where the power of quantum computing actually comes from!
- Meanwhile, my list of terms related to quantum computing and quantum mechanics was growing longer and longer, literally every day. I finally decided in June 2018 to start turning the list into a glossary. At first I was simply going to list the terms, but I was far enough along that I felt comfortable writing up definitions for at least some of the terms.
- I found a lot of interesting and useful information in IBM’s Qiskit and from IBM in general. It had its limits and issues, but it did in fact help me get going. If only the current Qiskit Textbook had been available back in 2018 or even 2017, it would have helped accelerate my learning.
- I finished viewing the third undergraduate MIT quantum mechanics course over June and July 2018 (8.05 Quantum Physics II (Fall 2013).) Some of the material was finally starting to sink in in a more meaningful way. This gave me the confidence to flesh out my glossary. And when I got stuck on a glossary entry, that was simply an incentive to dig deeper, which enhanced my understanding even further. I had originally intended to go through only a single course, but I ended up feeling that I really needed the extra depth to really make sense of the physics underlying quantum computing.
- I was disappointed that MIT didn’t have an online quantum computing course sequence comparable to the quantum mechanics courses. I did find one MIT graduate quantum computing course, by Peter Shor, but it had no videos and most of the lecture notes were missing, so it wasn’t terribly useful to me. I was surprised that they didn’t have more.
- I was also disappointed that I couldn’t find online graduate-level MIT quantum mechanics courses.
- Somehow, by the spring and summer of 2018 I had finally accumulated enough of an understanding of quantum computing and quantum mechanics that I was finally able to read Wikipedia articles related to quantum computing without flinching, cringing, or otherwise reacting in a very negative manner. This is horrible and backwards — Wikipedia should be the place you can reliably depend on to get started with any topic.
- One of the early lessons from the MIT quantum mechanics courses was the *need to develop intuition* for the quantum world. That’s a constant struggle, but worth every ounce of effort. It’s not enough to simply memorize the rules; you have to understand the rules deeply enough that you could re-derive them or even revise them as needed. Put simply, you need to understand the material well enough that it actually *makes sense* — at a gut and intuitive level.
- I would have loved to go back and read the early quantum computing papers from the 1980s for deep background, but they predate the Internet — and they’re mostly locked behind academic journal paywalls, so there is no free online access. Oh well.
- By the middle of 2018 I concluded that all of the Greek symbols and arcane terminology of quantum computing, much of it inherited from or influenced by quantum mechanics, was very counterproductive — and completely unnecessary, in my opinion. I kind of knew that when I was getting started earlier in 2018, but I initially presumed that it was just a learning-curve thing. Actually, I knew it many years earlier when I first started reading about quantum, and it was a big part of what kept me away from quantum for so long.
- The steady flow of press releases was also a source of incentive for me and helped flesh out my growing glossary. I would read virtually every quantum-related press release word by word, writing down terms and phrases that I might not understand fully and adding them to my glossary, either with entries I developed by further reading, or with a TBD — To Be Determined — if I couldn’t quickly arrive at a reasonable entry.
- I started paying a lot more attention to Rigetti in the Spring and summer of 2018. They seemed to be coming on strong, with a decent 19-qubit machine available for remote access.
- I posted my initial cut at my glossary — with over 2,000 entries — late in July 2018. I’ve been updating it fairly regularly since. It has over 3,500 entries as of the moment I am writing this.
- By August 2018 I was beginning to feel that I had a semi-decent grasp of quantum computing.
- By the middle of the summer of 2018 I had enough background to feel comfortable starting to do some serious writing about quantum computing. See *List of my writing on quantum computing*. Everything is posted on Medium.com. I refer to everything I post on Medium as an *informal paper*. I’m certainly not adhering to strict academic journal standards, but I’m not writing mere blog posts either.
- Since it was now clear to me that limited hardware and primitive algorithms were holding quantum computing back, in August 2018 I posted my thoughts on the topic: *The Greatest Challenges for Quantum Computing Are Hardware and Algorithms*.
- One intriguing possibility which I highlighted in that paper was the prospect of *quantum-inspired algorithms* and *quantum-inspired computing*. The essence is that the design of great quantum algorithms requires incredible *out-of-the-box thinking*, and once you’ve done that you may in fact be able to implement a similar approach on a classical computer which is much more efficient than traditional approaches to the design of classical algorithms.
- Another notion that I explored in that paper was that the essence of designing a quantum algorithm is to map a problem to a solution based on the *raw physics of quantum mechanics*, in contrast to classical computing, where most algebraic equations can be directly mapped to comparable mathematical operations on a classical computer.
- One of the key concepts I learned about quantum algorithm design in 2018 was that whether designing an algorithm for a quantum computer or for quantum-inspired computing, a key technique is *reduction* — reducing a problem from a more complex form to a simpler form. Shor’s factoring algorithm does exactly that — reducing the factoring of semiprime numbers to the simpler and more readily-computed problem of order-finding.
- It became clear to me during the spring and summer of 2018 that quantum computers, as then envisioned, were intended more as *coprocessors* to perform a very limited amount of the overall computation of an application, with the bulk of the application running as classical software.
- The more I thought about it, the more I realized that what was really needed in the long run was to fully and tightly integrate classical and quantum computing so that the quantum portions of applications could be invoked much more efficiently. I wrote up my ideas and posted them at the end of August 2018: *What Is a Universal Quantum Computer?*. That’s not something which will happen any time soon, but it is what is needed, eventually.
- I actually didn’t start on my quantum journey with the explicit intention of doing much writing — my main interest was simply to understand the concepts. But as I began to get into it, it just felt natural to write about what I had learned; what questions, issues, challenges, opportunities, and limitations I had identified; and to speculate about the future.
- Intel made announcements of working on a quantum computer chip in the fall of 2017 and winter of 2018, but so far that promise has not yet been fulfilled with a functioning quantum computer. I noticed these announcements, but they just didn’t register with me.
- Microsoft also made numerous announcements in the quantum computing field over the years, but little of this registered with me. They promised to produce quantum computer hardware, but so far they have not fulfilled that promise. So far all they have done is work on tools, simulators, and access to the quantum hardware of others.
- Early in my reading on quantum computing I became aware of the *Bloch sphere*, and although it was a good introductory method for visualizing some of the basics of rotating quantum state around three axes for single-qubit gates, I quickly discovered that it had two major shortfalls: 1) it didn’t represent probability amplitudes, at least in a direct sense, and 2) it couldn’t represent the combined quantum state of two or more entangled qubits. Also, I wasn’t pleased with the way it represented a superposition as a single unit vector rather than two distinct basis vectors, each with its own probability amplitude. And I wasn’t happy with the convention that the ground state, |0>, is up, the north pole, while the excited state, |1>, is down, the south pole.
- As I was googling terms for my glossary, I discovered a wide range of sources of information on quantum computing, including academic papers, college course lecture notes, blog posts, Wikipedia articles, and even a few (illegally bootlegged) online books. There was no shortage of material, but there were a lot of inconsistencies and gaps in how it was presented. I had to consult a large number of sources to develop a semi-coherent overall view of the field.
- At one point I discovered the *Quantum Computing Stack Exchange*, with questions and answers, but I found the utility and quality of the answers to be insufficient for my purposes. That was a major disappointment. Ditto for Quora.
- Books on quantum computing and quantum mechanics are not a significant source of information for me. Lecture notes, academic papers, and web sites have most of what I need and have used.
- It didn’t take me long to develop a strong commitment to free and online text, media, and code — free papers (preprints on arXiv), specifications, documentation, books, videos, lectures, lecture notes, academic papers, and GitHub for code and project files. If it isn’t online, costs money, is hidden behind a paywall, or requires registration, then it doesn’t exist as far as I am concerned.
- I’ve run into a number of projects which use GitHub as a code repository — but not as many as I would have liked. This is probably the best way to share code, and it’s an easy way to browse projects.
- In August 2018 Rigetti announced that they expected to have a 128-qubit machine within a year. That got me excited and motivated. It seemed as if quantum computing was finally on the verge of a real breakout. Unfortunately, that was *two* years ago, there is still no sign of it, and their current best offering is a 32-qubit machine. My enthusiasm has since been tempered.
- Sometime in the summer of 2018 I finally had the background and courage to actually read Feynman’s famous paper on quantum computing — *Simulating Physics with Computers* (sorry, but there is no reliable link to the full paper available since it’s published in a paywall-protected journal). It finally made a little sense. It gave me some insight into his intentions — he wasn’t simply intending to make a faster computer, but a computer focused on simulating physics, especially quantum mechanics.
- I’m not sure exactly when I first became aware of the concept of *NISQ devices* or their significance — sometime in 2018. But even today I remain skeptical whether the concept is helpful or harmful. The most positive thing I can say is that it is reasonably descriptive — and that it will continue to be relevant for some years to come.
- I also can’t recall for sure when I first became aware of Prof. John Preskill — the Richard P. Feynman Professor of Theoretical Physics at Caltech — and his pioneering work in quantum computing. He coined the term *NISQ device*, so it may well have been at the same time I heard about NISQ. I did reference his name once in a paper I posted at the end of August 2018, so I was certainly aware of him before then.
- By the summer of 2018 I had finally read enough detail about *Grover’s search algorithm* to realize that much of the hype really was just hype. The algorithm was billed as being able to search a “database”, but a database is highly structured, while the actual Grover’s algorithm searches only linear unstructured data. A little bit of disenchantment began to set in on my part. At least there was still *Shor’s algorithm*, capable of factoring even the largest of public encryption keys — the veritable *Mount Everest* of early quantum computing algorithms to captivate all of us. Or at least that’s what I thought before I dug into Shor’s algorithm.
- In the late summer of 2018 I finally felt comfortable trying to tackle Shor’s algorithm for factoring large semiprime numbers — not because I cared about cracking large public encryption keys, but because it seemed as if almost every academic paper touted the algorithm as the peak of quantum computing, so I figured that I needed to comprehend how it worked. I started by reading the original paper — or at least the preprint on arXiv.
- Initially I was very impressed by Shor’s algorithm, but the more I dug into it the more skeptical I became. The paper had too many gaps, leaps, and hand waves, and lacked crystal clarity and specificity for my taste. And, worst of all, I grew concerned that it was not clear whether it would really work for very large numbers, due to practical limitations such as phase granularity and concern about how many circuit repetitions might be needed to get statistically valid results. In other words, I had finally arrived in the world of pragmatic considerations for quantum computing!
- By the end of September 2018 I posted a list of all of my open questions about Shor’s algorithm: *Some Preliminary Questions About Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*. Most of them are still open, from my perspective. That paper included a lot of references for Shor’s algorithm for anyone wishing to dig deeply into it, including variations on the original algorithm and attempted implementations.
- In October 2018 I posted an abbreviated summary of the various pieces of Shor’s algorithm: *Ingredients for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*.
- In October 2018 I combined all of my questions about quantum computing into a single paper, *Questions About Quantum Computing*. My intention was to eventually turn that into an FAQ, but I haven’t gotten around to it yet. I don’t want to start that task until I have a much higher level of confidence that I have 100% correct answers. Quite a few of the questions on that list date back to early 2018, before I knew much at all about quantum computing.
- By the fall of 2018 I had gotten tired of the mediocre and spotty quality of the documentation and technical specifications — or their complete absence — for the various quantum computers which were now publicly available. So, I wrote up a proposed framework for documenting the *principles of operation* for quantum computers — all of the information which a quantum application developer would need to fully exploit the hardware.
- People are always chattering about *quantum advantage* and *quantum supremacy* and how long it might take to achieve either, but in November 2018 I realized that there is one key technical area where even the simplest of today’s quantum computers can achieve *quantum advantage* right now: *generation of true random numbers*. A classical Turing machine cannot “compute” a random number since true random numbers are not “computable”, even in theory, using a Turing machine. The best we can do on a classical computer is generate *pseudo*-random numbers, or access external special-purpose hardware to gather *entropy* from the physical environment. But any quantum computer can trivially generate random bits by simply executing a Hadamard gate to put a qubit into a superposition of 0 and 1, and then measuring the qubit to cause the wave function to collapse to a 0 or 1, effectively generating a random bit. I wrote about this in November 2018: *Quantum Advantage Now: Generation of True Random Numbers*.
- In the late fall of 2018 I noticed an announcement by IonQ, a quantum computing startup focused on trapped ions for qubits. That only added to my excitement and enthusiasm, and expectations of a real breakout for quantum in 2019 — which unfortunately didn’t happen. Progress continues, but hardware is proving to be a fair amount more challenging than it seemed in 2018, two years ago.
- At some point in 2018 I became aware of Xanadu, a Canadian startup, which was working on photonic (optical) quantum computing, using squeezed states of light and continuous value (CV) quantum states with qumodes rather than qubits. It sounded very intriguing, especially the prospect of operating at room temperature, but try as I might, I wasn’t able to figure out how real their hardware was. They do have software packages for AI, particularly machine learning (PennyLane), and a quantum simulator and interface library (Strawberry Fields), but it was the hardware I was interested in. I’ve seen them in the news a bunch of times over the past two years, but there hasn’t been any clarification of how the hardware is going.
- At some point in 2018 I became aware of theoretical physics researcher David DiVincenzo and his proposed *five criteria (requirements) for a quantum computer*, from his seminal paper of 2000: *The Physical Implementation of Quantum Computation*.
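Two of the bullets above — the Hadamard transform creating an equal superposition, and true random number generation by measuring that superposition — are easy to verify with a little classical linear algebra. Here is a minimal NumPy sketch (my own illustration, simulating the ideal math rather than running on real hardware or any quantum SDK):

```python
import numpy as np

# Qubit state |0> as a vector of two probability amplitudes.
ket0 = np.array([1.0, 0.0])

# The Hadamard gate: maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

psi = H @ ket0                 # amplitudes: [1/sqrt(2), 1/sqrt(2)]
probs = np.abs(psi) ** 2       # Born rule: measurement probabilities

# "Measuring" collapses the qubit to 0 or 1 with those probabilities --
# on ideal hardware, a true random bit each time.
rng = np.random.default_rng()
random_bits = rng.choice([0, 1], size=8, p=probs / probs.sum())

# Applying H to each of n qubits puts the register into a uniform
# superposition over all 2^n basis states -- "quantum parallelism".
n = 3
state = ket0
Hn = H
for _ in range(n - 1):
    state = np.kron(state, ket0)   # build the n-qubit |00...0>
    Hn = np.kron(Hn, H)            # H applied to every qubit
uniform = Hn @ state               # 2^n amplitudes, each 1/sqrt(2^n)

print(probs.round(3))   # [0.5 0.5]
print(len(uniform))     # 8 = 2^3 states for 3 qubits
```

Of course, the classical `rng.choice` call is doing the sampling here, which is precisely the point of the *Quantum Advantage Now* argument: a classical machine can only imitate the measurement with pseudo-random numbers, while a quantum computer gets the randomness from the physics itself.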

**2019**

- IBM announced the IBM *Q System One* in January 2019. This system actually looked like a professional commercial product, as opposed to all of their previous quantum computers, which just looked as if they belonged in a lab, not in a commercial data center. Sure, it now *looked* professional, but to me that was only a superficial *skin* over the real hardware, which still seemed as if it belonged in a lab. The IBM press release didn’t even mention how many qubits the machine had, although I read elsewhere that it had 20 qubits. And they announced and showed the machine at the Consumer Electronics Show of all places, not an enterprise IT convention. That really stood out as odd in my mind — marketing out of control. In any case, I merely rolled my eyes on this one — many more qubits or much longer coherence time would have raised my eyebrows, but flashiness and visual appearance alone is a non-starter for me.
- I was very intrigued by a preprint paper posted by IonQ in February 2019: *Ground-state energy estimation of the water molecule on a trapped ion quantum computer*. I thought it showed a fair amount of progress both with hardware and algorithms. To me, it set a fairly high bar. Google may have advanced above that bar with their paper, *Hartree-Fock on a superconducting qubit quantum computer*, in April 2020, but the IonQ paper set the initial bar. It turns out that IBM had already set the bar anyway back in 2017 with their paper, *Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets*, in April 2017, but that predated my dive into quantum computing, so I hadn’t noticed it until I saw the reference citation in the Google paper. But from February 2019 until April 2020 at least I had the IonQ paper to establish a foundation of what was possible.
- In March 2019 IBM announced that it had a new quantum computer which had *double* the *quantum volume* of its previous quantum computers. That sounded great, but what exactly was quantum volume, and what was the effect of doubling it? In March 2017 IBM had introduced a metric for measuring the net power of a quantum computer, called quantum volume, which combines number of qubits, coherence, gate errors, and connectivity into a single numeric metric. Sounds like a great idea, but… technically it is completely useless, since there is no way to derive from that single number any useful metric which can be used by an algorithm designer or an application developer. I vaguely recall seeing mention of this concept previously, but it seemed too vague and unclear how to use it, so I ignored it. But now, with this announcement, I started looking into it more closely, including the technical paper, posted in November 2018 — and I didn’t like what I found, as I just noted. Basically, it’s just more marketing hype.
- In March 2019 I posted a list of the major areas of my own lingering uncertainty about quantum computing: *Lingering Obstacles to My Full and Deep Understanding of Quantum Computing*. Many of the items are still open, although I have made some progress. In a lot of cases I do know the basics, at least at a superficial level, but I really want to understand quantum computing at a deeper, intuitive level.
- I spent most of 2019 waiting for major breakthroughs on a regular basis, especially those *promised* in 2018, but that just didn’t happen. More incremental progress. More delays. Lots of disappointment.
- Over the course of 2018 and early 2019 I had read so many companies touting the types of applications which they felt could be addressed by quantum computers that I decided to compile them all in a single document. I have a short overall summary list as well as the specific application categories which specific companies listed as being appropriate for quantum computing. I posted my compilation in April 2019 and have been updating it as I stumble across new claims by companies: *What Applications Are Suitable for a Quantum Computer?*.
- In April 2019 I pondered the question of when quantum computing would see its first substantial real application, comparable to what happened for classical computing when the ENIAC computer was unveiled in 1946: *When Will Quantum Computing Have Its ENIAC Moment?*.
- In May 2019 I pondered the question of when quantum computing would finally be ready for more average, mere-mortal (non-elite) application developers, comparable to the introduction of the FORTRAN high-level programming language for classical computers in 1957 (eleven years after the ENIAC moment): *When Will Quantum Computing Have Its FORTRAN Moment?*.
- Reacting to a lot of the hype, I posted an informal paper in June 2019 to address the definitions of *quantum advantage* and *quantum supremacy*: *What Is Quantum Advantage and What Is Quantum Supremacy?*. This was three months before Google announced that they had achieved quantum supremacy, but Google had already announced in 2017 and 2018 that they were on track to achieve quantum supremacy fairly soon.
- Overwhelmed by all of the hype over quantum computing, I figured I’d try my hand at a parody of the hype — in June 2020 I posted *Fake Predictions for Quantum Computing*. It’s along the lines of the infamous predictions for classical computing which turned out to be false — like nobody ever wanting a [quantum] computer in their home.
- As of June 2019, I concluded that quantum computing was stuck in the realm of the *lunatic fringe* — usable only by the most-skilled elite. For more, see *When Will Quantum Computing Be Ready to Move Beyond the Lunatic Fringe?*.
- In the summer of 2019 I spent literally several months reading all of the Nobel physics prize lectures which related in any way to quantum mechanics. That effort gave me a lot more intuitive feel for quantum mechanics than I got from the MIT undergraduate courses. Even if I still couldn’t be a practicing quantum scientist, at least I could have a vague but deeper sense of the major pieces of the puzzle.
- Reading all of those Nobel physics lectures highlighted that I actually do have a passion for physics at the subatomic level. It always intrigued me in a distant sort of way, but now I am attracted to it in a deep way, and not just because I need it to understand quantum computing.
- During 2019 I gradually came to realize the value of quantum simulators while real hardware is not yet available or is too limited or noisy. Even when hardware is available, simulators make life easier.
- I found a few online interactive simulators for quantum computing. These were interesting to some extent, and I learned a little, but overall they didn’t help me too much.
- Google announced that it had achieved *quantum supremacy* in the fall of 2019, including the fact that they had a 53-qubit quantum computer operational in their lab. Actually, they didn’t officially announce it until October, but an advance copy of the paper was leaked in September. This was all quite exciting to watch in real time. As noted earlier, Google had telegraphed their intentions in 2017 and 2018, well in advance of the actual event, so this wasn’t an out-of-the-blue surprise, but a pleasant surprise that it had actually finally happened.
- Shortly *before* the Google leak (two days *before*), IBM announced that they had a 53-qubit machine in their lab as well. Interesting coincidence.
- Shortly *before* Google’s official announcement (two days *before*), IBM denounced Google’s quantum supremacy effort in a preprint paper, presumably based on the leaked Google paper.
- I took some time reading Google’s paper on quantum supremacy in October and November and after some thought wrote up my own impressions of their effort in late November 2019: *What Should We Make of Google’s Claim of Quantum Supremacy?* With that, I could put this whole episode behind me and once again focus on *real* applications of quantum computing. Note that I had written about quantum advantage and quantum supremacy back in June 2019: *What Is Quantum Advantage and What Is Quantum Supremacy?*
- By October 2019 I had realized that my to-do list of topics to read, research, and write on was getting out of hand and scattered all over my notes, so I consolidated the topics in a list and posted it: *Future Topics for My Writing on Quantum Computing*. Even now, in 2020, I have a lot more topics in newer notes that I need to add to that list.
- In October 2019 I decided to read up on *quantum computational chemistry* since that promised to be one of the main areas in which quantum computers could deliver significant value. I read two long papers: *Quantum computational chemistry* by McArdle, et al, and *Quantum Chemistry in the Age of Quantum Computing* by Cao, et al. I’m certainly not an expert now, but at least I have a feel for the issues.
- One of the benefits of focusing on *quantum computational chemistry* was to raise my understanding of *variational methods* and *hybrid quantum/classical algorithms* in general. I had encountered these concepts previously, but not comprehended them as deeply. I’ve read a number of papers related to these concepts since.
- And with *hybrid quantum/classical algorithms* we have *ansatze*, *state preparation and measurement* (SPAM), *classical postprocessing*, *classical optimization*, and *iteration*, as well as *shot count* or *circuit repetitions*. The quantum vocabulary *zoo* is rather overwhelming, to say the least.
- One of the requirements for deeply understanding quantum computing is to become intimately familiar with *unitary matrices* (and matrix math, of course). A lot of the basics are reasonably simple, but developing a deeper intuition, especially for complex exponentials, takes time. I’m not sure when I finally reached that stage, but it wasn’t in the early stages. But by the fall of 2019 unitary matrices were much more second nature for me than in the summer or fall of 2018. A two-qubit unitary matrix applied to a two-qubit column vector was no longer an inscrutable puzzle. Sure, for a math guy it would have happened much more quickly, but I’m not a hard-core math guy.
- In November 2019 I was thinking about what factors would have to come together to achieve what I called *quantum algorithmic breakout* — the right and critical-mass combination of hardware and algorithm advances that exploit that hardware to achieve real-world, practical, production-quality, production-capacity applications so that their development becomes commonplace, even easy, as opposed to the heroic and severely limited efforts which are common today. I posted my thoughts as *What Is Quantum Algorithmic Breakout and When Will It Be Achieved?*
- I thought it would be helpful to propose an equivalent to Moore’s law for qubit growth of quantum computers. I posted my proposal in November 2019: *Proposed Moore’s Law for Quantum Computing*. My proposed rule: “*Qubit count of general-purpose quantum computers will double roughly every one to two years, or roughly every 18 months on average.*” Oops, we’re already falling behind — when I wrote this paper, I expected that the 128-qubit machine from Rigetti was coming soon.
- Microsoft announced *Azure Quantum*, their cloud quantum computing service, in November 2019. This was significant generally, but not to me personally — I’m much more interested in the technical capabilities of the actual quantum computers than the logistics of how they are accessed. Microsoft is merely making it easier to manage access to the quantum computers of other companies, not introducing their own proprietary quantum computers. Maybe this announcement legitimized the sector, or maybe it merely exacerbated the hype; it’s hard to say for sure.
- In December 2019 Amazon announced their *Braket* quantum computing cloud service, their response to Microsoft’s Azure Quantum. My personal response was about the same as with Azure Quantum — I was more interested in the technical capabilities of the actual quantum computers than the logistics or economics of accessing them.
- As Christmas and the end of 2019 were approaching I decided to write up a list of advances and breakthroughs I wanted to see in the coming year: *My Quantum Computing Wish List for Christmas 2019 and New Year 2020*
- People were chattering as if a quantum computer could compute anything, but I realized that as currently envisioned there were many tasks which weren’t a good fit for quantum computers. I posted my thoughts in December 2019: *What Can’t a Quantum Computer Compute?*
- By the end of December 2019 I still hadn’t fully mastered the nuances of Shor’s algorithm, but since the algorithm is of continued interest, I posted an updated version of my long list of references for Shor’s algorithm from September 2018 for anyone wishing to dig deeply into it, including variations on the original algorithm and attempted implementations: *References for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*. As far as I can tell nobody has implemented Shor’s algorithm on a real quantum computer to factor more than 15 (3 x 5) and 21 (3 x 7). I found one simulated implementation that worked for larger two-digit numbers, but that’s about it. I had hoped to see an implementation for a 32 to 40-qubit simulation, but I’ve seen none so far. I think it’s worth noting that the various implementation efforts diverge significantly from the original paper. To this day, I don’t think there is a single clean reference implementation which is 100% according to Shor’s original paper. And this is for an algorithm which is still widely considered as *proof* of what a quantum computer can do.
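The unitary-matrix intuition mentioned in one of the bullets above, a two-qubit unitary applied to a two-qubit (four-element) column vector, is really just matrix-times-vector arithmetic. Here is a minimal NumPy sketch of my own (not taken from any of the papers cited here), building the standard Bell state with a Hadamard followed by a CNOT:

```python
import numpy as np

# Hadamard on qubit 0, identity on qubit 1: the 4x4 matrix H (x) I
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
H_on_q0 = np.kron(H, I)

# CNOT: a two-qubit unitary as a plain 4x4 matrix (control = qubit 0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00> = (1, 0, 0, 0), then apply H (x) I, then CNOT,
# as ordinary matrix-times-vector products
state = np.array([1, 0, 0, 0], dtype=complex)
state = CNOT @ (H_on_q0 @ state)

# Result: the Bell state (|00> + |11>) / sqrt(2)
print(state.round(3))  # amplitudes ~0.707 on |00> and |11>, 0 elsewhere

# Sanity check: a unitary U satisfies U-dagger times U = identity
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))
```

Once the matrices are this concrete, the inscrutability fades quickly: every gate is just a small unitary matrix, and a circuit is a chain of such multiplications.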

**2020**

- I had heard a little about *quantum communication* over the preceding two years, including efforts by the Chinese, but hadn’t dug into it deeply, so in the late fall of 2019 and early 2020 I started reading up more on *quantum information science*, which is the larger umbrella covering both quantum computing and quantum communication, as well as quantum networking and quantum sensing. I organized a summary of the field and posted it in January 2020: *What Is Quantum Information Science?*
- Part of the impetus for my interest in the larger area of quantum information science was that the U.S. government was ramping up investment in research in the area, so the field was getting more press, including more hype.
- *Exponential speedup* is the whole point of quantum computing, but people chatter about it as if it were automatic for all quantum algorithms, which isn’t true. In fact, it takes a lot of careful analysis and attention to detail to achieve exponential advantage, and many algorithms won’t be as successful as their designers might have hoped. Even the much-vaunted *Grover search algorithm* achieves only a *quadratic speedup*. The simple truth is that each algorithm will have its own performance characteristics, and the designers or implementers will need to carefully document those performance characteristics, the algorithm’s *actual quantum advantage*. I discussed this topic at length in *What Is the Quantum Advantage of Your Quantum Algorithm?*, posted in February 2020.
- Also in February 2020 I posted *What Is Algorithmic Complexity (or Computational Complexity) and Big-O Notation?* You’re asking for trouble if you don’t carefully characterize the *algorithmic complexity* of your algorithm — how its performance will trend as you increase the size of the input.
- My ears perked up when I read Honeywell’s announcement in March 2020 that their quantum computer would become available within a few months, targeting the June timeframe. It was a trapped-ion machine, as was IonQ’s machine, which I viewed as a superior hardware technology for qubits. Unfortunately, they provided virtually no technical details, not even the number of qubits — just the quantum volume, 64, but that’s more of a marketing concept, not telling developers anything they really need to know.
- In the spring of 2020 I finally wrote up a summary of *quantum effects*: *What Are Quantum Effects and How Do They Enable Quantum Information Science?* I had never seen all of them put together in a coherent manner in one place. If I had seen them described in an integrated manner earlier in my efforts to learn about quantum, I’m sure the pace of my uptake of quantum mechanics and quantum computing would have been much more rapid.
- In June 2020 I finally decided to clearly and publicly state my interests in quantum computing: *My Interests in Quantum Computing: Its Capabilities, Limitations, and Issues*
- Honeywell came through in late June 2020, announcing that their quantum computer was actually available (to select customers). But as with their March announcement, they provided very little technical information other than the quantum volume — 64, which has essentially no technical value. I was happy that a new trapped-ion quantum computer was available, but disappointed that I could find out so little about it.
- One technical issue which had bugged me for quite some time was the need to run a quantum circuit some significant number of times to get a statistical average result. A simple concept, but the devil is in the details, and most published algorithms mention it only briefly in passing, if at all. So I dug into it some more and finally posted my thoughts in July 2020: *Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer*. The big concern is twofold: 1) how to properly calculate the number of repetitions needed, and 2) whether the needed number of repetitions, especially for a NISQ device, might be so outrageous as to render a quantum solution unworkable or yield performance far worse than expected, and maybe not achieve a true *quantum advantage* over a sophisticated classical solution at all.
- After pondering whether quantum computing was still stuck at the stage of being a *mere laboratory curiosity*, in July 2020 I wrote some background material on the more generic concept of *laboratory curiosities*: *What Makes a Technology a Mere Laboratory Curiosity?* My thesis is that a technology is still a mere laboratory curiosity if it is not yet ready for prime time — for production-scale real-world applications. A new technology needs to offer clear, substantial, and compelling benefits of some sort over existing technology, whether they be new functions and features, performance, less-demanding resource requirements, or economic or operational benefits.
- With that foundation in place, I evaluated the status of quantum computing relative to being a *mere laboratory curiosity* — concluding that it was one, and posting my analysis in August 2020: *When Will Quantum Computing Advance Beyond Mere Laboratory Curiosity?*
- To be continued. I’m currently working on two more of my informal papers, with others at the idea stage.
- And I’m spending a fair amount of time each day keeping up with status posts on LinkedIn, glancing at fresh new academic papers, announcements, and news items.
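On the shot-count concern in the circuit-repetitions bullet above: a rough, textbook way to size the problem is the normal approximation to the binomial distribution, treating each measurement of a given basis state as a Bernoulli trial. To be clear, this is standard statistics, not a formula from my paper, and real NISQ noise only makes the counts worse:

```python
import math

def shots_needed(p, epsilon, z=1.96):
    """Approximate circuit repetitions needed to estimate a probability p
    to within +/- epsilon at ~95% confidence (z = 1.96), using the normal
    approximation to the binomial: n ~= z^2 * p * (1 - p) / epsilon^2."""
    return math.ceil(z * z * p * (1 - p) / (epsilon * epsilon))

# Pinning down a 50/50 outcome to within 1% takes roughly 10,000 shots...
print(shots_needed(0.5, 0.01))   # 9604
# ...and tightening to 0.1% costs 100x more: shot counts grow as 1/epsilon^2
print(shots_needed(0.5, 0.001))  # 960400
```

That quadratic blowup in repetitions is exactly why a nominal quantum speedup can evaporate once the required shot count is factored into the total running time.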

**2021**

TBD…

# My watershed moment — November 2017

Just to reemphasize the key milestone that is buried in the timeline of my quantum journey, November 2017 was *my watershed moment*, the point where the light bulb went on and I knew that I needed to get deeper into quantum computing — that it would be a *now* issue for me rather than a *someday* issue.

In November 2017 IBM announced both 20-qubit and 50-qubit processors. That was still not enough to grab my full attention, but 50 qubits was at least tantalizing. In fact, that was the moment when I decided to try to focus *some* of my attention on quantum computing and try to figure out what it was really all about. **This was my watershed moment, so to speak.**

# Reflections from my journey

Now that I’ve recounted my specific activities during my quantum journey, what exactly did I learn, in a general sense?

# No formal education in quantum computing or quantum mechanics

It’s hard for me to say for sure whether this matters or not, but I had no formal education in quantum computing or quantum mechanics before I began my journey into the quantum world in earnest a few years ago. I did have a technical, STEM background, specifically in computer science, and some math and physics in college, but nothing specific to quantum computing or quantum mechanics.

I do believe that anyone with a general technical background such as mine can do what I have done, but it’s not possible for me to say for sure which aspects of my background really made the difference, or which parts someone else could do without before diving into quantum computing.

# Thinking of getting a college degree focused on quantum?

I’ll refrain from giving any specific career or education advice to others, with one exception — unless you are one of those rare elites (who don’t need my advice anyway, by definition!), don’t focus your undergraduate college education or even a master’s degree 100% on quantum computing at this stage of the sector. Make sure to at least have a minor, if not your major, in some aspect of technology where great non-quantum jobs are plentiful today, in the here and now, since much of the promise of quantum computing is still *years down the road*.

You may or may not be able to get a quantum-specific job today, right out of school, and even if you do, the number of places you can go if you decide to leave that job is relatively limited.

Yes, demand for quantum technical people is very strong right now, but for a relatively limited number of slots, each with relatively specific technical requirements — not every quantum technical role is created equal.

Besides, many if not most aspects of the technology will probably be radically revised four or five years from now. Be prepared for non-stop *continuing education*. Invest the bulk of your education in timeless concepts which will still have great value even as technology evolves.

Look (and talk to people in the sector) before you leap.

# Is quantum computing real?

This is the biggest and hardest question of them all. There are several distinct perspectives:

- Do real quantum computers exist? Yes, albeit with very limited capabilities.
- Is quantum computing sufficient today for production-scale application deployment? No. Not even close.
- Is quantum computing on track to become real in some number of years? Maybe.
- Can I guarantee that quantum computing is real enough to bet on its arrival? No. Unfortunately I am unable to offer such a guarantee. I think it is likely to be real, but I still have too many questions to be sure.

The good news is that I cannot claim that quantum computing is *not* real.

# Is practical quantum computing imminent?

Even if quantum computing isn’t sufficient today for production-scale application deployment, is it imminent? Again, several perspectives or time frames:

- Within the next year or two? No.
- Within two to five years? Maybe.
- Within five to ten years? We certainly hope so. But still no certainty.
- Within ten to fifteen years? Feels a little more certain.
- Within fifteen to twenty or even twenty five years? If not then, when?

The real bottom line is that despite my journey, I remain unable to say that quantum computing is real enough to bet on a timeframe for its arrival.

# Is the hype warranted?

In a word, **no**, the hype is not warranted, but… it’s complicated. To be sure, there is a lot about quantum computing which is real, but so much of the hype seems disconnected from reality.

Maybe in two years a fraction of the hype will be warranted, but probably not the majority of the hype.

Maybe in five years a significant chunk of the hype will be warranted, but probably still not the majority.

So when will the majority of hype be warranted? That’s the crux of the problem — I can’t say when with any confidence.

I’d be a lot happier if certain elements of the hype were dropped:

- No more references to “now.”
- No more references to the present tense.
- No more references to “imminent.”
- No more references to “soon.”
- No more references to the near future.

That might eliminate the vast majority of the hype about quantum computing.

# Everything feels… premature

Unfortunately, after all of my quantum journey, it really does seem that just about everything surrounding quantum computing feels… premature, that everyone is jumping the gun, and that we need to let the fruit ripen on the (basic research) tree before harvesting it for commercial value and consumption.

# D-Wave is somewhat inscrutable

I haven’t met anybody yet who really knows what the quantum computer from D-Wave Systems is really all about — or who can explain it to the rest of us. I do know that it has qubits — a lot of them (512, 1024, or 2048, with 5,000 coming), that it has a fixed algorithm (quantum annealing), and that it doesn’t have the gate-style operations which all other quantum computers use.

To me it seems to be more of an *analog computer*, or at least more closely comparable to an analog computer than either a classical digital computer or other gate-oriented quantum computers.

You can’t readily move algorithms between D-Wave and the other quantum computers. If you want to run on D-Wave you have to use the algorithm that is built into the machine. It’s a very flexible algorithm which is parameterized, but you can’t change the algorithm itself.

To be clear, the D-Wave quantum computer *is real*, but it’s not comparable to any other quantum computer.

Some real-world problems involving optimization are a natural fit for D-Wave, but many or most problems are not.

Although D-Wave does have a lot of qubits compared to other quantum computers, it is still very limited relative to many real-world problems which people would like to address.

People are doing a lot of experimentation with D-Wave, but it’s not clear to me where any of that is headed. Again, it is indeed a real machine, but it still seems to be a *mere laboratory curiosity* as far as I can tell.

# Not really ready for Quantum Ready

I’m not sure exactly when I first heard the term *Quantum Ready*, but I do recall that it was associated with IBM, such as this IBM blog post in December 2017 — **Getting the World Quantum Ready**. I recall feeling that it seemed a bit odd since even in late 2017 quantum computing certainly felt a long way from being even remotely *ready* for development and deployment of production-scale real-world applications. Even then, I felt that the technology would have to evolve dramatically to be ready for people to be trained to use it properly and effectively — it’s rather difficult to train for a moving objective.

The technology isn’t ready today in 2020, isn’t close to being ready, and it isn’t even time for software developers to think about getting ready to use a future version of the technology — a version which doesn’t exist yet and isn’t likely to be ready in the next couple of years.

There’s only one thing to *get ready* for, and that’s to get ready *to wait*. And it’s likely to be a long wait.

It’s one thing for a large technology organization to have a few elite members of their technical staff who keep an eye on advanced technologies such as quantum computing, but it’s another thing entirely to ramp up whole teams and large numbers of staff for a moving-target technology which won’t be ready for large-scale application development and deployment for five to seven years, at best.

I get that vendors want prospective customers to *get ready soon* for technologies which will become ready in the coming years, but it seems almost absurd in the case of quantum computing.

# Ambiguity of quantum ready

To be clear, there are two very distinct senses of *ready* for quantum computing:

- Prospective users and developers have been trained on concepts and experimented on a small scale, and now they are just waiting for real hardware that has enough capabilities — qubits, coherence, gate reliability, connectivity.
- Real hardware is finally ready to be used by developers and users, with enough capabilities — qubits, coherence, gate reliability, connectivity.

IBM’s marketing term *Quantum Ready* is referring to only the former. The hardware is *not* yet ready for production-scale applications which deliver substantial real-world value.

But the risk is that many executives and unsophisticated managers and analysts may be hearing all the hype and misguidedly believing that Quantum Ready is referring to the latter — that the hardware is ready when it isn’t and won’t be for who knows how many years.

Marketing… enough said.

# Quantum volume has no real technical value

In March 2017 IBM introduced a metric for measuring the net power of a quantum computer, called quantum volume (technical details), which combines number of qubits, coherence, gate errors, and connectivity into a single numeric metric. Sounds like a great idea, but… technically it is completely useless since there is no way to derive any useful metric from that single number which can be used by an algorithm designer or an application developer.

In November 2018 IBM posted the technical paper which detailed how quantum volume was calculated — actually measured by running randomly-generated quantum circuits:

The concept is indeed very technical at that level, but still doesn’t have any technical value for algorithm designers or application developers.

In March 2019 IBM announced that it had a new quantum computer which had *double* the *quantum volume* of its previous quantum computers (16 vs. 8). That sounded great, but what did that mean? It turns out both machines had the same number of qubits (20), so it meant that the individual qubits were more reliable — but you couldn’t tell either fact from the raw quantum volume numbers.

In January 2020 IBM announced doubling of quantum volume again, from 16 to 32. The qubit count was 28 vs. 20. So, again, the quantum volume provided no specific technical information about the machine.

In June 2020, Honeywell announced the availability of their quantum computer with a quantum volume of 64. Now what did that mean? It turns out that the machine had far fewer qubits (6 qubits) than the current IBM machines which have a quantum volume of 32 (28 qubits). In short, Honeywell’s qubit and gate reliability were much better, but the numbers alone don’t tell you that.

In July 2020, IBM announced six new quantum computers, all with quantum volume of 32, to supplement the one machine they had at 32 in January. Some of the new machines had 27 qubits, some had 5 qubits, and one had 20 qubits. All with the same quantum volume of 32. Kind of confusing, if you ask me.

In August 2020, IBM announced that it had “upgraded” one of those 27-qubit machines to achieve a quantum volume of 64 — double its previous quantum volume of 32. IBM also posted a technical paper which described how they accomplished this achievement. It turns out that only 6 qubits — out of 27 — were used for the measurement, matching the achievement of the 6-qubit Honeywell machine. So much for trying to compare machines based on quantum volume!

You can compare two or more machines based on their respective quantum volumes, but that won’t tell you anything about how many qubits each machine has or even which machine has more qubits or which machine’s qubits are more reliable. Ditto for coherence time, gate errors, and connectivity. There is no particular utility here that I can fathom.

To my mind, the only value of quantum volume is marketing — a vendor can say that they have achieved a specified quantum volume, but that won’t tell the customer, user, or developer any of the key technical metrics that algorithm design and application development depend upon.
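For the record, by IBM’s published definition, quantum volume is 2^n for the largest n-qubit, depth-n random circuit a machine can execute successfully, so log2(QV) gives an “effective” square-circuit size. A quick sketch of my own, using the machine figures from the announcements above, shows why the single number reveals so little:

```python
import math

# (machine, announced quantum volume, physical qubit count),
# figures taken from the announcements discussed above
machines = [
    ("IBM, January 2020",         32, 28),
    ("IBM, July 2020 (smallest)", 32, 5),
    ("Honeywell, June 2020",      64, 6),
    ("IBM, August 2020",          64, 27),
]

# log2(QV) = largest n for which an n-qubit, depth-n random circuit passes
effective = {name: int(math.log2(qv)) for name, qv, _ in machines}

for name, qv, qubits in machines:
    n = effective[name]
    print(f"{name}: QV {qv} -> {n}x{n} effective circuit, "
          f"{qubits} physical qubits")
```

Two machines with QV 64 can have 6 and 27 physical qubits respectively, and QV 32 machines span 5 to 28 qubits, so the number by itself tells an algorithm designer essentially nothing about qubit count, coherence, or connectivity.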

# Quantum ready and quantum volume are basically marketing scams

Maybe the concepts of *Quantum Ready* and *quantum volume* were well-intentioned, but they aren’t in any way providing useful technical information to technical staff. In essence, both concepts are mere *marketing scams*, providing a distraction and capturing attention, but not adding any technical value or technical benefit.

They’re just part of the background noise of the hype surrounding quantum computers.

There are two aspects of marketing scams:

- Their intent.
- Their actual impact.

I won’t attempt to judge the true intent of any marketing scam, but I can judge the effects on real people and real organizations.

Attention and focus are two of the most valuable intellectual and mental resources that any organization possesses. Diverting attention and focus to advanced technologies prematurely — what I have referred to as *laboratory curiosities* — can have the negative effect of missing other opportunities where those resources could have been applied for greater and more immediate benefit.

# Quantum Summer of Love — but might a Quantum Winter be coming?

Quantum computing is still in its *Summer of Love* — it’s difficult to say how long it will continue, and whether a *quantum winter* may be coming.

This “summer” could last for another two or more years. Or six months. Or three years. Or maybe it lasts a full five years before a hard-freeze sets in for a long winter.

And maybe we see an occasional breakthrough now and then — they’re so unpredictable. Sometimes they can lead to a full thaw and a renewed summer, but a lot of the time they just cause temporary thaws followed by a re-freeze. Ultimately we need a chain of breakthroughs to prevent a descent into a quantum winter.

Quantum computing already had a quantum winter of sorts, from 1995 to 2016, as people waited desperately and (im)patiently for a real quantum computer to become available — which finally happened with a 5-qubit machine from IBM. That really was a major achievement, but it was like reaching the base camp for Mount Everest, and people are feverishly waiting to get to the top of the mountain.

# All it takes is a single massive breakthrough or two or three to break out and we’re off to the races

Breakthroughs are very unpredictable, both in their timing and their net impact. All it takes is a single massive breakthrough or two or three to break out and we’re off to the races. But you can’t predict or count on them. The important thing is to quickly jump on breakthroughs as they occur. And to be ready to jump at a moment’s notice.

Breakthroughs can be triggered in two ways:

- Fundamental discoveries in underlying research. Keep increasing basic research funding until you see results.
- Developments driven by user needs. Keep pushing the technology as hard as you can, trying to achieve application results. That can help guide research.

Generally, the most dramatic breakthroughs come when these two factors come together — and when they come together with a reasonable frequency.

# But… my journey is far from complete

I still have so many nooks and crannies of quantum computing to dig into.

And there is so much research in the pipeline, like fresh five-year projects which are just getting started.

And who knows how many research projects to follow those projects.

And who knows when dramatic breakthroughs may occur which accelerate the pace of progress greatly.

# Ask me again in two years

There’s nothing magical or significant about two years, but progress is occurring at a reasonably rapid pace. It just feels that in two years we should have a lot better visibility on both hardware and algorithm feasibility.

# Much more basic research is needed

If there’s one thing that I am convinced of from my quantum journey, it’s that much more basic research is needed. Across the board. No exceptions.

It’s not just a matter of engineering and product development. Scientists are needed. More work is needed in science labs.

It’s not just a matter of venture capital funding. Research grants are needed.

# General reflections

These are items which certainly came out of my quantum journey, but are not really associated with any particular milestone.

- It became clear to me during the spring and summer of 2018 that quantum computers, as then envisioned, were intended more as *coprocessors* to perform a very limited amount of the overall computation of an application, with the bulk of the application running as classical software.
- At some point I began to realize that I didn’t have any great confidence that a lot of the published quantum algorithms would necessarily scale well for significantly larger input data. This is discussed more in a subsequent section: *Lack of confidence that many current quantum algorithms will scale*.
- It didn’t take me long to realize what a low-level programming model was being used for quantum computing. It looked and felt comparable to *assembly language* programming on a classical computer.
- It’s super-clear to me that there is a crying need for a high-level programming model for quantum computing.
- Over time I came to the realization and a new appreciation of how intellectually powerful classical computing really is. There are so many things that classical computers do very well that even ideal quantum computers can’t do at all, particularly with complex logic and large volumes of data. See the section *The awesome intellectual power of classical computers* for more details.
- If you have a significant problem you need to solve today or this year or even next year, classical computing is the way to go. Even if you thought you needed a quantum computer because of its exponential advantage, you can probably make do with a sampling technique, such as Monte Carlo simulation, or a large distributed cluster, or by reducing the problem complexity using some clever techniques.
- Another thing that I realized about classical computing as I got deeper into quantum computing was how little I understood about the physics underlying classical digital logic gates and transistors. I gained a significant appreciation for this fact as I was reading the Nobel physics prize lectures related to quantum mechanics, particularly those related to the invention of the transistor (surface effects). I suddenly realized that the physics of classical and quantum computing is not so separate and far apart — the main difference being that classical computing depends on quantum effects but uses statistical aggregation to hide the probabilistic effects and achieve a fairly decent approximation of determinism, while quantum computing exploits the raw probabilistic effects.
- I’ve gradually built up a fairly significant network of LinkedIn contacts for quantum computing, quantum mechanics, and physics over 2019 and 2020. That provides me with a decent news flow and links to interesting new academic papers. And a fair amount of interesting conversations on quantum computing as well.
- I always keep two Google News windows open, one for “quantum computer” and one for “quantum computing”. They provide me with a significant news flow as well, although the LinkedIn status flow is usually enough for my needs and interests.
- Phys.org is a good source of news on quantum computing, at least in terms of fresh academic papers. They monitor new additions to arXiv and post readable plain-language descriptions, which frequently show up in Google News for quantum computing.
- I’m very anxious to see some significant advances and breakthroughs in the near future, but the past year hasn’t been as eventful as I had hoped.
- I do actually find myself growing somewhat disillusioned and disenchanted with quantum computing, at least in its current state. Too many of the promises remain unfulfilled, and progress is slower than I expected two years ago.
- Maybe it might be better for me to take a two-year or even five-year Rip van Winkle-like slumber to skip over all of the waiting, anxiety, and disappointment and cut directly to quantum advantage, or at least a much more impressive stage on the way to quantum advantage.
- Alternatively, it may be time for me to draw a mark and say that everything I’ve done to date is a Phase 1 (or even Phase 0), and now consider that I need to *level up* to whatever level needs to come next, a Phase 2 (or a real Phase 1).
- But by default I’ll just keep on keeping on as I have for the past three years, varying from day to day, week to week, and month to month, but surfing the waves of quantum computing as they come at me. Including monitoring and participating in the flow of information on LinkedIn.
- I may start to dig more into the basic research since that is what is going to drive a lot of what happens over the next five years. And maybe even more of the underlying physics. I may increase my pace of reading academic research papers on arXiv. Currently I’m limited to the papers I hear about from media reports on Google News and my LinkedIn contacts. I have the feeling that basic, fundamental research is where a lot of the action will be, since I don’t feel that our current qubit technologies are going to be sufficient to achieve *quantum advantage*.
- There are still plenty of nooks and crannies of quantum computing which I don’t fully fathom, and even some fundamentals about which I have lingering questions (see my list from March 2019) that are not easily answered, maybe simply because I’m diving too deep.
- As for my list of questions from 2018, I haven’t bothered to answer any of them (in writing) or begin an FAQ. I probably could answer a fair number of them with some degree of confidence, but there are plenty that I can’t answer with 100% confidence, even though I may have at least partial answers. I still feel that I have a lot to learn before I can answer most of them with 100% confidence.
- With all of my writing, I am still not ready to write a decent *What Is Quantum Computing?* paper. I have plenty of the puzzle pieces, but there are still too many nooks and crannies that feel a little bit too mysterious to me. Maybe I’m being a little too ambitious — sure, I could write yet another *puff piece*, as many others have, but another puff piece is not needed. What I really want to write about is *how* a software developer can transform a problem into a solution which really does exploit the exponential power of quantum computing, but I don’t think even the best algorithm designers know that yet — I think we need a much higher-level programming model first, which is not available right now, nor even on the near horizon.
- In hindsight, I wish I could have started out in 2016 by reading what I wrote in my own paper, *What Is Quantum Information Science?*, which I posted in January 2020. That would have given me a broader perspective and gotten me started faster.
- And I wish IBM had had its current *Qiskit Textbook* available back in 2016. Although I have to hedge there, because as good as the textbook is with current technology, we really need a quantum leapfrog beyond where we are today — my preference would be to start in 2016 with what we will likely have three to five years from today. Of course we can’t do that, but it really is what’s needed.
- I’m at the stage where I don’t expect that anybody will be able to adequately answer my questions to my own satisfaction — I’ll have to answer them myself by digging even deeper.
- Even now, the possibility of pursuing some sort of income-producing opportunity in the sector is *vaguely intriguing*, but nothing has stood out as appealing or appropriate for my interests, background, ability, and skills.
- Even now, I’m trying my best to refrain from doing anything *hands-on* with quantum computing. I’m staying focused on being a *technologist* and an *idea guy*, not a *developer*. I don’t want to waste any of my energy on mere implementation which can be much better spent on working with ideas.
- Quantum computing is still in its *Summer of Love* — it’s difficult to say whether a *quantum winter* may be coming, but this “summer” could last for another two or more years. Quantum computing already had a quantum winter of sorts, from 1995 to 2016, as people waited desperately and (im)patiently for a real quantum computer to become available — a 5-qubit machine from IBM.
- What does the future hold? Anything goes, literally. Stay tuned. My most recent relevant writing: *When Will Quantum Computing Advance Beyond Mere Laboratory Curiosity?* — short answer, not soon. Yes, I personally consider quantum computing as still being a *mere laboratory curiosity*, not ready for production-scale practical applications which can deliver substantial real-world value.
- One of the lessons I’ve learned is to never underestimate the boundless cleverness of algorithm designers. Even given very severe limits, algorithm and software designers always seem to be able to come up with very interesting, even exotic, techniques to bypass or work around all manner of limits and obstacles. Even as quantum algorithm designers come up with new and more clever approaches, designers of classical algorithms and applications are also constantly rising to the challenge and coming up with novel techniques to deal with the limits of classical computers. The net result is that it is far too soon to judge that classical computing has reached its limits.
- I’m patiently waiting for the development of a much richer collection of algorithmic building blocks, design patterns, application frameworks, and credible, real-world example algorithms and applications. What we have right now just doesn’t impress me that much anymore.
- I’m rather disappointed that some of the early algorithmic efforts turn out to be not very practical, such as quantum phase estimation, quantum Fourier transform, and order finding.
- Yes, I do still believe that quantum computing is stuck in the realm of the *lunatic fringe* — usable only by the most-skilled elite. Sure, plenty of people can run fairly trivial examples using current cloud-based quantum computing services, but I’m talking about the need for production-scale applications which deliver substantial real-world value. For more, see *When Will Quantum Computing Be Ready to Move Beyond the Lunatic Fringe?*, posted in June 2019.
- My bottom line is that I still don’t have convincing evidence that quantum computing is indeed a *real thing*. At least a few of the great promises will have to be fulfilled. I remain hopeful, but pragmatic.
- I’m not persuaded that we have the best qubit technology to achieve production-scale applications which deliver substantial real-world value. It feels as if better technologies will be coming over the next few years.
- It gradually became clear to me that we are faced with twin qubit challenges: isolation of the quantum state of individual qubits and maintaining entanglement of entangled qubits. See more in the section entitled *Twin qubit challenges: isolation and maintaining entanglement*.
- My biggest technical concern at the moment I write this is not how many qubits we have or coherence, but the fact that so many algorithms depend on the granularity of phase and probability amplitude when we have no visibility or transparency into what the theoretical or practical limitations of phase granularity will be in future quantum computers. See more in the section entitled *We know too little about the granularity of phase*.
- An issue that popped up at times but which I haven’t dug into deeply is what I call the RISC vs. CISC issue from classical computing (Reduced Instruction Set Computing vs. Complex Instruction Set Computing): should the firmware of a quantum computer support only the simplest, most basic operations, or should the firmware support higher-level, complex operations, such as quantum Fourier transform, phase estimation, swap networks, and order finding, which the firmware itself then decomposes into the lower-level raw operations supported by the hardware? See more in the section entitled *RISC vs. CISC issues for firmware*.
- What’s my endgame? Where do I think I’m going with all of this? It’s complicated. I don’t have any good answers. As long as a lot of interesting things are happening, I’m in. If it ever gets boring, I’m out. See more in a subsequent section, *What’s my endgame?*.

# Many exciting advances have been going on in classical computing

Despite the great promise but merely incremental advances of quantum computing, classical computing has meanwhile advanced by leaps and bounds over the decades since 1980, right up to today:

- Minicomputers were getting more powerful.
- A new and much more powerful supercomputer every time you blinked.
- Semiconductor memory. Capacity growing, price declining, reliability increasing.
- Small disk drives. Capacity growing, price declining, physical size declining.
- Personal computers. Every year faster and cheaper.
- Networking.
- ARPANET.
- Powerful workstations.
- Servers.
- High-end servers.
- Even larger supercomputers built using large numbers of commodity microprocessors.
- The Internet.
- The Web.
- Distributed computing.
- GPUs and FPGAs to accelerate classical processing.
- Neuromorphic and tensor processing unit (TPU) chips to accelerate AI.
- And Moore’s Law continued to crank out major advances on a regular basis.
- All very, very, *extremely intoxicating*. A never-ending rocket to the moon and the stars.
- And quantum computing had… *nothing*, nothing but theory and a bunch of lab experiments.

Granted, quantum will eventually zoom past classical computing with its *exponential advantage* (at least for a subset of applications), but that day is still years in the future. People with real problems to solve over the next two to five (or even ten) years must and can continue to ride the rising tide of classical computing.

# The awesome and unparalleled intellectual power of classical computers

Independent of raw performance, classical computers, based on the flexibility of the *Turing machine* coupled with a lot of clever but simple hardware, provide a myriad of extremely powerful features which leverage human intellect to a phenomenal level that even the best of today’s quantum computers cannot match. Features include:

- Algebraic expressions and variables. Directly parallel the language of mathematics.
- Sophisticated control structures. Conditionals, loops, functions and function calls, case statements.
- Multiple parallel processes and threads.
- Rich data types. Far beyond simple bits — integers, real numbers, text, images, audio, video, hypertext.
- Rich data structures. Arrays, tables, lists, trees, object-oriented programming.
- I/O.
- File storage.
- Database access.
- Network access.
- Web services.
- High-level programming languages.
- Database query languages.
- Interactive access.
- Artificial intelligence interfaces.

And the list just keeps growing.

How can quantum computing compete, other than raw performance for niche problems?

# Seminal role of Feynman

There’s no question about Caltech Prof. Richard Feynman’s pioneering role in getting quantum computing started. That said, he failed to get me started, at least back in the 1980’s.

I tried to read a magazine article by Feynman on quantum computing (early or mid 1980’s), but the physics was too opaque for me to make any sense of it at the time, and it gave me no sense of where exactly the power of quantum computing came from or how it worked. Rather than inspiring me, this article turned me off to quantum computing. Besides, there were no real quantum computers anyway.

After my failed attempt at Feynman, I pretty much pushed quantum computing aside as being a vague future, unworthy of my immediate attention.

Somewhere in the late 1990’s to early 2000’s I read something attributed to Feynman — that it is very hard for us to simulate *n-body problems* with traditional computers, even 3-body problems, but that nature is able to do so instantly, and this was his rationale for promoting quantum computing. That was finally some insight that stuck with me, from that moment on, through today. Unfortunately we can’t actually do that with a quantum computer — as they are envisioned today, but the sentiment was appealing. Unfortunately, I have been unable to find a citation for that reference to n-body problems from Feynman. I guess it predated news on the searchable Internet.

Sometime in the summer of 2018 I finally had the courage and background to actually read Feynman’s famous paper on quantum computing — **Simulating Physics with Computers** (sorry, but there is no reliable link to the full paper available since it’s published in a paywall-protected journal). It finally made a little sense. It gave me some insight into his intentions — he wasn’t simply intending to make a faster computer, but a computer focused on simulating physics, especially quantum mechanics.

As important as simulating physics is, it simply wasn’t on my radar back in the 1980’s — or at any time until I started digging deep into quantum computing and quantum mechanics in 2018.

# Quantum computer as a coprocessor

It became clear to me during the spring and summer of 2018 that quantum computers, as then envisioned, were intended more as *coprocessors* to perform a very limited amount of the overall computation of an application, with the bulk of the application running as classical software.

I believe that it is still possible to more tightly integrate quantum computation with classical computation, but there’s no real possibility that quantum computation as currently defined would wholesale subsume all of classical computation. Many applications require determinism and deterministic results, which is not the forte of quantum computation.

The open question is the extent to which quantum computation can be broadened so that a much larger fraction of the overall application will be ripe for exponential advantage.

Currently, even the more advanced algorithms, such as those using variational methods, have only a rather small fraction of the overall algorithm implemented as a *quantum circuit*.

As things stand now, it will be very difficult to achieve *quantum advantage* with *exponential speedup* for any application when such a small fraction of even an ideal candidate application has that potential.
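The coprocessor split can be seen in a toy sketch (entirely my own illustration, not any published algorithm): a classical optimizer loop in which only one tiny stand-in function plays the role of the quantum circuit evaluation, while everything else is ordinary classical code.

```python
import math

# Toy sketch of the variational coprocessor pattern. The "quantum" part is
# a closed-form stand-in here; on real hardware it would be a short circuit
# evaluated over many shots, while the optimization loop stays classical.

def quantum_expectation(theta):
    """Stand-in for a quantum subroutine: <Z> measured after Ry(theta)|0>."""
    return math.cos(theta)

def minimize(theta=1.0, steps=100, lr=0.1):
    """Classical gradient-descent loop driving the quantum subroutine."""
    for _ in range(steps):
        # Parameter-shift rule: the gradient itself comes from two more
        # "quantum" evaluations, still only a sliver of the total work.
        grad = (quantum_expectation(theta + math.pi / 2)
                - quantum_expectation(theta - math.pi / 2)) / 2
        theta -= lr * grad
    return theta

theta_opt = minimize()
# The loop converges toward theta = pi, where <Z> reaches its minimum of -1.
```

Note how small the quantum fraction is: three one-line "circuit" evaluations per iteration, with all of the control flow, bookkeeping, and convergence logic running classically.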

# Must map problems to solutions using the raw physics of quantum mechanics

By the middle of the summer of 2018 it had become clear to me that the essence of designing a quantum algorithm was to map a problem to a solution based on the *raw physics of quantum mechanics*, in contrast to classical computing, where most algebraic equations can be directly mapped to comparable mathematical operations on a classical computer.

I explored this in a paper I posted in August 2018: **The Greatest Challenges for Quantum Computing Are Hardware and Algorithms**.

Granted, the physics operations of quantum mechanics are actually fairly simple — rotations in three dimensions plus operations to cause superposition and entanglement, all using unitary matrices and column vectors, but mapping real-world concepts to raw physics is a very nontrivial problem — a problem that only a physicist could love.

I have yet to see a clear exposition of real-world design patterns and rules that make it easy — let alone trivial — to map real-world problems to quantum logic gates or unitary matrices. We need much better if quantum computing is to have any hope of entering the mainstream of computing.
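To make "raw physics" concrete, here is a minimal sketch (my own illustration, in pure Python) of one of those physics building blocks: a rotation about the Y axis, expressed as a unitary matrix applied to a statevector.

```python
import math

def ry(theta):
    """Unitary matrix for a single-qubit rotation about the Y axis.
    Rotations like this, plus superposition and entanglement operations,
    are the entire vocabulary a quantum algorithm must be expressed in."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s],
            [s, c]]

def apply(gate, state):
    """Multiply a 2x2 unitary into a 2-element column vector (statevector)."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Ry(pi/2) applied to |0> = [1, 0] yields an equal superposition of |0> and |1>.
state = apply(ry(math.pi / 2), [1.0, 0.0])
```

The matrix algebra itself is trivial; the hard, unsolved part is mapping a real-world concept, a delivery route, a portfolio, a molecule, onto nothing but rotations like this one.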

# Lack of confidence that many current quantum algorithms will scale

At some point I began to realize that I didn’t have any great confidence that a lot of the published quantum algorithms would necessarily scale well for significantly larger input data. Shor’s algorithm was a good case in point on that score.

The uncertainty of scaling of quantum phase became a concern of mine as well.

I was also concerned that I didn’t see any significant discussion of scaling in published algorithms. Scaling hasn’t seemed to be a priority.

In short, scaling of existing algorithms may be a major obstacle to achieving production-scale quantum applications.

# Tedium of a low-level programming model

It didn’t take me long to realize what a low-level programming model was being used for quantum computing. It looked and felt comparable to *assembly language* programming on a classical computer.

While it’s true that any computer needs a low-level machine language, that doesn’t mean that is the level that algorithm designers and application developers should be working at.

The solution is that we need a high-level programming model.
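To illustrate the tedium, here is a pure-Python sketch of my own (not any particular framework’s API): even preparing a simple two-qubit Bell state means spelling out every gate as an explicit matrix applied to a statevector.

```python
import math

def apply_gate(state, gate):
    """Multiply a 4x4 gate matrix into a 4-element two-qubit statevector."""
    return [sum(gate[r][c] * state[c] for c in range(4)) for r in range(4)]

h = 1 / math.sqrt(2)

# Hadamard on qubit 0 (the high-order bit), identity on qubit 1: H tensor I.
H0 = [[h, 0,  h,  0],
      [0, h,  0,  h],
      [h, 0, -h,  0],
      [0, h,  0, -h]]

# CNOT: control qubit 0, target qubit 1 (swaps the |10> and |11> amplitudes).
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1.0, 0.0, 0.0, 0.0]      # start in |00>
state = apply_gate(state, H0)      # superposition on qubit 0
state = apply_gate(state, CNOT)    # entangle: the Bell state
# state is now [0.707.., 0, 0, 0.707..], and this took two hand-built matrices.
```

Two gates, two hand-assembled 4x4 matrices. This is the assembly-language level at which quantum algorithms are still written today.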

# Need a high-level programming model

It’s super-clear to me that there is a crying need for a high-level programming model for quantum computing. Programming at the low level of individual quantum logic gates is far too tedious and error-prone to use for development of production-scale algorithms and applications.

What would a high-level quantum programming model look like? Unknown at this stage.

In truth, it’s not even clear to me that we fully understand the most appropriate machine architecture, let alone the most appropriate low-level programming model.

Although, technically, the high-level programming model shouldn’t necessarily be driven by the machine architecture or low-level programming model. In fact, it should probably be driven in the opposite direction, from the top down. The designers of the early classical computers had classical mathematics, with its algebraic equations, to drive their decisions about what low-level machine operations would be needed as building blocks for implementing high-level mathematical models.

Maybe for the niche of simulating physics the current gate model is optimum, but for other application niches other high-level abstractions are needed.

So, for now, it’s simply a gigantic void that will need to be filled at some later date.

# Potential for quantum-inspired algorithms and quantum-inspired computing

One intriguing possibility which I stumbled upon in the Summer of 2018 was the prospect of *quantum-inspired algorithms* and *quantum-inspired computing*. The essence is that the design of great quantum algorithms requires such incredible *out-of-the-box thinking*, and once you’ve done that you may in fact be able to implement a similar approach on a classical computer which is much more efficient than traditional approaches to the design of classical algorithms.

I explored this briefly in a paper I posted in August 2018: **The Greatest Challenges for Quantum Computing Are Hardware and Algorithms**.

There is no free lunch — no direct and efficient mapping from a quantum solution to a classical solution, but the thinking needed to unlock a quantum solution is so intense, intuitive, and insightful that some of that intensity, intuition, and insight has a fair prospect of unlocking a comparable degree of innovation in classical computing as well.

The downside or limitation is that a classical computer or even a large cluster of classical computers won’t have the raw exponential potential of a quantum computer.

But, the full potential of an exponential advantage may not be needed to achieve a dramatic improvement for a classical solution. In some cases, a quantum algorithm may use a fully general exponential solution simply because it is easier than trying to craft a narrow quantum solution that does only as much computation as needed. A classical, quantum-inspired algorithm could afford to invest the additional effort to find a narrow, carefully-crafted solution, provided it delivers a dramatic advantage over a traditional classical approach.

# Odd, cryptic, and poorly-defined terminology and heavy reliance on Greek symbols

The odd, cryptic, and poorly-defined terminology and heavy reliance on Greek symbols of both quantum mechanics and quantum computing did nothing to make quantum computing more appealing — to me or many others. And in my mind, none of this oddness was required — plain language, clear terminology, and sensible naming would have sufficed. If not for these key deficiencies, I might have gotten more deeply into quantum computing at an earlier stage. Especially if interactive simulators had been available much sooner as well — lack of real hardware was not the critical stumbling block per se.

# Lack of a quantum simulator in the early years

I have to wonder: Why didn’t they have a quantum simulator in the early days, long before real quantum hardware became feasible?

In truth, it wouldn’t have helped much seeing as how there was no high-level programming model available then (or now!). But still… Having a low-level quantum simulator would have incentivized development of a high-level quantum programming model.

Just imagine if there had been a high-level quantum programming model and supporting simulator back in the early 1980’s. That would have inspired the development of a lot of quantum algorithms — *practical* quantum algorithms. And those algorithms would have incentivized a higher level of investment in research for real quantum hardware.

It would have incentivized *quantum-inspired algorithms* as well, designed to run efficiently on classical computers as well as quantum computers.

Missed opportunities. Oh well. Hindsight is 20/20.

# Probabilistic results, statistical aggregation, and approximate determinism

Quantum computing makes little sense until you grasp the probabilistic nature of quantum mechanics, coupled with the fact that to get results which will be meaningful in the real world it is necessary to perform statistical aggregation, and only then can you approximate the degree of determinism which we need in the real world.

The superficial puff pieces which you read about quantum computing never paint this essential picture of the true nature of quantum computing — and the true nature of everything in the real world, all of which is based on the same quantum mechanics of probabilistic outcomes.
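A small simulation (my own toy, with an assumed ideal distribution rather than a real device) shows the essential pattern: each run of a circuit yields one random outcome, and only statistical aggregation over many shots recovers a usable answer.

```python
import random
from collections import Counter

random.seed(2020)  # reproducible toy run

# Assumed ideal outcome distribution of some circuit (Bell-state-like here).
ideal = {"00": 0.5, "11": 0.5}

def one_shot():
    """One circuit execution: a single outcome sampled probabilistically."""
    return random.choices(list(ideal), weights=list(ideal.values()))[0]

shots = 1000
counts = Counter(one_shot() for _ in range(shots))
estimate = {outcome: n / shots for outcome, n in counts.items()}
# estimate approximates the ideal 50/50 split; any single shot on its own
# tells you almost nothing about the underlying distribution.
```

This is the part the puff pieces skip: the aggregation step is not optional bookkeeping, it is how a probabilistic machine yields an answer you can act on.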

# Shot count and circuit repetitions

One technical issue which had bugged me for quite some time was the issue of needing to run a quantum circuit some number of times to get a statistical average result. A simple concept but the devil is in the details, and most published algorithms only mention it briefly in passing if at all.

None of the introductions to quantum computing which I have encountered have even mentioned the essential need to perform *statistical aggregation* to get results from a quantum computer which can be used in the non-quantum, classical real world.

So I dug into it some more and finally wrote and posted an entire informal paper on this topic in July 2020:

The big concern is twofold: 1) how to properly calculate the number of repetitions needed, and 2) whether the needed number of repetitions, especially for a NISQ device, might be so outrageous as to possibly render a quantum solution unworkable or yield performance far worse than expected.

The real truth is that the results of a quantum computation will be a *probability distribution* of results. There may indeed be a single result which has the highest probability, but it’s also possible that the distribution has no single clear peak, in which case the application must decide whether to pick one of the peaks at random, average the peaks, try again, accept multiple results, or conclude that no deterministic result is possible for this particular input data and parameters. That’s the reality of quantum computing — the probabilistic nature of quantum mechanics.

It could be that the quantum computing hardware simply isn’t up to the task, or that the algorithm designer or application developer is not utilizing the hardware properly, or that no deterministic result is possible for the problem as designed.

Most commonly, it is simply that the task of designing quantum algorithms and developing applications which use those algorithms is itself not a completely deterministic process, requiring iteration and experimentation to figure out the most optimal parameters for yielding a sufficiently deterministic result suitable for the real world.
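For the first concern, a standard statistical bound (the Hoeffding inequality, my own choice of tool here, not anything specific to quantum computing) gives a rough ceiling on how many repetitions are needed to pin down an outcome probability.

```python
import math

def shots_needed(epsilon, delta):
    """Hoeffding bound: repetitions sufficient to estimate an outcome
    probability to within +/- epsilon with confidence 1 - delta.
    A worst-case ceiling, not a tight per-algorithm answer."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

# Modest precision is cheap, but high precision gets expensive fast:
# shots_needed(0.05, 0.05) is a few hundred repetitions, while
# shots_needed(0.001, 0.05) runs into the millions.
```

The quadratic blowup in 1/epsilon is exactly why an outrageous repetition count can quietly erase a quantum algorithm’s theoretical speedup.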

# Exponential speedup — isn’t free, easy, and automatic

*Exponential speedup* is the whole point of quantum computing, but people chatter about it as if it were free, easy, and automatic for all quantum algorithms and applications, which isn’t true. In fact, it takes a lot of careful analysis and attention to detail to achieve exponential advantage, and many algorithms won’t be as successful as their designers might have hoped.

Even the much-vaunted *Grover search algorithm* achieves only a *quadratic speedup*.

The simple truth is that each algorithm will have its own performance characteristics, and the designers or implementers will need to carefully document those performance characteristics — the algorithm’s *actual quantum advantage* or *effective quantum advantage*. I discussed this topic at length in **What Is the Quantum Advantage of Your Quantum Algorithm?**, posted in February 2020.
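The Grover gap is easy to quantify with a back-of-the-envelope sketch (my own illustration): unstructured search over N items costs on the order of N classical queries, versus about (pi/4)·sqrt(N) Grover iterations, a quadratic but decidedly non-exponential saving.

```python
import math

def grover_iterations(n_items):
    """Optimal iteration count for Grover search with one marked item:
    roughly (pi/4) * sqrt(N)."""
    return math.floor(math.pi / 4 * math.sqrt(n_items))

def classical_expected_queries(n_items):
    """Expected queries for a classical linear scan: about N / 2."""
    return n_items // 2

# For a million items: ~500,000 classical queries vs. ~785 Grover
# iterations. Impressive, but quadratic, not the exponential speedup
# people casually assume.
```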

# Many more algorithmic building blocks, design patterns, and application frameworks are needed

I’m patiently waiting for the development of a much richer collection of algorithmic building blocks, design patterns, and application frameworks. What we have right now just doesn’t impress me that much anymore.

Quantum algorithms and applications must be crafted in too painstaking a manner today. That won’t cut it for mainstream adoption of the technology.

# Need credible, real-world example algorithms and applications

The current raft of quantum algorithms is too esoteric to demonstrate how credible algorithms can be designed to meet the joint challenges of real-world problems and quantum computing.

Give people a handful of credible models to follow, and progress will be amazing. But without credible models for algorithms and applications, progress will remain excruciatingly slow and painful.

Lack of credible models for real-world algorithms and applications is a clear obstacle to rapid progress.

# Twin qubit challenges: isolation and maintaining entanglement

It gradually became clear to me that we are faced with twin qubit challenges: isolation of the quantum state of individual qubits and maintaining entanglement of entangled qubits.

On the one hand we want a qubit to maintain its state without being affected by the states of other qubits or the general environment, including robust operations on individual qubits which do not affect other qubits, while we also want selected qubits to be explicitly entangled in a way that maintains that entanglement over time, regardless of operations performed on qubits which are not entangled with the entangled qubits. And we certainly don’t want entanglement operations to either affect qubits other than those being entangled or to inadvertently entangle more qubits than specified.

Maintaining isolation and coherence of entanglement are both very difficult challenges. Much more fundamental research is required. And plenty of engineering cleverness as well.
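One toy way to see why isolation matters so much (a standard textbook exponential-dephasing model, simplified by me, not a description of any specific device): coherence decays exponentially with time relative to the qubit’s dephasing time T2, so a circuit’s running time must stay a small fraction of T2.

```python
import math

def coherence_remaining(t_us, t2_us):
    """Toy exponential dephasing model: fraction of phase coherence left
    after running for t_us microseconds on a qubit with dephasing time T2."""
    return math.exp(-t_us / t2_us)

# With an assumed T2 of 100 microseconds, a 10 us circuit keeps ~90% of its
# coherence, but a 300 us circuit keeps only ~5%. Longer algorithms simply
# decohere away before they finish.
```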

# We know too little about the granularity of phase

My biggest technical concern at the moment I write this is not how many qubits we have or coherence or connectivity of qubits, but the fact that so many algorithms are depending on the granularity of phase and probability amplitude when we have no visibility or transparency into what the theoretical or practical limitations of phase granularity will be in future quantum computers.

These algorithms may work for a relatively small number of qubits and gates and small input data, but for larger application problems phase may not scale well or even scale at all beyond toy applications.

This impacts quantum phase estimation, quantum Fourier transform, probability amplitude estimation, and order finding. This is a huge question mark over the quantum computing venture. A lot is at stake and a lot is at risk.
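The arithmetic behind this worry is simple (my own illustration): the finest controlled-phase rotation in an n-qubit quantum Fourier transform is 2*pi / 2^n radians, which shrinks exponentially with problem size.

```python
import math

def smallest_qft_phase(n_qubits):
    """Finest controlled-phase rotation angle in an n-qubit QFT:
    2 * pi / 2**n radians."""
    return 2 * math.pi / 2 ** n_qubits

# For 3 qubits the finest rotation is pi/4, which is easy. For 50 qubits it
# is already below 1e-14 radians, and nobody has said whether any hardware
# will ever be able to resolve phase that finely.
```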

# RISC vs. CISC issues for firmware

An issue that popped up at times but which I haven’t dug into deeply is what I call the RISC vs. CISC issue from classical computing (Reduced Instruction Set Computing vs. Complex Instruction Set Computing): should the firmware of a quantum computer support only the simplest, most basic operations, or should the firmware support higher-level, complex operations, such as quantum Fourier transform, phase estimation, swap networks, and order finding, which the firmware itself then decomposes into the lower-level raw operations supported by the hardware?

Sometimes optimization is easier to do closer to the hardware, but sometimes it’s better to do the optimization higher up in the software, such as the compiler, where more information is available to guide the optimization of scarce hardware resources.

Another issue is that some of these higher-level operations decompose into very large numbers of lower-level gate operations, which can consume a lot of network bandwidth in transit to the machine; that bandwidth wouldn’t be needed if the decomposition into gates occurred right in the firmware.
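To make the bandwidth argument concrete, here is a back-of-the-envelope count of what shipping a fully decomposed quantum Fourier transform over the wire costs versus a single hypothetical "CISC" QFT opcode. The assumption that each controlled-phase gate decomposes into about three hardware-level gates is my own illustrative guess, not a hardware spec:

```python
def qft_gate_count(num_qubits, cphase_cost=3):
    """Rough low-level gate count for a fully decomposed n-qubit QFT:
    n Hadamards plus n*(n-1)/2 controlled-phase gates, each assumed to
    decompose into cphase_cost hardware gates (an illustrative guess)."""
    hadamards = num_qubits
    cphases = num_qubits * (num_qubits - 1) // 2
    return hadamards + cphases * cphase_cost

# One high-level QFT opcode vs the decomposed gate stream it stands for.
for n in (5, 20, 50):
    print(f"QFT on {n:2d} qubits: 1 opcode vs ~{qft_gate_count(n)} gate messages")
```

The quadratic growth in gate messages is the point: a 50-qubit QFT is one opcode for CISC-style firmware but thousands of individual gate operations if decomposition happens on the client side.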

The issue is unresolved, but worthy of research.

# Emphasis on Shor’s algorithm seems unwarranted

As highlighted in my quantum journey timeline, I have significant reservations about the feasibility of Shor’s algorithm for factoring very large semiprime numbers such as public encryption keys. Despite the lack of a credible and robust implementation, people constantly refer to it as if it were a done deal, when it isn’t.

This misguided emphasis on Shor’s algorithm as being the apex of *accomplishment* in quantum computing only serves to highlight the extent to which the hype has gotten much too far ahead of reality.

As I noted on my timeline in 2018:

- In the late summer of 2018 I finally felt comfortable trying to tackle Shor’s algorithm for factoring large semiprime numbers — not because I cared about cracking large public encryption keys, but because it seemed as if almost every academic paper would tout the algorithm as being the peak of quantum computing, so I figured that I needed to comprehend how it worked. I started by reading the original paper — or at least the preprint on arXiv.
- Initially I was very impressed by Shor’s algorithm, but the more I dug into it the more skeptical I became. The paper had too many gaps, leaps, and hand waves, and too little crystal clarity and specificity for my taste. And, worst of all, I grew concerned that it was not clear whether it would really work for very large numbers, due to practical limitations such as phase granularity and concern about how many circuit repetitions might be needed to get statistically valid results. In other words, I had finally arrived in the world of pragmatic considerations for quantum computing!
- By the end of September 2018 I posted a list of all of my open questions about Shor’s algorithm. Most of them are still open, from my perspective. That paper included a lot of references for Shor’s algorithm for anyone wishing to dig deeply into it, including variations on the original algorithm and attempted implementations.
- In October 2018 I posted an abbreviated summary of the various pieces of Shor’s algorithm: *Ingredients for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*.

And then in 2019:

- By the end of December 2019 I still hadn’t fully mastered the nuances of Shor’s algorithm, but since the algorithm is of continued interest, I posted an updated version of my long list of references for Shor’s algorithm from September 2018, for anyone wishing to dig deeply into it, including variations on the original algorithm and attempted implementations: *References for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*. As far as I can tell, nobody has implemented Shor’s algorithm on a real quantum computer to factor anything larger than 15 (3 x 5) and 21 (3 x 7). I found one simulated implementation that worked for larger two-digit numbers, but that’s about it. I had hoped to see an implementation for a 32 to 40-qubit simulation, but I’ve seen none so far. I think it’s worth noting that the various implementation efforts diverge significantly from the original paper. To this day, I don’t think there is a single clean reference implementation which is 100% according to Shor’s original paper. And this is for an algorithm which is still widely considered as *proof* of what a quantum computer can do.

Until something changes dramatically, either on the hardware front or the algorithm front, I think Shor’s algorithm should be retired as an exemplar of what quantum computing can *actually do*. I don’t think anybody seriously believes that Shor’s algorithm will be practical in even five years, or maybe even seven. And as I indicated, I have serious concerns about whether it will even theoretically work for very large numbers, even on an ideal future practical quantum computer.
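To be clear about what the quantum computer actually contributes, here is a sketch of the classical scaffolding of Shor’s algorithm, with the order-finding step, the only quantum part, replaced by classical brute force. This is my own illustration of the structure, not an implementation of the paper:

```python
from math import gcd

def order(a, n):
    """Classically find the multiplicative order of a modulo n --
    the step a quantum computer is supposed to speed up exponentially."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(a, n):
    """Shor's classical post-processing: given the order r of a mod n,
    try to recover nontrivial factors of n via gcd(a**(r//2) +/- 1, n)."""
    r = order(a, n)
    if r % 2:
        return None  # odd order: retry with a different a
    x = pow(a, r // 2, n)
    p, q = gcd(x - 1, n), gcd(x + 1, n)
    if 1 < p < n:
        return p, n // p
    if 1 < q < n:
        return q, n // q
    return None  # trivial factors: retry with a different a

print(shor_classical_part(7, 15))  # order of 7 mod 15 is 4 -> (3, 5)
```

The whole difficulty lies in obtaining the order r for numbers with hundreds of digits; that is precisely where the quantum circuit, and my concerns about phase granularity and circuit repetitions, come in.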

# Emphasis on Grover’s algorithm seems unwarranted

Also as highlighted in my quantum journey timeline, I have significant reservations about the utility of Grover’s search algorithm. Back in 2018:

- By the summer of 2018 I had finally read enough detail about *Grover’s search algorithm* to realize that much of the hype really was just hype. The algorithm was billed as being able to search a “database”, but a database is highly structured, while Grover’s algorithm actually searches only linear, unstructured data. A little bit of disenchantment began to set in on my part. At least there was still *Shor’s algorithm*, capable of factoring even the largest of public encryption keys, the veritable *Mount Everest* of early quantum computing algorithms to captivate all of us. Or at least that’s what I thought before I dug into Shor’s algorithm.

Although, unlike Shor’s algorithm, Grover’s algorithm may have some credible implementations for today’s limited quantum computing hardware and simulators, the benefits are rather dubious. The whole point of quantum computing is a *quantum advantage*, ideally an *exponential advantage*, but at best Grover’s algorithm promises only a *quadratic advantage*. And even that is measured against a simple linear search, while classical developers have long since developed indexing approaches which deliver better performance than Grover could deliver even in theory.

There may be some niche applications where Grover’s algorithm may have value in quantum computing, but *database search* is not one of them.

Again, if there are some niches where Grover’s algorithm could excel, fine, but the rhetoric suggesting that Grover’s algorithm is a *really big deal* and a *tremendous advantage* over classical computing is simply unwarranted.
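The quadratic advantage is easy to see numerically. Below is a toy simulation of Grover’s amplitude amplification over a plain list of amplitudes (no quantum library, just the oracle sign-flip and the inversion-about-the-mean diffusion step), finding one marked item among 1,024 in about 25 queries versus roughly 512 probes for a classical linear search, and versus a single lookup for a classical index:

```python
import math

def grover_iterations(n_items):
    """Optimal Grover iteration count for one marked item: floor(pi/4 * sqrt(N))."""
    return max(1, int(math.pi / 4 * math.sqrt(n_items)))

def grover_success_probability(n_items, marked=0):
    """Toy statevector simulation of Grover search: probability of
    measuring the single marked index after the optimal iteration count."""
    amps = [1 / math.sqrt(n_items)] * n_items
    for _ in range(grover_iterations(n_items)):
        amps[marked] = -amps[marked]       # oracle: flip the marked amplitude
        mean = sum(amps) / n_items         # diffusion: invert about the mean
        amps = [2 * mean - a for a in amps]
    return amps[marked] ** 2

n = 1024
print(f"{grover_iterations(n)} quantum queries, success probability "
      f"{grover_success_probability(n):.3f}, vs ~{n // 2} classical probes")
```

Note that sqrt(N) queries is still far worse than the near-constant cost of a classical index lookup, which is the heart of the "database search" objection above.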

# DiVincenzo’s five criteria (requirements) for a quantum computer

At some point in 2018 I became aware of theoretical physics researcher David DiVincenzo and his proposed *five criteria (requirements) for a quantum computer*, given in his seminal paper from 2000: **The Physical Implementation of Quantum Computation**.

The five criteria (requirements):

- A scalable physical system with well characterized qubits.
- The ability to initialize the state of the qubits to a simple fiducial state, such as |000…>.
- Long relevant decoherence times, much longer than the gate operation time.
- A “universal” set of quantum gates.
- A qubit-specific measurement capability.

Those five criteria suffice for *computing* alone, but DiVincenzo recognized the need for, or at least the desirability of, quantum *communication* as well, so he proposed two additional criteria:

- The ability to interconvert stationary and flying qubits.
- The ability faithfully to transmit flying qubits between specified locations.

A *stationary qubit* is a hardware device, as is common in all existing quantum computers. It remains in a single location.

A *flying qubit* is typically a photon which carries its quantum state — typically entangled with the quantum state of a second photon which remains at the source location — from a source location to a destination location. This is needed to implement a *quantum communication channel* (or simply *quantum channel*.)

# Boundless cleverness of algorithm designers

One of the most inspiring aspects of algorithm design and software design in general is the *boundless cleverness* of the human mind. Even given very severe limits, algorithm and software designers always seem to be able to come up with very interesting, even exotic, techniques to bypass all manner of limits and obstacles.

Quantum computers clearly present all manner of limits and obstacles. Much cleverness is required to accomplish anything non-trivial.

By the same token, designers and developers of classical algorithms and applications are also constantly rising to the challenge and coming up with novel techniques to deal with the limits of classical computers.

The net result is that it is far too soon to judge that classical computing has reached its limits.

Even if a clever algorithm is designed to achieve a quantum advantage on a quantum computer, never underestimate the ability of classical algorithm designers and application developers to come up with even more clever approaches that may in fact compete with or even exceed the results of the cleverest known quantum algorithm.

But even then, quantum algorithm designers have the capacity to rise to the challenge and outdo both their own best existing quantum algorithms and the cleverest of classical algorithms.

Rinse and repeat.

Where will it end? At some point classical algorithms will no longer be able to compete with quantum algorithms, right? Yes, in theory, but exponential advantage is quite elusive, and quantum hardware is quite limited, so it’s a fool’s errand to count classical algorithms out just yet.

# My sources of information

In no particular order, since it varies greatly depending on time and context, I’ve gotten information related to quantum computing and quantum mechanics from:

- News stories.
- Press releases.
- Google keyword searches.
- Wikipedia articles.
- Free academic paper preprints on arXiv.
- IBM Qiskit.
- LinkedIn status updates by quantum people in my connections. Links to papers, news, etc.
- Specifications. Rare!
- Documentation. Spotty.
- Books.
- Google Books search results.
- On rare occasions, Google searches would find results on *Quantum Computing Stack Exchange* with questions and answers, but I found the utility and quality of the answers insufficient for my purposes. That was a major disappointment. Ditto for Quora.
- Blogs. Occasionally, but not a primary source for me.
- Medium. Longform articles. All of my writing is there. Occasionally I find something interesting as well.
- Videos.
- Online courses.
- Lectures. Including videos, or at least lecture notes.
- Lecture notes.
- Websites. Vendors. Organizations. Academic projects. Professors.
- GitHub repositories for code and project files.

Generally, if it isn’t online, costs money, is hidden behind a paywall, or requires registration, then it doesn’t exist as far as I am concerned.

I always keep two Google News windows open, one for “quantum computer” and one for “quantum computing”. They provide me with a significant news flow as well, although LinkedIn is usually enough for my needs and interests.

Phys.org is a good source of news on quantum computing, at least in terms of fresh academic papers. They monitor new additions of papers to arXiv and post readable plain language descriptions, which frequently show up in Google News for quantum computing.

# Wikipedia articles related to quantum computing

These are the Wikipedia articles which I consulted fairly heavily in the spring and summer of 2018:

- https://en.wikipedia.org/wiki/Quantum_computing
- https://en.wikipedia.org/wiki/Quantum_mechanics
- https://en.wikipedia.org/wiki/Quantum_logic_gate
- https://en.wikipedia.org/wiki/Timeline_of_quantum_computing

# IBM Qiskit

I definitely got a lot of information from the various web pages IBM posted as part of its Qiskit effort. The website has evolved significantly in just the few years since I first accessed it in 2017, but even then it had a lot of interesting content on quantum computing.

The IBM Qiskit Textbook wasn’t available when I started, but individual web pages on the IBM website had a lot of comparable content.

# IBM Qiskit Textbook

Although the IBM Qiskit Textbook wasn’t available when I started, and I’ve since outgrown the need for a lot of its content, it is probably the place I would start if I were getting into quantum computing today from scratch.

# MIT online quantum courses

I slogged through three online (OCW) MIT undergraduate *quantum physics* (quantum mechanics, but emphasizing its relevance to physics) courses:

- 8.04 Quantum Physics I (Spring 2013) — Prof. Allan Adams. Did cover quantum computing very briefly in one lecture.
- 8.04 Quantum Physics I (Spring 2016) — Prof. Barton Zwiebach. Same course, but differences in material covered. Good refresher.
- 8.05 Quantum Physics II (Fall 2013) — Prof. Barton Zwiebach.

All three courses provided full video, and some lecture notes.

In truth, you also need to read the recommended textbooks if you really want to become fully proficient in all of the material.

There was a next course, but it didn’t have video, so I skipped it:

- 8.06 Quantum Physics III (Spring 2016) — Prof. Aram Harrow. Includes a lecture on quantum computing.

I could only find one course on quantum computing, but it had no video and was missing most lecture notes:

- 18.435 Quantum Computation (Fall 2003) — Prof. Peter Shor.
- Additional web page for 18.435.

# MIT xPro Quantum Computing Fundamentals — beyond my budget

My budget for training and education in quantum? $0. Absolutely zero. There is a lot of online information available for free, so there’s no great incentive for me to pursue an expensive course. If I were an employee at a Fortune 500 company, sure, then I would have considered some expensive courses or professional training, but I’m not, so I didn’t.

For example, MIT xPro Quantum Computing Fundamentals — two courses for only $2,149.

I have no experience with those two courses, so I cannot offer a recommendation one way or the other.

There may be other comparable courses from other institutions or professional organizations.

# My budget in general: $0

My budget in general for quantum is $0. Absolutely zero. That includes courses, seminars, conferences, travel, books, subscriptions, etc.

# Books

Books on quantum computing and quantum mechanics have not been a significant source of information for me. Lecture notes, academic papers, Wikipedia articles, online courses, and web sites have most of what I need and have used.

That said, my Google keyword searches would occasionally surface results from books indexed by Google Books. These are good for reading short passages or even a couple of pages, for free. Google Books also has a feature to directly search within a particular book or read previews of portions of the book, also for free.

Occasionally my Google keyword searches would also turn up book results which were actually *illegal bootleg copies* of books, uploaded as PDF files. I did in fact occasionally consult such sources, but I won’t provide any details about them or identify or link to them since they are in fact *illegal*. I actually contacted an author regarding two of them and confirmed that they were not authorized uploads.

There is one free online book of sorts that I can link to, the *IBM Qiskit Textbook*:

I can’t personally speak to the relative value of the Qiskit Textbook compared to any of the popular paper and electronic textbooks you can find on Amazon, other than that it is free and readily available online.

# Blogs

Sometime in 2018 I became aware of Prof. Scott Aaronson’s blog, which focuses on quantum computing. I’ve occasionally found it interesting and useful, mostly for specific technical issues, but overall I don’t read it on a regular basis.

Overall, I haven’t been reliant on blogs to any significant degree. On occasion something interesting and useful will show up on a Google search or a link on a LinkedIn post, but that’s the exception rather than the rule, at least for me.

# Medium

Medium.com is a great source for longform writing (not short blog posts). All of my own writing is posted on Medium.

Occasionally I will find something interesting on quantum computing, quantum mechanics, or physics in general on Medium, but it’s not common for me to find anything particularly useful there.

# Videos

I haven’t found most of the online videos on quantum computing or quantum mechanics — other than those of the MIT quantum mechanics courses — to be of any significant value, to me at least. Maybe I’ve just gotten enough from online text not to need videos. Besides, I currently find academic papers far more interesting and informative.

# Lecture notes

I encountered so many lecture notes related to quantum computing during the early days of my quantum journey that I gave up keeping track of them. But maybe that’s okay since generally I only used fragments from each to answer specific technical questions. Maybe an exception is for the MIT quantum mechanics course lecture notes, but even they were incomplete and inconsistent at times.

Lecture notes are actually very easy to find. Just type in the keywords that you’re interested in, such as the name of an algorithm or a quantum concept, and tack on the keywords “*lecture notes*” and Google will do the rest. And if you want to add a keyword for the name or abbreviation of an academic institution to focus your search, that will work as well. Or even the name of a preferred professor.

One exception to my habit was that I did keep a record of lecture notes which I found helpful for understanding Shor’s algorithm for factoring large semiprime numbers:

- *References for Shor’s Algorithm for Cracking Strong Encryption Using a Quantum Computer*: https://medium.com/@jackkrupansky/references-for-shors-algorithm-for-cracking-strong-encryption-using-a-quantum-computer-6c87edec0d9e

But even there I recall encountering many other lecture notes which touched on Shor’s algorithm which I didn’t record in my notes.

# My writing on quantum computing

I actually didn’t start on my quantum journey with the explicit intention of doing much writing — my main interest was simply to understand the concepts. But as I began to get into it, it just felt natural to write about what I had learned, what questions, issues, challenges, opportunities, and limitations I had identified, and to speculate about the future.

My motivation for writing is driven in large part by my belief that the best way to learn is to try to write about a topic and then read and research to fill in the gaps. I treat reading, research, and writing as one integrated activity.

All of my writing is posted on *Medium*. I call my writing *informal papers* — lacking the formalities of formal academic papers, such as structure, formatting, and formal citations, but long and detailed nonetheless, far beyond mere blog posts. I rarely write anything less than ten to twenty pages long. Twenty to sixty pages is more common for me.

I post a status on LinkedIn to let people know about my latest writing.

And I keep a list of all of my informal papers: **List of My Papers on Quantum Computing**.

As I read and research I keep notes in a private Google Doc, *Quantum Computing Notes* (not public), especially basic information and links for papers I stumble across, or particularly notable news stories. Currently I have 169 pages of notes. I have a separate private Google Doc, currently 145 pages, for *To Do* items and miscellaneous notes that don’t seem worthy of my main notes, which I reserve for the important stuff.

My to-do list of topics to read, research, and write on was getting out of hand and scattered all over my notes, so I consolidated the topics in a list and posted it for public access: **Future Topics for My Writing on Quantum Computing**. Even now, in 2020, I have a lot more topics in newer notes that I need to add to that list.

One key thing about my writing: I’m a diehard text guy — graphics, diagrams, and images are *not* my thing. Although lots of short sections, short paragraphs, numbered and bullet point lists, and links are my things. I know that lack of graphics severely limits the appeal of my writing, but that’s who I am.

# What is Quantum Computing?

With all of my writing, I am still not ready to write a decent **What is Quantum Computing?** paper. I have plenty of the puzzle pieces, but there are still too many nooks and crannies that feel a little bit too mysterious to me so far. Maybe I’m being a little too ambitious — sure, I could write yet another *puff piece*, as many others have, but another puff piece is not needed.

What I really want to write about is *how* a software developer can transform a problem into a solution which really does exploit the exponential power of quantum computing, but I don’t think even the best algorithm designers know that yet — I think we need a much higher-level programming model first, which is not available right now, nor even on the near horizon.

I know a big part of the focus needs to be *quantum parallelism*, but I’m not happy with the way most people are approaching it, and I personally haven’t yet settled on my own preferred approach.

Exactly how to present the concept of *phase* is an open issue for me as well. The current vagueness about the *granularity* of phase, especially for current NISQ machines, as well as its scalability, even for ideal quantum computers, is a major stumbling block for how to accurately present quantum computing.

Every month or so I am once again tempted to take a shot at writing **What is Quantum Computing?**, but so far each time I’ve decided to put it off. Maybe next month will be the time that I go for it. Rather than expecting it to be the ultimate end-all, maybe I’ll just accept that it will simply be version 1.0 and that next year I can always revise it or replace it with a version 2.0.

# My social media presence

I had been active on *Twitter* in the early days (2005), but gave up on it over ten years ago (2010) since I didn’t find it providing me with any value.

I’m not on all social media properties, but I am on some.

All of my main, longform writing, which I refer to as *informal papers*, is posted on *Medium*.

I post status updates and announcements of all of my Medium writings on LinkedIn.

I post announcements of all of my Medium postings on Facebook as well.

That’s it. No Instagram or Pinterest, or any of the others.

# In hindsight, what path would I take if I was starting from scratch in 2020?

Where would I start and what path would I take if I was starting from scratch today? Tough question! It’s complicated. It all depends — on where I wanted to end up.

Honestly, if I was starting over from scratch, my frank advice to myself would be to *wait 2–5 years* before starting; since so many advances in the technology are required before it will be *practical*, I might as well find something *useful* to do with my time in the meantime. Especially since classical computing, including large distributed clusters, GPUs, and FPGAs, will provide a lot of additional horsepower over the next 2–5 years, and smart people will be needed to exploit it.

But if I wanted to dip my toe in the quantum waters, first, I’d get the 30,000-foot view by reading my overview of *quantum information science (QIS)*:

That doesn’t do a deep dive into quantum computing per se, but it should give you a feel of its context.

And I know I would have been much better prepared to dive in if I had a better understanding or at least a better appreciation for *quantum effects* which are what enable everything quantum and in fact everything we call the real world. I pulled together a relatively concise summary:

Then I’d start reading IBM’s **Qiskit Textbook** and see how far I could get before I either felt exhausted or felt the need for a lot more depth.

Beyond that, I’d have to think a lot more carefully about what I wanted to achieve — there are so many different directions to go in terms of roles and outcomes.

# What path should a newcomer take in 2020?

What if it wasn’t me starting over but a newcomer who had no quantum background?

Honestly, my frank advice would be to *wait 2–5 years* before starting; since so many advances in the technology are required before it will be *practical*, you might as well find something *useful* to do with your career in the meantime. Besides, classical computing, including GPUs and FPGAs, will provide a lot of additional horsepower over the next 2–5 years, and smart people will be needed to exploit it.

But if a newcomer is absolutely *committed* to pursuing quantum *now* since it is the *wave of the future* beyond the next 5–10 years, then…

First, get the 30,000-foot view by reading my overview of *quantum information science (QIS)*:

That doesn’t do a deep dive into quantum computing per se, but it should give you a feel of its context.

And you really do need to have some degree of understanding or at least appreciation for *quantum effects* which are what enable everything quantum and in fact everything we call the real world. I pulled together a relatively concise summary:

Were those two overviews too much? Then you may be in real trouble!

Before continuing, it all depends on which role you wish to play, such as:

- Theoretical quantum physicist.
- Basic quantum research physicist.
- Applied quantum research physicist.
- Quantum hardware engineer.
- Quantum firmware engineer.
- Quantum operating system engineer.
- Quantum compiler engineer.
- Quantum tools developer.
- Quantum simulator software engineer.
- Quantum algorithm designer. Generalist or domain specialist. Such as quantum computational chemistry.
- Quantum application developer. Wide range of application domains.
- Quantum solutions specialist.

That list is not intended to be exhaustive, but simply to highlight the diversity of paths, each with its own knowledge requirements.

Okay, so you just want to *dip your toe in the water*? Fine, you won’t go too far wrong by picking either of the following two entry points:

- IBM’s *Qiskit Textbook*. Online and free.
- Robert Sutor’s *Dancing with Qubits: How quantum computing works and how it can change the world*. Neither free nor online (except as a Packt ebook). It’s 516 pages! “*Not for the fainthearted or math averse — a thorough computer science introduction to quantum computing.*”

The simplest entry is to start reading IBM’s **Qiskit Textbook** and see how far you get before you either get exhausted or feel you want a lot more depth.

There are plenty of other paths, plenty of other books, plenty of online videos, plenty of online courses, plenty of online lecture notes, and plenty of online academic papers.

If you’re lucky, you have a non-quantum background but are being hired by a firm which is willing to train you in the aspects of quantum that they require, in which case they will guide you through the thicket (or minefield) along the path best suited to their needs.

And if you can find a *mentor* to work with you and guide you, that’s a really good thing.

# Maybe I should take a two to five-year Rip Van Winkle-like slumber

It might be better for me to take a two-year or even five-year Rip Van Winkle-like slumber to skip over all of the waiting, anxiety, and disappointment and cut directly to quantum advantage, or at least to a much more impressive stage on the way to quantum advantage.

Imagine what it would be like to skip over all of the hype and promises and awaken only when promises have been solidly delivered.

But then I would miss out on all of the drama, especially the parts where people finally realize that they bought into so many wild promises that were not fulfilled in a timely manner.

Of course I can’t actually take such a slumber, but I can and might well put quantum computing on the *back burner* until significant promises begin to be fulfilled and reality has substantially caught up with the hype. Meanwhile, I could focus my attention on other technological issues which could use my more immediate attention.

# Monitoring for advances and breakthroughs

And if I did step back from everyday focus on quantum computing, a multiyear Rip Van Winkle slumber, I would still face the challenge of maintaining some minimal level of monitoring of developments so that I wouldn’t miss the boat if and when major breakthroughs occur.

The good news is that it can take years between the time a breakthrough occurs and the time it finally finds its way into the marketplace as a viable and proven technology that is no longer a *mere laboratory curiosity*.

In any case, active focus or slumber, I will continue to do some level of monitoring of advances in quantum computing.

# My endgame?

So, what’s my endgame? Where do I imagine my quantum journey taking me?

- My ultimate goal is simply to understand quantum computing deeply: to understand *how real it is*, its capabilities, limitations, and issues.
- I’d like to *share* what I learn with others, primarily through my writing, but I’m not interested in *persuading* anyone of anything.
- I’d really like to get answers to most of my questions.
- I’d like to see amazing new developments which open the doors to a lot of interesting ideas that I can work with.
- If I could earn a little extra income on the side, that would be nice, but that’s not a primary goal or requirement. If I don’t thoroughly enjoy it, it’s not worth it.
- The eventual prospect of profiting from investment in the public stocks of quantum computer companies is intriguing, but not imminent.
- I’d like to see quantum computing become a *real thing* — capable of delivering substantial real-world value for production-scale practical applications.
- I’d like to see some applications of quantum computing which completely blow me away.
- I’d like to see a fair number of the great promises of quantum computing actually fulfilled.
- I’d like to see all of the rampant, unjustified hype go away. All of it. I doubt that will happen, but that’s my preference. The more hype that I can dispel, the better.
- I’m intrigued by mysteries and getting to the bottom of them, so as long as quantum computing has unresolved mysteries, I’m in, but the day when there are no longer any significant unresolved mysteries, I’m out.
- If some technology that is much more interesting than quantum computing comes along, I’ll switch my focus in a heartbeat. Human-level AI is one potential area, but it has a lot of the same difficulties as quantum computing — and may in fact require quantum computing, so I can pursue it in parallel. It is quite possible that a sufficiently-advanced quantum computer might enable a whole new level of AI, but that’s unlikely any time soon.
- It’s possible that quantum computing could eventually fizzle out or be superseded by something even more grand, or possibly hit some real difficult obstacles which stop it dead in the water for decades more than I have left to live. Such possibilities could end my interest in quantum computing.

# What else?

What else is there that was going on for me during my journey into quantum computing? What did I miss? What did I forget?

I’ll incrementally update this paper as I recall or run into other items that I missed.

My memory is not perfect — it seems more probabilistic like a quantum computer than solidly and reliably deterministic as a classical computer. Maybe that’s evidence that the human mind relies on quantum effects!

# Conclusions

Geez, what a tortured and meandering journey. There had to be another way! Maybe, but I did it my way, and it worked well enough for me. I certainly wouldn’t recommend my particular journey for anybody else, but I’m not so sure that I would change much if I had to do it again. Of course, with hindsight a lot would change, but we never have hindsight in the moment.

I firmly believe that the whole field of quantum computing desperately needs a hard reset and revamp to make quantum computing palatable and workable for mere mortals. As things stand right now, fairly elite staff are needed to accomplish anything of significance. Indeed, as the hardware stands, it’s *not possible* to do anything of significance. But in addition to hardware advances we need a dramatically higher-level programming model to enable advanced algorithms and to facilitate application development by mere mortals, rather than being limited to the most elite staff.

Sure, just about anybody can now compose and execute short quantum circuits using quantum simulators and even real quantum computers running in the cloud, but that’s not sufficient to achieve production-scale deployment of quantum applications.
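To make the point concrete, here is a minimal sketch of what “composing and executing a short quantum circuit” amounts to under the hood — a toy state-vector simulation of the two-gate Bell-state circuit, written from scratch in plain Python. This is purely illustrative; it is not any particular vendor SDK’s API, and real simulators and cloud services layer much more on top of this.

```python
# Toy state-vector simulation of a short two-qubit circuit (a Bell state).
# Illustrative only -- not the API of any real quantum SDK.
import math

def apply_h_q0(state):
    """Apply a Hadamard gate to qubit 0 (least significant bit) of a 2-qubit state."""
    s = 1 / math.sqrt(2)
    new = [0j] * 4
    for i in range(4):
        if i & 1 == 0:           # qubit 0 is |0> in this basis state
            new[i] += s * state[i]
            new[i ^ 1] += s * state[i]
        else:                    # qubit 0 is |1>
            new[i ^ 1] += s * state[i]
            new[i] += -s * state[i]
    return new

def apply_cnot(state):
    """CNOT with qubit 0 as control, qubit 1 as target."""
    new = list(state)
    # Flips the target only where the control is 1: swap |01> (index 1) and |11> (index 3).
    new[1], new[3] = state[3], state[1]
    return new

state = [1 + 0j, 0j, 0j, 0j]               # start in |00>
state = apply_cnot(apply_h_q0(state))      # the whole "circuit" is two gates
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]: measurement yields |00> or |11>, each half the time
```

A dozen lines of straightforward linear algebra suffice for a two-qubit circuit, which is exactly why toy circuits are so accessible — and why they say so little about production-scale applications, where state vectors grow exponentially with qubit count.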

Overall, it just feels *premature* to treat quantum computing as a technology that is *here and ready* for deployment of production-scale applications which deliver substantial real-world value.

It just feels to me that quantum computing remains a *mere laboratory curiosity*. Ask me again in a couple of years.

The best a lot of us can do is simply monitor the evolution of the quantum computing sector and hope that in two to five years the hardware and programming model will have advanced enough to enable what I call *the ENIAC moment*, when an elite team is finally able to execute a production-scale application and achieve *quantum advantage* over classical computing.

# What’s next?

- I intend to continue keeping an eye on the maximum number of qubits and circuit depths for algorithms and applications. My goal is to understand how close we are to producing algorithms and applications of significant complexity, limited not by the hardware or our basic understanding of algorithms, but by our ability to address problems of non-trivial size and complexity — where *quantum advantage* offers a dramatic benefit over classical solutions.
- My network of LinkedIn contacts for quantum computing and quantum mechanics and physics provides me with a decent news flow and links to interesting new academic papers. And a fair amount of interesting conversations on quantum computing as well.
- I’ll continue to monitor news with two Google News windows, one for “quantum computer” and one for “quantum computing”. They provide me with a significant news flow as well, although LinkedIn is usually enough for my needs and interests.
- Monitor Phys.org for news on quantum computing, at least in terms of fresh academic papers. They monitor new additions of papers to arXiv and post readable plain language descriptions, which frequently show up in Google News for quantum computing.
- There are really three categories of papers I am interested in monitoring: 1) basic research into qubit technologies and overall system architectures, 2) basic research into generic algorithms and algorithm issues, and 3) practical algorithms for real-world applications.
- I’m very anxious to see some significant advances and breakthroughs in the near future, but the past year hasn’t been as eventful as I had hoped.
- I do actually find myself growing somewhat disillusioned and disenchanted with quantum computing, at least in its current state. Too many of the promises remain unfulfilled and progress is slower than I expected two years ago.
- Maybe it might be better for me to take a two-year or even five-year Rip van Winkle-like slumber to skip over all of the waiting, anxiety, and disappointment and cut directly to quantum advantage or at least a much more impressive stage on the way to quantum advantage.
- Alternatively, it may be time for me to draw a mark and say that everything I’ve done to date is a Phase 1 (or even Phase 0), and now consider that I need to *level up* to whatever level needs to come next, a Phase 2 (or a real Phase 1).
- But by default I’ll just keep on keeping on as I have for the past three years, varying from day to day, week to week, and month to month, but surfing the waves of quantum computing as they come at me. Including monitoring and participating in the flow of information on LinkedIn.
- I may start to dig more into the basic research since that is what is going to drive a lot of what happens over the next five years, and beyond. And maybe even more of the underlying physics. I may increase my pace of reading academic research papers on arXiv. Currently I’m limited to the papers I hear about from media reports on Google News and my LinkedIn contacts. I have the feeling that basic, fundamental research is where a lot of the action will be since I don’t feel that our current qubit technologies are going to be sufficient to achieve *quantum advantage*.
- I might decide to focus on the underlying physics for the next two years as well as reading paper preprints on basic algorithms, both on arXiv. Qubit control at the physical and electronic level is an area where I would like to have a deeper understanding, including lasers, microwaves, and FPGAs — what are the capabilities, limitations, and issues. Also how exactly a unitary matrix of complex numbers gets translated into direct control of the hardware.
- Continue to add terms to my quantum glossary.
- Start filling in the many TBD’s in my quantum glossary.
- One of these days I’ll be tempted to take a shot at writing *What is Quantum Computing?*

For more of my writing: **List of My Papers on Quantum Computing**.