# What Is Quantum Advantage and What Is Quantum Supremacy?


*Quantum advantage* loosely means that a quantum computer can perform some particular computation *significantly* faster than a classical computer or that no classical computer can perform it at all. *Quantum supremacy* is commonly simply a synonym for quantum advantage, or it may be quantum advantage *on steroids* — a much more dramatic or more pervasive advantage, over a broader range of applications and computations.

Yes, these are indeed *buzzwords* which are devoid of any particularly coherent and definitive technical meaning, the kind of terms which appeal to marketeers, promoters, and other purveyors of *hype*. Still, they do have at least some passing merit and relevance to technical folks, at least until more definitive terms become available.

This informal paper will attempt to provide workable definitions for both terms, as well as detail some of the issues and opportunities relating to the usage of these terms. It won’t endeavor to provide a deep and absolutely definitive analytical framework, but will endeavor to provide a comprehensive high-level view.

I’ll offer more detailed definitions in a moment, but first I’ll offer very simplified definitions for more general consumption:

**quantum advantage.** A *quantum computer* can perform a particular *computation* significantly faster than even the best *classical computer*. And in some cases, a *quantum computer* can perform *computations* which no *classical computer* can perform at all — also referred to as *quantum supremacy*.

**quantum supremacy.** A *quantum computer* is able to *compute* a *solution* to a particular *problem* when no *classical computer* is able to do so at all or in any reasonable amount of time. Does not imply either an advantage or supremacy for any other problems beyond the particular *problem* or niche of closely related *problems*. Alternatively, *quantum advantage* across a wide range of applications and computations. Or, possibly simply a synonym for *quantum advantage*.

There are plenty of nuances and details to attach to those over-simplified definitions, which we will come to shortly.

Alas, sometimes the two terms are used as synonyms, with no clarity as to the true intended meaning. Welcome to the world of marketing, buzzwords, and hype.

And if you’re interested in *quantum supremacy* in particular, also read the section **Breadth of quantum supremacy** since quantum supremacy for one person or app does not necessarily mean quantum supremacy for everyone and all apps. So, when you hear a claim of *quantum supremacy*, simply ask “**How broad is that claim?**”.

Now, from my **Quantum Computing Glossary**, I offer the following detailed definitions for *quantum advantage* and *quantum supremacy*:

**quantum advantage.** At least in the context of a particular *application* of interest, a *quantum computer* can perform a *computation* significantly faster than even the best *classical computer* — or no *classical computer* may be able to perform the *computation* at all, or at least not in some reasonable amount of time or with a reasonable number of *classical computers* in a *distributed* or *networked* configuration. Note that a *quantum advantage* in one or more *applications* does not necessarily imply an overall *quantum advantage* in any other *application* or across all *applications*. The central essence of *quantum advantage* is *quantum parallelism*, which enables *quantum algorithms* to *execute* with a *computational complexity* which is *polynomial*, in contrast with *classical algorithms* which tend to have a greater (worse) *computational complexity* which is *superpolynomial*, such as *exponential*. Or in general, the *computational complexity* of an *algorithm* on a *quantum computer* grows significantly more slowly than for the best comparable *algorithm* on the best *classical computer* as the size of the *input* or *complexity* of the *problem* to be solved grows — *Big-O* for a *quantum algorithm* on a *quantum computer* is much smaller than *Big-O* for the best *algorithm* on a *classical computer*. May sometimes be used as a synonym for *quantum supremacy*, *quantum preeminence*, or *quantum ascendency*. May refer to a specific characterization of how much faster or better a *quantum computer* can *execute* a particular *algorithm* compared to a comparable *algorithm* on a *classical computer* — the specific advantage. See also: *quantum speedup*. May also refer to the *eventual* advantage and *promise* of *quantum computers*, as opposed to *capabilities* which are available today or likely will be in the fairly near-term future.

**quantum supremacy.** A *quantum computer* is able to *compute* a *solution* to a *problem* when no *classical computer* is able to do so — or at least not in some reasonable amount of time or with a reasonable number of *classical computers*. Implies *quantum advantage*. Does not imply either an advantage or supremacy for any other *problems* beyond the particular *problem* or niche of closely related *problems*. Alternatively, *quantum advantage* across a broad range of *applications* and categories of *computations*, rather than limited to a particular *problem* or niche of closely related *problems*. May also be merely a synonym for *quantum advantage* unless clear from context. A synonym for *quantum computational supremacy*, *quantum preeminence*, or *quantum ascendency*. See the Wikipedia *Quantum supremacy* article and the *Characterizing Quantum Supremacy in Near-Term Devices* paper by Boixo, et al. Also see the definition and discussion in the *Quantum computing and the entanglement frontier* paper by Preskill.

If you’re confused about *computational complexity*, the concept will be explained in subsequent sections of this paper, including so-called *Big-O notation*.

Again, there are no clear, definitive, and authoritative definitions for these terms, but my definitions are intended to capture the essence of current usage and apparent current best practice.

That said, the reader must take *every* use of these terms with a very large grain of salt, paying close attention to the particular context.

So, if someone refers to *quantum advantage*, inquire as to the *specific* advantage, and whether the advantage is for a particular application or particular computation, or applies more broadly. And whether they really mean *quantum supremacy*.

And if someone refers to *quantum supremacy*, inquire as to whether they are referring to a broad range of applications and computations, or whether they are simply referring to the *quantum advantage* for a single, particular application or computation.

# Breadth of quantum supremacy

It is quite possible and even reasonable for someone to claim *quantum supremacy* for a single, particular, niche problem or application, even if they have neither claimed nor demonstrated quantum supremacy for any other problems or applications. That’s the nature of this *loose* jargon we are saddled with.

So, when you hear a claim of *quantum supremacy*, simply ask “**How broad is that claim?**”.

I’ve identified some distinctions of *breadth of quantum supremacy*, a *spectrum* if you will:

- Single, particular, niche problem or application — of theoretical, non-practical interest only.
- Single, particular, niche problem or application — of practical, real-world interest.
- Narrow niche of related problems or applications. Of practical, real-world interest, by definition. Be sure to clearly identify the niche. Ditto for all of the remaining distinctions.
- General niche of related problems or applications.
- Broad niche of related problems or applications.
- Multiple niches of problems or applications.
- Many niches of problems or applications.
- Narrow category of related problems or applications. Be sure to clearly identify the category.
- General category of related problems or applications.
- Broad category of related problems or applications.
- Multiple categories of problems or applications.
- Many categories of problems or applications.
- All categories of problems or applications.

# Single vs. multiple classical computers

Technically, any quantum advantage or supremacy should probably be evaluated by comparing a single quantum computer to a single classical computer, but it’s not that simple, for two reasons:

- Cost is a real issue. A single quantum computer may be very expensive, while a single classical computer is now a very cheap commodity. It may make more sense to compare any advantage or supremacy on a cost or cost-adjusted basis. So, we could compare a single $1.5 million quantum computer to a distributed network of 1,000 $1,500 classical computers.
- Distributed computing is now the accepted paradigm for high-performance classical computing. So-called *fat nodes* — a single machine with lots of memory and processors — are now less common and even discouraged in many contexts. A supercomputer these days is usually really a very large number of commodity processors wired together. Any computationally intense application which is a candidate for quantum computing would tend to have its classical equivalent implemented by a significant number of relatively cheap distributed or networked classical processors — anywhere from 4 to 8, 16 to 64, 256, or 1,024 or even more.

Nobody is going to suggest a network of a million or a billion classical computers, or even 50,000 or 10,000, but 64, 128, 256, 512, or 1,024 are very practical configurations today for high-end applications.

Oops… minor correction there — I just read something about Netflix running “*millions of containers on thousands of machines*” and even three years ago it was reported that “*Netflix operates many tens of thousands of servers and many tens of petabytes of storage in the Amazon cloud*”, so a network size of 10,000 or even 50,000 is not so far-fetched after all. And a recent report has Apple running 160,000 servers for the Cassandra database! But… so far, nobody is talking about a single company with a *million* machines. Okay, *maybe* Google has over a million — one report speculates 900,000 or even over a million servers, and that was in 2011.

In any case, for the purposes of judging quantum advantage and quantum supremacy, I would suggest using the criteria of:

- Comparable total system cost.
- A reasonable number of machines for a single application — excluding the mega-huge Internet giants such as Google, Apple, Facebook, and Microsoft.

Assuming a quantum computer cost in the $1 million to $25 million range (D-Wave Systems currently has a system for $15 million), and a typical server cost on the order of $1,500 per year, that would suggest a cost-based comparison of 10,000 servers to a single quantum computer.
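The cost-parity arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is purely illustrative, using the $15 million and $1,500-per-year figures from this section; the variable names are mine:

```python
# Cost-parity comparison: how many classical servers match the cost of one
# quantum computer for one year? (Illustrative figures from this section.)

quantum_computer_cost = 15_000_000  # e.g., a $15 million quantum system
server_cost_per_year = 1_500        # commodity classical server, per year

# Servers you could run for a year at the same total cost.
cost_parity_servers = quantum_computer_cost // server_cost_per_year

print(cost_parity_servers)  # → 10000
```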

But just for simplicity, I’ll suggest that quantum advantage and quantum supremacy should generally presume that one quantum computer be compared to anywhere from eight to 1,000 classical servers.

Sure, for some applications it might make sense to use 10,000 or more classical servers, but that seems a bit extreme.

I suspect that in most cases 8 to 128 classical computers would be sufficient to maximize classical compute throughput, and that going beyond that will frequently hit the wall of diminishing returns for many applications.

But if you want to pick one number to simplify discussion, I’d pick 256 or 1024 classical computers in a distributed network as a sweet spot to target.

# Computational complexity

*Computational complexity* — also known as *algorithmic complexity* — is a measure of how long an application or algorithm will take to complete its intended task. It can also measure the amount of resources required, such as memory and storage, but ultimately time is the primary factor of concern.

Time can be measured using a *wall clock* (or calendar) — hours, minutes, seconds, or even days or months — or by counting *operations* — the individual steps in an algorithm.

Operations on a classical computer are *statements* in a high-level programming language or machine language *instructions*.

Operations on a quantum computer are *quantum logic gates*, although *circuit depth* can be a better measure since it takes into account the fact that many gates can be executed in parallel, at least in theory.

*Greater* computational complexity means the application or algorithm will take *longer* to run — it will require the execution of *more* operations, which usually translates into longer wall clock time.

For more detail, see the Wikipedia **Computational complexity theory** article.

# Hybrid mode execution

Quantum computers are not currently fully general-purpose computers since they don’t have conditional execution, looping, function calls, I/O, database access, interprocess communication, access to network services, or most of the rich data types of even the most basic classical computers. This means that most applications for quantum computers must be split into two sets of parts — the classical parts and the quantum parts, with the quantum parts executed as *subroutines* from the classical parts.

This complicates *computational complexity analysis* since the classical code has its own computational complexity and the total complexity will be a blend of the two separate computational complexities.

A primary challenge will be identifying how many times the quantum parts must be executed (quantum subroutine calls) to complete the full application.

Alternatively, one can simply analyze the computational complexity of the quantum parts alone and compare them to comparable classical implementations of comparable algorithms.

But even then, the whole point is the net impact on the full application — how much of an advantage is gained by throwing a quantum computer into the mix compared to a purely classical implementation.

# Big O notation

Computational complexity is commonly expressed using the shorthand of *Big-O notation*.

“O” is shorthand for *on the order of*, indicating the approximate magnitude rather than a very specific value.

I won’t go into the details here (see the Wikipedia *Big O notation* article for detail), but simply indicate that **O(n²)** and **O(n³)** are examples of *polynomial complexity* and **O(2^n)** and **O(3^n)** are examples of *exponential complexity*.

**n** refers to the length or size of the input — how many *data elements* must be processed. Or, how many values must be evaluated in the range of a variable.

The expression inside parentheses indicates how many operations must be performed to process those *n* data elements.

“^” is a shorthand for *exponent* or *power* — the *base* is raised to some exponent or power.

For more detail, see the Wikipedia *Big O notation* article.
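As a rough illustration of these growth rates (a sketch only, not part of any formal definition; the `operation_counts` helper is purely illustrative):

```python
# Operation counts for the example complexity classes from this section:
# polynomial (n**2, n**3) versus exponential (2**n, 3**n).

def operation_counts(n: int) -> dict:
    """Approximate operation counts for input size n."""
    return {
        "O(n^2)": n ** 2,
        "O(n^3)": n ** 3,
        "O(2^n)": 2 ** n,
        "O(3^n)": 3 ** n,
    }

# Even for modest input sizes, the exponential columns explode.
for n in (10, 20, 30):
    print(n, operation_counts(n))
```

For n = 10 the polynomial counts are 100 and 1,000, while 2¹⁰ is already 1,024 and 3¹⁰ is about 59,000 — and the gap widens explosively as n grows.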

# Polynomial vs. exponential complexity

As already mentioned, an application or algorithm which has *polynomial* computational complexity is superior to one which has *exponential* computational complexity.

*Polynomial complexity* or *polynomial time* means the input size is raised to some constant power or exponent, typically 2 or 3, but it could be any relatively small number — O(n²), O(n³), or O(n^k).

*Exponential complexity* or *exponential time* means some relatively small constant number, such as 2 or 3, raised to a power or exponent proportional to the input size — O(2^n), O(3^n), or O(k^n).

To be clear, polynomial complexity can grow very rapidly as input size grows, but the point is that exponential complexity will grow *much, much* faster.

Some brief examples for both.

Let’s use example values of 10, 100, and 1,000 for *n*, the input size.

On a quantum computer we focus on algorithms which have polynomial complexity. That’s not to say that *all* quantum algorithms will have polynomial complexity, but that’s the *norm* when *quantum parallelism* can be used.

For an exponent of 2, polynomial complexity would require O(10²), O(100²), or O(1000²) operations.

- 10² is 100 operations.
- 100² is 10,000 operations.
- 1000² is 1,000,000 operations. One million.

For an exponent of 3, polynomial complexity would require O(10³), O(100³), or O(1000³) operations.

- 10³ is 1,000 operations — 10 times 100.
- 100³ is 1,000,000 operations. One million — 100 times 10,000.
- 1000³ is 1,000,000,000 operations. One billion — 1,000 times 1,000,000.

Meanwhile, over on a classical computer, even if we have a substantial network of classical computers, we tend to encounter *exponential complexity*. That’s not to say that *all* classical algorithms are exponential — and many are indeed simply polynomial — but the applications and algorithms which *are* exponential are generally the only ones worth moving to a quantum computer.

For a base of 2, exponential complexity would require O(2¹⁰), O(2¹⁰⁰), or O(2¹⁰⁰⁰) operations.

- 2¹⁰ is 1,024 operations.
- 2¹⁰⁰ is 10 followed by 29 zeros operations. That’s a *huge* number. That’s why a quantum computer with an algorithm of polynomial complexity is needed.
- 2¹⁰⁰⁰ is 10 followed by 300 zeros operations. That’s *way beyond huge*. Only a quantum computer could handle that.

How big of an exponential problem can a classical computer handle?

- 2²⁰ is about one million operations. Not such a big deal.
- 2³⁰ is about one *billion* operations. A moderate deal, doable, but likely requires more than one machine.
- 2⁴⁰ is about one *trillion* operations. Still technically feasible, but likely requiring a larger network of computers and/or a very long running time.
- 2⁵⁰ is about one *quadrillion* operations. The realm of *petaflops*, the realm of the largest supercomputers.
- 2⁶⁰ is about a *thousand quadrillion*. 1,000 of the largest classical supercomputers.
- 2⁷⁰ is about a *million quadrillion*. A *million* of the largest classical supercomputers.
- 2⁸⁰ is about a *billion quadrillion*. A *billion* of the largest classical supercomputers.
- 2⁹⁰ is about a *trillion quadrillion*. A *trillion* of the largest classical supercomputers. Or, 1,000 of the largest classical supercomputers running for a *billion* seconds — 32 years. Imaginable, yes, but not very likely to happen. Really need a quantum computer.
- 2¹⁰⁰ … a *million* of the fastest classical supercomputers running for 32 years. Or, 1,000 of those machines running for 32,000 years.
- 2¹¹⁰ … a *million* of the fastest classical supercomputers running for 32,000 years. Certainly feels *beyond imaginable*.
- 2²⁷⁵ is roughly the number of atoms in the universe. Don’t even think about trying it on even the largest imaginable network of the largest classical supercomputers.

Technically, a modern classical supercomputer would be in the range of 10 to 200 petaflops, so my numbers above should be adjusted by a factor of 10 to 200 since I assumed a single petaflop. For more on the top (classical) supercomputers, see the Wikipedia **TOP500** article or the **TOP500** web site.

And just to be clear, 2¹¹⁰ simply means there are *110 input items*. That’s not a lot of input to require such prodigious processing.
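The supercomputer-years arithmetic above can be sketched as a rough estimate. It assumes a 1-petaflop machine (10¹⁵ operations per second), as in the discussion; the `years_to_run` helper is an illustrative name of mine, not a standard function:

```python
# Rough runtime estimate for a 2**n operation workload on a classical
# cluster, assuming 1 petaflop (1e15 operations/second) per machine,
# as in the text. Real supercomputers are 10 to 200 times faster.

SECONDS_PER_YEAR = 3.156e7  # about 31.6 million seconds

def years_to_run(n: int, machines: int = 1, ops_per_sec: float = 1e15) -> float:
    """Years needed to perform 2**n operations on the given cluster."""
    total_operations = 2.0 ** n
    seconds = total_operations / (machines * ops_per_sec)
    return seconds / SECONDS_PER_YEAR

# 2**90 operations on 1,000 petaflop machines: on the order of decades.
print(years_to_run(90, machines=1000))
```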

For a base of 3, exponential complexity would require O(3¹⁰), O(3¹⁰⁰), or O(3¹⁰⁰⁰) operations.

- 3¹⁰ is about 59,000 operations.
- 3¹⁰⁰ is about 5 followed by 47 zeros operations.
- 3¹⁰⁰⁰ is … too big for even Google to figure out!
- 3⁶⁴⁶ is about 1 followed by 308 zeros.

So, even for input of only 100 items, an algorithm with exponential complexity O(3¹⁰⁰) is unimaginably beyond the capacity of even a large number of the fastest classical supercomputers.

But for a quantum computer with polynomial complexity of O(n³), 1,000 input values would require only about *one billion operations*.

# Quadratic complexity (quadratic speedup)

There are many other *degrees* of computational complexity, but one other bears consideration in the context of quantum computing — *quadratic speedup*.

*Quadratic complexity* (not *speedup*) would be O(n²) — *squaring* the input size. So, 100 items would require 10,000 operations. That’s *not* what this section is referring to. Technically, quadratic complexity is covered by polynomial complexity.

Rather, a *quadratic speedup* speaks not of the complexity itself, but of the *improvement* compared to *linear complexity*.

With linear complexity, the number of operations is *proportional* to the input size — O(n).

A *quadratic speedup* means that the quantum algorithm requires only O(sqrt(n)) operations where a classical algorithm requires O(n) — the classical running time is the *square* of the quantum running time, so the quantum algorithm grows much more slowly than linear complexity.

*Grover’s algorithm* for searching a linear list of items is the popular example of a quantum algorithm which has a quadratic speedup.

Some examples:

- sqrt(100) is 10. 10 times faster than sequentially processing all items.
- sqrt(10000) is 100. 100 times faster.
- sqrt(1,000,000) is 1,000. 1,000 times faster.
- sqrt(1,000,000,000,000) is 1,000,000. A million times faster.

A quadratic speedup is generally not as big an advantage as an exponential speedup, but it is still quite dramatic.
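The square-root arithmetic above can be expressed directly. This is a minimal sketch; `quadratic_speedup` is an illustrative name, not a library function:

```python
import math

# Grover-style quadratic speedup: searching n items takes roughly
# sqrt(n) quantum steps versus n classical steps, so the speedup
# factor is n / sqrt(n) = sqrt(n).

def quadratic_speedup(n: int) -> float:
    """Speedup factor of an O(sqrt(n)) algorithm over an O(n) one."""
    return n / math.sqrt(n)

print(quadratic_speedup(100))        # → 10.0 (10 times faster)
print(quadratic_speedup(1_000_000))  # → 1000.0 (a thousand times faster)
```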

# Dramatic quantum advantage is the real goal

The ultimate goal of quantum computing is *dramatic quantum advantage*, a quantum advantage that is truly mind-boggling. How big is that?

- **Not just 10 times faster.**
- **Not just 100 times faster.**
- **Not just 1,000 times faster.**
- **Not just a million times faster.**
- **Not just a billion times faster.**
- **Not just a trillion times faster.**
- **But at least *one quadrillion* times faster** — and that’s just the starting point.

Why that much faster? Who really needs that?

The goal is not simply to get your computer to run faster, but to get a computer to run *fast enough* that you can tackle application problems which are so much more complex than problems which can be tackled today.

The goal is not simply to do things faster, but to make the impossible possible. To make the impractical practical.

For more detail on dramatic quantum advantage, see my paper:

- *What Is Dramatic Quantum Advantage?*: https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c

# Quantum advantage — make the impossible possible, make the impractical practical

Just to emphasize that point more clearly:

- **The goal of quantum computing is not to make computing faster.**
- **But to make the impossible possible.**
- **To make impractical applications practical.**

# Fractional quantum advantage

As previously mentioned, although the ultimate goal of quantum computing is mind-boggling *dramatic quantum advantage*, it may still be quite reasonable to be able to achieve only a fraction of that, a so-called *fractional quantum advantage*.

If dramatic quantum advantage of quantum computing *starts* at *one quadrillion* times a classical solution, *fractional quantum advantage* might be some not insignificant fraction of that and still have significant utility:

- 10 times.
- 100 times.
- 1,000 times.
- 10,000 times.
- 100,000 times.
- One million times.
- One billion times.
- One trillion times.

For more detail on fractional quantum advantage, see my paper:

- *Fractional Quantum Advantage — Stepping Stones to Dramatic Quantum Advantage*: https://jackkrupansky.medium.com/fractional-quantum-advantage-stepping-stones-to-dramatic-quantum-advantage-6c8014700c61

# Three levels of quantum advantage — minimal, substantial or significant, and dramatic quantum advantage

Just to keep things simple, overall there are three categorical levels of *quantum advantage*:

- **Minimal quantum advantage.** A **1,000X** performance advantage over classical solutions. 2X, 10X, and 100X (among others) are reasonable stepping stones.
- **Substantial or significant quantum advantage.** A **1,000,000X** performance advantage over classical solutions. 20,000X, 100,000X, and 500,000X (among others) are reasonable stepping stones.
- **Dramatic quantum advantage.** A **one quadrillion X** (one million billion times) performance advantage over classical solutions. 100,000,000X, a billion X, and a trillion X (among others) are reasonable stepping stones.

To put it most simply, anything less than a dramatic quantum advantage would be considered a *fractional quantum advantage*.

For more detail on *fractional quantum advantage*, see my paper:

- *Fractional Quantum Advantage — Stepping Stones to Dramatic Quantum Advantage*: https://jackkrupansky.medium.com/fractional-quantum-advantage-stepping-stones-to-dramatic-quantum-advantage-6c8014700c61

For more detail on *dramatic quantum advantage*, see my paper:

- *What Is Dramatic Quantum Advantage?*: https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c

# Net quantum advantage — discount by repetition needed to get accuracy

Quantum advantage can be intoxicating — just k qubits gives you a *computational leverage* of 2^k, but… there’s a tax to be paid on that. Since quantum computing is inherently probabilistic, you can’t generally perform a computation once and have an accurate answer. Rather, you have to repeat the calculation multiple or even many times — called *circuit repetitions*, *shot count*, or just *shots* — and do some statistical analysis to determine the likely answer. Those repetitions are effectively a *tax* or *discount* on the *raw computational leverage*, which gives you a *net computational leverage*, the *net quantum advantage*.

Shot counts of 100, 1,000, 10,000, or more are not uncommon.

For example:

- **10 qubits and a shot count of 100** means a raw quantum advantage of 2¹⁰ = 1024, but divide by the shot count of 100 and you get a net quantum advantage of… 10.24, or roughly a **10X** *net advantage* over a classical solution.
- **20 qubits and a shot count of 1,000** means a raw quantum advantage of 2²⁰ = one million, but divide by the shot count of 1,000 and you get a net quantum advantage of… roughly a **1,000X** *net advantage* over a classical solution.
- **20 qubits and a shot count of 10,000** means a raw quantum advantage of 2²⁰ = one million, but divide by the shot count of 10,000 and you get a net quantum advantage of… roughly a **100X** *net advantage* over a classical solution.
- **10 qubits and a shot count of 1024** means a raw quantum advantage of 2¹⁰ = 1024, but divide by the shot count of 1024 and you get a net quantum advantage of… 1.0, or **no** *net advantage* over a classical solution.
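The discount arithmetic in these examples can be sketched with a tiny helper. It assumes the raw leverage of 2^k for k qubits described above; `net_quantum_advantage` is an illustrative name of mine:

```python
# Net quantum advantage: raw computational leverage of 2**k for k qubits,
# discounted by the shot count (circuit repetitions) needed for accuracy.

def net_quantum_advantage(k_qubits: int, shots: int) -> float:
    """Raw leverage 2**k divided by the number of shots."""
    return (2 ** k_qubits) / shots

print(net_quantum_advantage(10, 100))     # → 10.24 (roughly 10X net)
print(net_quantum_advantage(20, 10_000))  # → 104.8576 (roughly 100X net)
print(net_quantum_advantage(10, 1024))    # → 1.0 (no net advantage)
```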

For more detail on circuit repetitions and shots, see my paper:

- *Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer*: https://jackkrupansky.medium.com/shots-and-circuit-repetitions-developing-the-expectation-value-for-results-from-a-quantum-computer-3d1f8eed398

# What is the quantum advantage of your quantum algorithm or application?

It’s easy to talk about quantum advantage in the abstract, but what’s really needed is for quantum algorithm designers to explicitly state and fully characterize the quantum advantage of their quantum algorithms. Ditto for quantum application developers and their quantum applications.

It’s also important for quantum applications using quantum algorithms to understand their own actual input size since the actual quantum advantage will depend on the actual input size.

For smaller input sizes the *actual quantum advantage* or *net quantum advantage* might be small enough that there is negligible benefit to using a quantum computer at all, or maybe a relatively minor advantage over a classical solution.

It may be necessary for the input size to get fairly substantial before the *net quantum advantage* becomes reasonably impressive. So, it’s important to know what input sizes the quantum algorithm or quantum application is likely to encounter in real-world usage.

When posting or writing about your quantum algorithm or application, this kind of information should be clearly and fully disclosed.

For more detail and discussion, see my paper:

- *What Is the Quantum Advantage of Your Quantum Algorithm?*: https://jackkrupansky.medium.com/what-is-the-quantum-advantage-of-your-quantum-algorithm-8f481aefe8d0

# Be careful not to compare the work of a great quantum team to the work of a mediocre classical team

When judging the *quantum advantage* of a quantum solution over a classical solution to an application problem, one should be mindful that quantum computing is so new and so challenging that the technical team needed to design and implement a quantum solution needs to be very elite, while there’s a fair chance that the technical team (or nontechnical team of subject matter experts) which implemented the original classical solution might have been more mediocre or at least not as capable as the elite quantum team.

It could well be that if you simply redesigned and reimplemented the classical solution with an elite technical team of caliber comparable to the quantum team, the classical solution might be much improved, so that any quantum advantage might be significantly diminished — or even completely eliminated.

It’s just something to be aware of — is the *apparent* quantum advantage *real* or just an *illusion* because the two technical teams are not of comparable caliber.

And in simple truth, even if you simply had the same, original classical technical team redesign and reimplement their own code — or simply do some performance analysis and some code optimization — they might produce a replacement classical solution which is much improved, possibly significantly reducing the *actual* quantum advantage — or possibly completely eliminating it. This isn’t a slam dunk, but a possibility. Again, just something to be aware of.

# To be clear, quantum parallelism and quantum advantage are a function of the algorithm

A quantum computer does indeed enable quantum parallelism and quantum advantage, but the actual and net quantum parallelism and actual and net quantum advantage are a function of the particular algorithm rather than the quantum computer itself.

The quantum computer may have n qubits, but the particular algorithm running at a given moment may only be coded to use k of those n qubits.

For example, the quantum computer may have 50 qubits, but an algorithm might only use 20 qubits, giving it a *quantum advantage* of 2²⁰ rather than 2⁵⁰. Some other algorithm might use 10 qubits, giving it a quantum advantage of 2¹⁰, while yet another algorithm might use 30 qubits, giving it a quantum advantage of 2³⁰.

Also, k is the number of qubits used in an entangled *parallel* computation, not the total qubits used by the entire computation of an algorithm. The algorithm which uses 20 qubits in a parallel computation might use a total of 23 or 43 qubits. It is only the qubits which are used in an entangled parallel computation which give the quantum advantage, the quantum parallelism.
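The distinction between the machine’s qubits and the algorithm’s entangled qubits can be sketched as follows (illustrative names and values only):

```python
# Raw quantum leverage is a function of the k qubits entangled in a
# parallel computation, not the n qubits the machine physically has.

def raw_leverage(entangled_qubits: int) -> int:
    """Quantum parallelism from k entangled qubits: 2**k."""
    return 2 ** entangled_qubits

machine_qubits = 50    # what the hardware offers
algorithm_qubits = 20  # what this algorithm's parallel step actually uses

print(raw_leverage(algorithm_qubits))  # → 1048576 (2**20, not 2**50)
```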

# Quantum supremacy

*Quantum supremacy* is sometimes simply a synonym for *quantum advantage*, or it expresses the fact that a quantum computer can accomplish a task that is *impossible* on a classical computer, or that could take so long, like many, many years, that it is outright *impractical* on a classical computer.

In 2019 Google claimed that it had achieved *Quantum Supremacy*, although it was for more of a contrived computer science experiment than a practical real-world application. I reported on this in detail in my paper:

- *What Should We Make of Google’s Claim of Quantum Supremacy?*: https://jackkrupansky.medium.com/what-should-we-make-of-googles-claim-of-quantum-supremacy-630c76fffac7

# What applications might be the most ripe for quantum advantage?

I’m a technologist rather than an application developer, so my focus is on raw technology rather than specific applications, and it’s difficult for me to speculate as to which application categories might achieve a quantum advantage first.

That said, I did post an informal paper that details a variety of application categories that appear to be appropriate for quantum computing — **What Applications Are Suitable for a Quantum Computer?**.

# When will quantum advantage and quantum supremacy be achieved?

It’s anybody’s guess when quantum advantage will be achieved by one or more significant applications, let alone quantum supremacy across a relatively wide swath of applications.

It’s *always safe* to pontificate that we won’t see quantum advantage and quantum supremacy for at least five to ten years — people have been saying that for the past… five to ten years.

I’ll go out on a limb and assert that we could see at least a few preliminary examples of quantum advantage in **two to four years**. We should start to see some non-toy applications once we get to **256 to 1024 qubits** with decent coherence and decent connectivity.

I’m reasonably confident about that timeframe, but it’s also not an absolute slam dunk.

That said, we will likely see a fair number of *claims* of quantum advantage between now and then, but usually there will be less than meets the eye. For example, you can do some interesting things with very large numbers of random numbers on a quantum computer, but they don’t represent the kinds of real-world applications that most non-mathematics professionals would relate to.

For more of my writing on quantum: **List of My Papers on Quantum Computing**.