What Should We Make of Google’s Claim of Quantum Supremacy?
Google should be lauded for achieving a significant technical milestone with their recent claim of quantum supremacy, but it was a very narrow technical milestone that still leaves us far short of practical production-scale applications for quantum computers. This informal paper provides my own personal impressions of Google’s milestone accomplishment.
The key takeaway, overall:
- Google achieved a significant technical milestone, but we have much further to go to achieve practical, production-scale applications using quantum computers. Quantum supremacy in a niche, specialized, contrived “task” does not imply the achievement of quantum supremacy for any real-world application let alone for all potential applications of quantum computing.
Proof of concept projects using quantum algorithms are all the rage right now, but with only limited and unreliable hardware, it will be some years before production-scale applications will be even technically feasible, let alone common. Google’s achievement of quantum supremacy doesn’t really change any of that.
The technical details of Google’s efforts will not be covered in any depth here, but are available in Google’s technical paper in Nature, “Quantum supremacy using a programmable superconducting processor”, especially in the Supplementary Information. Or view the video of Prof. John Martinis’ presentation at Caltech, Quantum Supremacy Using a Programmable Superconducting Processor, which is easier to follow but is not as detailed as the published papers.
What exactly is quantum supremacy?
Oversimplifying, quantum supremacy is the ability of a quantum computer to solve a computational problem very quickly which cannot be solved in a reasonable amount of time on any classical computer, even the largest supercomputers. For more detail, see my informal paper — What Is Quantum Advantage and What Is Quantum Supremacy?.
In addition, quantum supremacy generally implies that the quantum solution offers an exponential speedup compared to a classical solution, meaning that as the size of the input grows, the quantum resources needed grow far more slowly than the resources a classical solution would require.
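To make the exponential-speedup point concrete, here is a toy cost comparison in Python; the polynomial form and constant used for the quantum cost are purely illustrative assumptions, not measured figures:

```python
def classical_cost(n):
    # Brute-force statevector simulation must track 2**n amplitudes,
    # so classical cost grows exponentially with the number of qubits.
    return 2 ** n

def quantum_cost(n, depth_factor=10):
    # Hypothetical polynomial resource count, purely for illustration.
    return depth_factor * n ** 2

for n in (10, 20, 30, 40, 53):
    print(n, quantum_cost(n), classical_cost(n))
```

Even with generous constants, the exponential term dominates quickly: at 53 qubits the illustrative polynomial cost is only 28,090, while 2^53 is about 9 quadrillion.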
What exactly is Google’s quantum supremacy program doing?
Nothing useful in any practical sense — in fact, it is randomly generated, a random quantum circuit, a random sequence of quantum logic gates. The whole, sole purpose of this randomly-generated circuit is to compute a bit sequence which cannot be either predicted or easily simulated on a classical computer, even the largest supercomputers, in any reasonable amount of time.
Google’s paper does suggest, though, that their algorithm might apply to the generation of certifiable random numbers, but no details are given. In fact, the best they can do is cite an unpublished manuscript. That’s more than a little too vague for my taste in science.
Google does also claim that other potential uses of their algorithm “may include optimization, machine learning, materials science and chemistry”, but again they offer no details. This sounds more like a vague, speculative marketing claim than hard, verifiable science.
The Google algorithm is actually a hybrid algorithm, part classical code and part quantum code.
The essence of the algorithm is:
- Generate a random sequence of quantum logic gates (called a circuit). This is classical code.
- Run the randomly-generated circuit. This is really the only part of the algorithm which is actually a quantum algorithm.
- Measure the results (qubits). They call this sampling. This transitions from the quantum code to classical data, producing one classical bit for the quantum state of each qubit.
- Rerun the circuit and measure the qubits a million times — since quantum computers are probabilistic rather than deterministic as classical computers are. They collect a million samples.
- Compare the result to a simulation of the quantum circuit on a classical computer to make sure they agree.
- This comparison can only be done up to a certain circuit size, beyond which the simulation becomes very slow on a classical computer.
- Then they extrapolate how long that simulation would take for the maximum circuit size which could be run on the quantum computer, and that’s where Google gets the 10,000 years number reported in the press.
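The steps above can be sketched as a small pure-Python statevector simulation. To be clear, this is a toy, not Google’s method: the gate set here (random Z rotations, Hadamards, and nearest-neighbor CZ gates), the circuit size, and the sample count are all stand-in assumptions; Sycamore’s actual gates and scale are described in the Nature paper.

```python
import cmath
import random

def apply_1q(state, gate, target):
    # Apply a 2x2 single-qubit gate to the given qubit of a statevector.
    new = state[:]
    for i in range(len(state)):
        if (i >> target) & 1 == 0:
            j = i | (1 << target)
            a, b = state[i], state[j]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[j] = gate[1][0] * a + gate[1][1] * b
    return new

def apply_cz(state, q1, q2):
    # Controlled-Z flips the phase of states where both qubits are 1.
    return [amp * (-1 if ((i >> q1) & 1 and (i >> q2) & 1) else 1)
            for i, amp in enumerate(state)]

def random_circuit_sample(n, depth, shots, seed=0):
    rng = random.Random(seed)
    h = 2 ** -0.5
    hadamard = [[h, h], [h, -h]]
    state = [1.0 + 0j] + [0j] * (2 ** n - 1)   # start in |00...0>
    # Step 1: classical code generates a random circuit; applying its
    # gates to the statevector stands in for running it on hardware.
    for _ in range(depth):
        for q in range(n):
            theta = rng.uniform(0, 2 * cmath.pi)
            rz = [[cmath.exp(-1j * theta / 2), 0],
                  [0, cmath.exp(1j * theta / 2)]]
            state = apply_1q(state, rz, q)
            state = apply_1q(state, hadamard, q)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1)
    # Steps 3-4: measure all qubits repeatedly ("sampling"), which is
    # probabilistic -- each shot yields one classical bit string.
    probs = [abs(amp) ** 2 for amp in state]
    return [rng.choices(range(2 ** n), weights=probs)[0] for _ in range(shots)]

samples = random_circuit_sample(n=3, depth=4, shots=1000)
print(samples[:10])
```

Step 5, the classical cross-check, is exactly this kind of simulation. The catch is that `state` holds 2^n amplitudes, which is why the classical check becomes infeasible near 53 qubits.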
That’s it. They’re not designing a new molecule, a new drug, or a new material, or a new aircraft wing, or finding a cure for cancer. They’re just sampling the result of a randomly-generated quantum circuit. This is indeed a legitimate technical milestone, but it’s still not a practical application of quantum computing.
If it seems a bit contrived and something that would appeal only to a theoretical computer scientist, well… it is.
But they are indeed executing a quantum program which produces results that could not be produced on a classical computer in any practical amount of time. And that had never been done before.
The essence of the problem is that the number of quantum states for a quantum computer with n qubits is 2^n, so for 53 qubits you have 2⁵³ quantum states, which is more than you can readily store and manipulate — or simulate — on a classical computer, even the largest classical supercomputers. (IBM disputes this; see their blog post and paper, discussed below.) For scale: 2³⁰ is about 1 billion, 2⁴⁰ is about 1 trillion, 2⁵⁰ is about 1 quadrillion, and 2⁵³ is about 9 quadrillion. A petabyte is one quadrillion bytes.
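A quick back-of-the-envelope check makes the storage problem concrete; the 16 bytes per amplitude is my assumption (double-precision real and imaginary parts), and actual storage could differ by representation:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    # A full statevector holds 2**n complex amplitudes; 16 bytes each
    # assumes a double-precision real and imaginary part per amplitude.
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50, 53):
    print(n, "qubits:", statevector_bytes(n) / 1e15, "petabytes")
```

At 53 qubits that works out to roughly 144 petabytes just to hold the state, before performing any gate operations. This is the crux of the IBM dispute: their rebuttal paper proposes spilling the statevector to secondary storage rather than holding it all in memory.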
Note: Unlike classical computers where circuits and gates are hardware, they are actually software for a quantum computer — a quantum logic gate is the equivalent of a classical instruction or operation, and a quantum circuit is equivalent to a classical program or a code sequence of classical instructions, operations, or statements. Quantum logic gates are operations which are performed on qubits. Qubits are hardware devices, unlike classical bits which are not hardware devices but merely information. A qubit is not information — a qubit stores information. The information contained in a qubit is called quantum state and the zeroes and ones of quantum state, |0> and |1> (known as kets) or sequences of them, are known as computational basis states. A qubit is more directly equivalent to a classical flip flop or a memory cell, a hardware device or physical medium where information can be stored or manipulated. Sorry for the confusing terminology — I’m just the messenger!
There are pros and cons for Google’s accomplishment:
- A notable milestone in quantum computing. The first time a quantum computer performed a computation which is not easily performed on a classical computer.
- Great to show that at least one algorithm, even a contrived one, can achieve supremacy over classical computing.
- Very nice hardware advance, distinct from supremacy itself.
- A notable feat of engineering.
- Great to achieve more than 50 working qubits.
- Great to achieve longer coherence time.
- Great to have a complete grid topology for qubit connectivity.
- Will enable a wider and deeper range of proof of concept implementations.
- The hype and attention will likely boost quantum computing R&D investment, venture capital funding for startups, corporate spending on quantum computing, and overall attention to the technology and its potential applications.
- There is still no universal, mathematically precise, and technically verifiable definition for the term quantum supremacy. Even Prof. John Preskill, who coined the term in 2012, does not have a precise, technically verifiable definition of the term. Most recently he referenced the term “to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful.” We get the general idea, but generality and verifiable science are not quite the same thing. I like my own definition, but the essential pair of problems with the term is that classical computers are a moving target, constantly improving, and our technical abilities to develop algorithms to exploit classical computers are also constantly improving. Even if physics and hardware may have limits, our cleverness and problem-solving abilities don’t appear to have any limits.
- Not quite as dramatic a milestone as the hype suggests. Too contrived. Not a commonly recognized real-world application.
- IBM has also managed to engineer a 53-qubit machine — and it’s now available, rather than simply a research project.
- IBM disputes Google’s claim of quantum supremacy. It’s debatable and depends on your perspective. Technically, I think IBM has a good point, but Google’s results are still impressive.
- Not really heralding a new era yet. Incremental evolution rather than a true quantum leap. Besides, I don’t think any credible scientist should be talking that way, using such hyperbolic language — scientists do best when focusing on the science, not… marketing.
- Barely achieved quantum supremacy, by just a few qubits.
- Disappointed that they failed to get their previously announced 72-qubit machine working properly. There was a media report that “linking together 72 qubits proved too difficult to control.”
- Not a real algorithm which solves a real problem. Contrived — simply to achieve the narrow theoretical goal of quantum supremacy, in a very narrow sense.
- More of a clever ruse than a sincere attempt to show applicability to well-known real-world problems.
- Niche use case only, for now.
- Unclear when the work might be extended to a real algorithm which solves a recognizable real-world problem.
- Unclear if the work is ever likely to extend to real algorithms which solve real problems.
- Not a universal solution for all applications and all problems.
- Overly-broad term which misleads people and suggests we are further along than we actually are.
- Unclear when or if the machine can become a production-quality machine publicly available in the cloud.
- Unclear how much further the current technology can be evolved.
- We’re still in the proof of concept stage for applications, with no sign of when we might transition to production-scale applications.
Google did in fact achieve quantum supremacy, but only for a single, particular algorithm that is in no way representative of the application categories people commonly cite as the key beneficiaries of quantum computing.
Achieving quantum supremacy for that one narrow use case does not allow one to safely conclude that quantum supremacy is possible for any other use case, let alone all use cases. Most potential users of quantum computers will still have to wait patiently for quantum supremacy to be achieved for their own use cases.
Each distinct application category, and in fact each sub-category, will need its own effort to achieve quantum supremacy, since each problem domain has its own computational requirements.
When might quantum supremacy be achieved for real, practical categories of applications?
I discuss this matter in my informal paper, What Is Quantum Advantage and What Is Quantum Supremacy?, but the short answer is not any time soon, but possibly within two to four years.
That said, there might be some narrower niches or specific application algorithms which might achieve quantum supremacy sooner, such as with Google or IBM’s new 53-qubit machines. But hardware alone is not the key to quantum supremacy — it is simply really hard to design algorithms of any complexity for any quantum computer, and more qubits means an even greater challenge.
See the section at the end of this paper. But first, let’s review how we got here.
Timeline — How it all played out
Google’s achievement of quantum supremacy was not actually a surprise at all — Google did a good job of telegraphing their quantum supremacy intentions well in advance of the deed.
In July 2016 Google posted the first draft of their paper on their quantum supremacy plans on arXiv.
The preprint was updated in April 2017.
They referred to “the task of sampling from the output distributions of (pseudo-)random quantum circuits.”
On March 5, 2018 a Google blog post reported that Google was working on a 72-qubit quantum computer (codenamed Bristlecone) and that it would likely achieve quantum supremacy, saying “We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone”:
- A Preview of Bristlecone, Google’s New Quantum Processor
The 2016 paper was finalized and published in Nature on April 23, 2018:
- Characterizing quantum supremacy in near-term devices
To be clear, this was not a publication of achieved results, but a statement of their intentions.
Google posted an announcement of that paper in a blog post on May 4, 2018:
- The Question of Quantum Supremacy
Over a year went by with not a peep about this project or the 72-qubit quantum computer on which it was supposed to run.
Curiously, on September 18, 2019, two days before news leaked about Google achieving quantum supremacy, and without even rumors that Google was working on a 53-qubit machine, IBM announced the imminent availability of their own 53-qubit machine:
- IBM Opens Quantum Computation Center in New York; Brings World’s Largest Fleet of Quantum Computing Systems Online, Unveils New 53-Qubit Quantum System for Broad Use
That announcement indicated that IBM’s new 53-qubit machine would be available online in the cloud the next month, October 2019, while Google offered no news or even hints as to when their rumored 53-qubit machine would be available at all in any form. So, IBM gets credit for beating Google to the punch (official announcement and general availability) for 53-qubit machines. But to be clear, IBM made no mention or claim concerning quantum supremacy.
Then a leaked version of Google’s soon-to-be-published paper for actual results using a real quantum computer (codenamed Sycamore) appeared online on September 20, 2019 — again, two days after the IBM announcement. I reference two representative examples of popular media coverage:
From The Financial Times:
- Google claims to have reached quantum supremacy
- Google Claims ‘Quantum Supremacy,’ Marking a Major Milestone in Computing
To me, the more interesting news from the leaked paper was the fact that the 72-qubit machine had problems and Google was going with a 53-qubit machine.
A number of web sites posted the leaked Google paper, such as Inverse on September 23, 2019:
- Here’s the Google “Quantum Supremacy” paper it pulled from NASA’s website — Read the full paper that Google researchers say describes their “milestone.”
Caltech Professor John Preskill, who coined the term quantum supremacy in 2012, penned an essay in Quanta on October 2, 2019 in which he explains quantum supremacy and the significance of Google’s rumored achievement:
- Why I Called It ‘Quantum Supremacy’
He references the leaked paper cited above.
Wired reran that same story by Preskill on October 6, 2019:
- Why I Coined the Term ‘Quantum Supremacy’
Google’s official paper publishing their results appeared in Nature a little over a month later, on October 23, 2019:
- Quantum supremacy using a programmable superconducting processor
That was the real bombshell in the race to quantum supremacy, although the leak in September took most of the surprise out of the ultimate announcement.
Hmmm… how’s this for a conspiracy theory: might Google, et al have leaked their paper on September 20th to distract attention away from the September 18th IBM announcement? Hey, the shoe does fit. But I have no evidence that this is what actually transpired. It would be nice for somebody to eventually set the record straight as to who leaked the paper, how it happened, and what intentions they had.
Google also published supplemental information, linked to their paper, with deeper technical details of their work:
- Supplementary information: Quantum supremacy using a programmable superconducting processor
And Google posted a blog post announcing publication, also on October 23, 2019:
- Quantum Supremacy Using a Programmable Superconducting Processor
IBM did not agree with Google’s claim. They published their own paper rebutting the claim. In fact, they did so on October 21, 2019, two days before Google actually officially made the claim in public. Again, curious timing.
IBM’s summary blog post:
IBM’s rebuttal paper:
- Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits
Professor Scott Aaronson has an excellent blog post which dives into various aspects of the dispute, posted on October 23, 2019:
That’s the same date the Google paper was published, but IBM’s paper had been out for five days, the leaked version of Google’s paper had been out for over a month, and… Aaronson had been one of the reviewers of the Google paper, so the quick reaction is not a big surprise.
Professor John Martinis, leader of the Google project and research scientist for Google, gave an excellent presentation of the project on November 1, 2019 at Caltech:
- Quantum Supremacy Using a Programmable Superconducting Processor
Although IBM gets credit for announcement and availability of a 53-qubit machine, Google gets credit for disclosing technical details and actually achieving and publishing results for running a significant algorithm. As of November 25, 2019 IBM has not published technical details for their machine and no technical papers have been published on running algorithms on more than 20 qubits.
What should we look for next?
- Basically, now we wait — for use cases in each application category to gradually achieve quantum supremacy, one use case at a time, slowly over time, as hardware and the ability to design algorithms that exploit that hardware gradually advance. Which practical use case will be first? Nobody knows, yet.
- Will Google turn this research project into a production machine? More than one machine?
- Will someone attempt to reproduce Google’s results on the IBM 53-qubit machine?
- Will someone attempt to reproduce Google’s results on some other machine? IonQ with trapped ions? Honeywell? After all, reproducibility is supposed to be a hallmark of science.
- What will be the next incremental advance in qubits and coherence time?
- Might we see some dramatic technological breakthrough over the next year or two, or only gradual, incremental progress?
- When will we see significant quantum supremacy — an interesting number of applications where quantum supremacy has been achieved?
- When will we see true quantum supremacy — quantum supremacy across a wide range of categories of applications, for most of the compute-intensive application categories?
- When will we see the ENIAC moment — a significant production application on beefier hardware?
- When will we see the FORTRAN moment — an expressive higher-level programming language which dramatically simplifies algorithm development and enables widespread mainstream usage?
- When will we see quantum algorithmic breakout — both hardware and algorithm development, as well as a trained workforce achieve a critical mass which enables widespread mainstream usage?
- When will we see a true universal quantum computer, which combines classical and quantum computing in the same machine with zero latency to transition between the two modes of computing?
What application categories might become ripe for quantum computing in the coming years?
See my paper, What Applications Are Suitable for a Quantum Computer?.
For more of my writing on quantum: List of My Papers on Quantum Computing.