Proof emerges a quantum computer can outperform a classical one
Leaked paper gives the game away / Sep 26th 2019
“In an article published in 2012, John Preskill, a theoretical physicist, posed a question: “Is controlling large-scale quantum systems merely really, really hard, or is it ridiculously hard?” Seven years later the answer is in: it is merely really, really hard.
An unexpected find: the Q and A following Stephen Hawking's Caltech Public Lecture on 11 March 1998. Back then, Stephen would answer audience questions in real time, so Kip Thorne and I would entertain the audience while Stephen prepared his response. https://t.co/dskAMKdLFp
— John Preskill (@preskill) August 10, 2019
Last week a paper on the matter was—briefly and presumably accidentally—published online. The underlying research had already been accepted by Nature, a top-tier scientific journal, but was still under wraps. The leak revealed that Google has achieved what Dr Preskill dubbed in his article, “quantum supremacy”. Using a quantum computer, researchers at the information-technology giant had carried out in a smidgen over three minutes a calculation that would take Summit, the world’s current-best classical supercomputer, 10,000 years to execute.
A credible demonstration of quantum supremacy, which few disagree that the leaked paper represents, is indeed a milestone. It will divide the history of the field into two eras: a “before”, when quantum computers were simply hoped to outpace even the best classical kind, and an “after”, when they actually did so. There has been much talk, including in this newspaper, about the latter era. Now it has arrived.
Google’s experiment was “circuit sampling”: checking whether numbers their machine spits out, given random inputs, fit a particular pattern. This niche task was chosen to be easy for a quantum computer while still being checkable—just—by a classical one. It does, though, confirm that quantum computers may in time be able to handle long-standing matters of practical importance. These include designing new drugs and materials, giving a boost to the field of machine learning, and making obsolete the cryptographic codes that lock up some of the world’s secrets.

Quantum computers employ three counterintuitive phenomena. One is “superposition”, the idea behind Schrödinger’s famous dead-and-alive cat. Unlike classical bits, which must be either one or zero, “qubits” may be a mixture of both.
We'll discuss John @preskill's paper "Quantum Computing in the NISQ era and beyond" during our next meeting, which will be Thursday, 29 August, over lunch. Come say hi and learn about quantum computing in the near term! https://t.co/TqzNXJc5b8
— Papers We Love Columbus (@PapersWeLoveCMH) August 7, 2019
Google’s machine has 53 qubits, which between them can represent nearly ten million billion possible superposed states. The second is “entanglement”, which ties quantum particles together across time and space. In standard computers each bit is rigorously sequestered from the next. Quantum machines like their qubits entangled. Mathematical operations on superposed and entangled qubits can act, to a greater or lesser degree, on all of them at once. A quantum calculation starts by addressing qubits individually: making one of them mostly zero, say, and then entangling it with its neighbour by a certain amount. That done, it lets the rules of physics play out, with the qubits’ states and linkages evolving over time. At the end (but not before, which would ruin the calculation), the qubits are examined simultaneously to obtain an answer.
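The arithmetic behind “nearly ten million billion possible superposed states” can be checked directly. A minimal numpy sketch of the state-vector picture (an illustration, not Google’s actual software stack):

```python
import numpy as np

# A single qubit is a length-2 vector of complex amplitudes.
zero = np.array([1, 0], dtype=complex)   # the classical-like |0> state
one = np.array([0, 1], dtype=complex)    # the classical-like |1> state

# An equal superposition: the "coin flipping in midair".
plus = (zero + one) / np.sqrt(2)

# n qubits need 2**n amplitudes: 53 qubits span 2**53 basis states.
print(2 ** 53)  # 9007199254740992, i.e. nearly ten million billion

# Two entangled qubits: the Bell state (|00> + |11>) / sqrt(2).
# The qubits are not "sequestered": measuring one fixes the other.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]: only 00 or 11, never 01 or 10
```

The doubling per qubit is the whole point: each extra qubit doubles the number of amplitudes a classical description must track.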
The trick is to maximise the chance of choosing the right answer instead of one of the zillions of wrong ones. This is where the third counterintuitive idea comes in. In classical physics, probabilities must be positive—a 30% chance of rain, say. Quantum mechanics uses a related concept, called “amplitudes”. These can be negative as well as positive. By ensuring that amplitudes which represent wrong answers cancel each other out, while those that represent the right one reinforce, programmers can home in with high confidence on the correct solution.

That is the explanation which textbooks present, anyway. In the laboratory, things are rather more difficult. Superpositions and entanglements are exceedingly delicate phenomena. Even the jiggling of adjacent molecules can interrupt them and sully a calculation. Most designs for quantum computers require the machines to be stored at temperatures colder than that of deep space, and to be tended by a basement full of PhDs, to keep things on track.
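The cancellation described above is ordinary arithmetic on amplitudes. A toy sketch of my own (not a real quantum program) showing how this differs from classical probability:

```python
# Classical probabilities only ever add; quantum amplitudes can cancel.
# An outcome's probability is the squared magnitude of its total amplitude.

# Two computational paths lead to a WRONG answer, engineered out of phase:
wrong = 0.5 + (-0.5)       # amplitudes cancel out
print(abs(wrong) ** 2)     # 0.0 -- the wrong answer never shows up

# Two paths lead to the RIGHT answer, arriving in phase:
right = 0.5 + 0.5          # amplitudes reinforce
print(abs(right) ** 2)     # 1.0 -- the right answer is certain
```

Amplitudes are complex numbers in general; positive and negative real values are enough to show the interference.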
No height of education or depth of cold, though, can altogether preclude errors creeping in. The biggest problem facing quantum engineers is how to spot and correct these, because most of the useful applications of quantum computing will require many, many more qubits than current devices sport—with a concomitant increase in the risk of errors. That has spurred a huge effort, both by well-known firms such as IBM, Intel and Microsoft, and by an eager band of newcomers, such as Rigetti, to build better, less error-prone kit. There is also, in parallel with this race to build better machines, a race to develop useful quantum algorithms to run on them. The most famous example so far is probably Shor’s algorithm. This is the piece of quantum-turbocharged maths that allows rapid factorisation of large numbers into their component primes, and thus scares cryptographers, a group whose trade depends on this being a hard thing to do.
— John Preskill (@preskill) September 26, 2019
But if quantum computers are really to earn their keep, then other algorithms will be needed. Developing them will be assisted by the fact that a lot of the proposed applications (drug design, materials science and so on) themselves depend on quantum processes. This, indeed, is why those applications have been so intractable until now. Despite the promise of quantum computing, many in the field are uncomfortable with the phrase “quantum supremacy”, for it implies a threshold that, once crossed, leaves decades of existing computer science in the dust for something weird and wonderful. And for all the “before” and “after” that Google’s paper represents, building practical, error-corrected machines will be far from easy.
John Martinis: "Everyone understands that in the long term we want to do error correction." After thinking about quantum error correction for 25 years, I'm excited that we now have the hardware to explore it seriously. https://t.co/35vdQDdrZ8
— John Preskill (@preskill) September 29, 2019
It is therefore a mistake, most people think, to believe that quantum computing will replace the classical sort. The practicalities of low-temperature operation alone are likely to see to this. Governments, big firms and the richer sorts of university will, no doubt, buy their own machines. Others will rent time on devices linked to quantum versions of the cloud. But the total number of quantum computers will be limited. And that will be fine. But it is worth bearing in mind a similar prediction of limited demand made in the early days of classical computing. In 1943 Thomas Watson, then boss of IBM, is alleged to have said, “I think there is a world market for maybe five computers.” He was out by a factor of perhaps a billion.”
What the Hell Is a Quantum Computer and How Excited Should I Be?
by Ryan F. Mandelbaum / 11/07/17
“They will never sit on your desk, and they will most certainly never fit in your pocket. Today, they’re fragile, and need to be kept at temperatures close to absolute zero. Quantum computers aren’t much like the desktop PCs we’re all so familiar with—they’re a whole new kind of machine, capable of calculations so complex, it’s like upgrading from black-and-white to a full color spectrum.
Lately, you’ve been hearing a lot about quantum computing. There are news stories about how it “could change the world” and “open new dimensions.” Universities are hyping up their quantum microchip prototypes, demonstrations of quantum mechanical ideas in silicon, and other devices and theories. But come on, how does it work? What does it do? Who’s doing it? And, most importantly, why should you care?
Despite what you’ve heard, right now, quantum computing is more or less in the era that classical computing was in the ‘50s, when room-sized hulks ran on vacuum tubes. But it could revolutionize computing. Potentially. Maybe. Before you learn what a quantum computer is and why it matters, let’s break down the mathematical theory of quantum mechanics. It may sound esoteric, but the rules of quantum mechanics govern the very nature of the particles that make up our universe, including those of your electronics and gadgets.
In our universe, we are used to a thing being one thing. A coin, for example, can be heads, or it can be tails. But if the coin followed the rules of quantum mechanics, the coin would be flipping in midair. So until it lands and we look at it, we don’t know if it’s heads or tails. Effectively, it’s both heads and tails at the same time. We do know one thing about this coin. There is a probability for the flipping coin to be either heads or tails.
So the coin isn’t heads, it’s not tails, it’s—for example—the probability of 20% heads and 80% tails. Scientifically speaking, how can a physical thing be like this? How do we even begin to describe it? The most mind-boggling part of quantum mechanics is that for some reason, particles like electrons seem to act like waves, and light waves like particles. Particles have a wavelength. The most basic experiment demonstrating this fact is the double slit experiment:
If you put a pair of parallel slits in a partition between a beam of particles and a wall, and put a detector on the wall to see what happens, a strange pattern of stripes appears. It’s called an interference pattern.
Like waves, the particle-waves that travel through one slit interfere with those that travel through the other slit. If the peak of the wave aligns with a trough, the particles cancel out and nothing shows up. If the peak aligns with another peak, the signal in the detector is even brighter. (This interference pattern still exists even if you only send one electron at a time.) If we were to describe one of these wave-like particles (before they hit the wall) as a mathematical equation, it would look like the mathematical equation describing our coin (before it hits the ground and lands on heads or tails). These equations can look kind of scary, like this:
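The equation image that followed here in the original article is missing from this copy. As a stand-in (illustrative, not the original figure), the double-slit state is usually written as a sum of two waves, with the detection probability given by the squared magnitude:

```latex
\psi(x) = \frac{1}{\sqrt{2}}\bigl(\psi_1(x) + \psi_2(x)\bigr),
\qquad
P(x) = \lvert \psi(x) \rvert^{2}
```

Here the two terms are the waves passing through each slit. The cross term between them in P(x) is what produces the stripes: where the waves are out of phase it subtracts, where they are in phase it adds.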
But all you need to know is that this equation lists the particle’s definite properties but doesn’t say which one you’ll get. (We don’t know that yet.) You can use this equation to find the probabilities of some of the particle’s properties. And because this math involves complex numbers—those containing the square root of -1, or i—it doesn’t just describe the probability of a coin being heads or tails, it describes an advanced probability, which could include the way the face of the coin will be rotated. From all this crazy math, we get a few crazy things. There’s superposition—the midair coin being heads and tails at the same time.
There’s interference—probability waves overlapping and cancelling each other out. And there’s entanglement, which is like if we tied a bunch of coins together, changing the probability of certain outcomes because they’re, well, entangled now. These three crazy things are exploited by quantum computers to make whole new kinds of algorithms. “In some sense we’ve been doing the same thing for 60 years. The rules we use to compute have not changed—we’re stuck with bits and bytes and logic operations,” Martin Laforest, Senior Manager of Scientific Outreach at the Institute for Quantum Computing at the University of Waterloo in Canada, tells Gizmodo. But that is all about to change.
“Quantum computers turn the rules of computers on their heads.” Traditional computers do their computation using bits, which can be stored as electrical charges in processors or even tiny pits drilled into CDs. A bit only has two choices, which we represent as one and zero. Anything with two choices you can pick from is a bit. All computing is done via setting and relating bits, with operations like “if this bit is a zero and this bit is a one, make this third bit a one, otherwise make it a zero,” and so on and so forth.
The qubit, short for quantum bit, is like a regular bit, but it’s both a zero and a one at the same time (before you look at it). It’s that coin flipping in midair. A quantum computer is like flipping multiple coins at the same time—except while these coins are flipping, they obey the wacky rules of superposition, interference and entanglement. The quantum computer first bestows the qubits with this quantum mechanical version of probability of what will happen once you actually peep the qubit. (Once you peep the mysterious qubit though, it stops being mysterious and becomes a defined bit.)
Quantum mechanical computations are made by preparing the qubits (or adding weights to a coin before you flip it to manipulate the probability of the outcome), then interacting them together (or flipping a bunch of entangled coins at once) and then measuring them (which causes the coins to stop flipping and produces the final value). If done properly, all of this mid-air interaction should result in a best answer (the value) to whatever question you’ve asked the computer.
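The prepare/interact/measure recipe above can be followed end to end in a few lines of numpy. This is a generic state-vector sketch using the standard Hadamard and CNOT gates, not the code of any particular machine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prepare: a Hadamard gate puts qubit 0 into an equal superposition
# ("adding weights to a coin before you flip it").
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.array([1, 0, 0, 0], dtype=complex)  # two qubits, both |0>
state = np.kron(H, I) @ state

# Interact: a CNOT gate entangles qubit 1 with qubit 0
# ("flipping a bunch of entangled coins at once").
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state

# Measure: sampling collapses the superposition to one definite bitstring.
probs = np.abs(state) ** 2                     # [0.5, 0, 0, 0.5]
outcome = rng.choice(["00", "01", "10", "11"], p=probs)
print(outcome)  # "00" or "11" -- the entangled qubits always agree
```

Note that measurement happens once, at the end; peeking at the state mid-computation would collapse it and ruin the result.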
"Many researchers have cautioned that NIST should take its time to evaluate the new class of candidates for post-quantum cryptography before selecting any finalists."https://t.co/EcYRwopNx2
— John Preskill (@preskill) September 7, 2019
Quantum computing is special. As we said before, because its math uses complex numbers, it computes a special version of probabilities—not just heads vs. tails but also the orientation of the coin. So as you throw these coins up in the air, they bump into each other with their different sides and orientations, and some of this bumping changes the probability of the side revealed by the outcome. Sometimes they bump into each other and cancel each other out, making certain outcomes less likely. Sometimes they push each other along, making certain outcomes more likely.
All this is interference behavior. “The idea with a quantum computer is that you take this phenomenon and exploit it on a massive scale,” said Scott Aaronson, theoretical computer scientist at the University of Texas, Austin. “The idea is to choreograph a pattern of interference” so that everything cancels out except for the answer you were looking for. You want the coins to interfere in the air. To the observer, the answer just looks like the output of regular bits. The quantum mechanics happens in the background.
It was famous physicist Richard Feynman who’s credited as dreaming up the first quantum computer in a 1982 paper—a computer that could use quantum mechanics to solve certain problems. But it was like first coming up with a new way of notating music, but no instrument to play it on and no compositions written. It wasn’t until mathematicians began devising algorithms for this computer to use that it became a more reasonable dream to pursue. Theorists wrote the compositions (the algorithms), while physicists worked on building the instruments (the physical quantum computers).
But okay, now you just have these weird quantum bits whose output you can’t guess beforehand. Now you have to figure out how you can use them. Today, there are several places where researchers think using a quantum computer could solve certain problems better than a classical computer. Most obviously, you can use these quantum bits to create simulations of other things that follow the crazy rules of quantum mechanics: namely, atoms and molecules. Scientists can use qubits to model entire molecules and their interactions. This could help drug companies devise new medicines, or create new materials with desired properties, before ever setting a foot into a lab.
Scientists have already been able to model these molecules using classical computing, but quantum computing offers a huge speedup. Fully representing the behavior of the caffeine molecule, including the relevant quantum mechanical rules of its individual particles, might take 160 qubits, explained Robert Sutor, vice president of Cognitive, Blockchain, and Quantum Solutions at IBM. Doing so with a classical computer to that level of detail would require around the same number of bits (10^48) as there are atoms on planet Earth (between 10^49 and 10^50). IBM has already modeled the far lighter beryllium hydride molecule using a six-qubit quantum computer. Researchers at Lawrence Berkeley National Laboratory determined all of the energy states of a hydrogen molecule with their own two-qubit quantum computer.
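The headline numbers can be sanity-checked: 160 qubits correspond to a state vector of 2^160 amplitudes, which is indeed in the neighborhood of the 10^48 figure quoted:

```python
# 160 qubits -> 2**160 basis-state amplitudes in a full description.
print(2 ** 160)             # about 1.46e48
print(len(str(2 ** 160)))   # 49 digits, i.e. on the order of 10**48
```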
Quantum computation center opens https://t.co/4IOYZnJUzh
— Jay Gambetta (@jaygambetta) September 19, 2019
There are other algorithms that researchers think might provide some sort of speedup over classical computers. Grover’s algorithm, for example, can help optimize searching. Some are working on using quantum computing in artificial intelligence, or in optimization problems such as “find the biggest mountain in this mountain range” and “find the fastest route between these two points separated by several rivers crossed by several bridges.”
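Grover’s algorithm is small enough to simulate classically at toy sizes. Here is a sketch of the standard oracle-plus-diffusion iteration (an illustration, not tied to any real hardware): it finds one marked item among N in about sqrt(N) steps, versus the classical average of N/2 guesses.

```python
import numpy as np

n, marked = 3, 5          # search among N = 2**3 = 8 items for item 5
N = 2 ** n

amp = np.full(N, 1 / np.sqrt(N))   # start in an equal superposition

for _ in range(2):                 # about (pi/4) * sqrt(8) ~ 2 iterations
    amp[marked] *= -1              # oracle: flip the marked item's sign
    amp = 2 * amp.mean() - amp     # diffusion: reflect about the mean

print(int(np.argmax(np.abs(amp) ** 2)))           # 5 -- the marked item
print(round(float(np.abs(amp[marked]) ** 2), 3))  # 0.945 success probability
```

Each iteration nudges amplitude toward the marked item, which is exactly the choreographed interference described elsewhere in this piece.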
But perhaps the most talked-about quantum computer algorithm is something called Shor’s algorithm, which could change the way almost all of our data is encrypted. Devised by Peter Shor in 1994, its purpose is to factor numbers into primes. I literally mean the factoring you learned in elementary school, the way that you can break 15 into its factors, 3 and 5. Multiplying numbers together is a simple computational task, but breaking big numbers into their factors takes far longer. Modern cryptography is based on this asymmetry, so lots of your data is, in its most simplified form, encrypted “securely” by converting things into numbers, multiplying them together and associating them with a “key”—instructions on how to factor them. RSA encryption is used almost everywhere, from passwords to banking to your social media. But if a quantum computer comes along that can run Shor’s algorithm and break the encryption, that old encryption method is no longer secure.
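The asymmetry RSA leans on is easy to feel in code. A toy sketch (naive trial division, nothing like Shor’s actual quantum routine):

```python
def factor(n):
    """Naive trial division: fine for 15, hopeless for RSA-sized numbers."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime

print(3 * 5)       # 15 -- multiplying is the easy direction
print(factor(15))  # (3, 5) -- undoing it means searching for divisors
```

For the hundreds-of-digits semiprimes used in real RSA keys, this loop would outlast the universe; Shor’s algorithm is what would collapse that gap.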
According to everyone I spoke with, breaking RSA encryption is decades away, but scientists are already well on their way to developing post-quantum cryptography: new math that can be used to encode data. The idea is to base encryption on mathematics that a quantum computer cannot crack any faster than a classical one. Meanwhile, other researchers are scrambling to break the popular RSA encryption system with quantum computers before a hacker does. “I suppose on that level, it’s like the Cold War,” said Stephan Haas, University of Southern California theoretical physicist. “You’re getting nuclear weapons because the other guy is getting nuclear weapons.”
The quantum revolution is coming, and Chinese scientists are at the forefront. China's progress worries some U.S. officials and scientists. https://t.co/jsp9dj4kWp
— Jeanne Whalen (@JeanneWhalen) August 18, 2019
Scientists needed transistors, teeny electrical switches, to store bits and make regular computers. Similarly, they need hardware that can store a quantum bit. The key to producing a quantum computer is finding a way to model a quantum system that folks can actually control—actually set the probabilities and orientations of those flipping coins. This can be done with atoms trapped by lasers, photons, and other systems. But at this point, most everyone in the industry who’s presented a quantum computer has done so with superconductors—ultra-cold pieces of specially-fabricated electronics. They look like teeny microchips. Except these microchips get placed into room-sized refrigerators cooled to temperatures just above absolute zero.
These superconducting qubits stay quantum for a long time while performing quantum computing operations, explained Irfan Siddiqi from the University of California, Berkeley. He said that other types of systems can stay quantum for longer, but are slower.
Rob Schoelkopf: "[Lego bricks] are really cheap. They can be mass-produced. And they always plug together the right way."https://t.co/phe5jT1GCm
— John Preskill (@preskill) September 14, 2019
There are three kinds of qubits made from these electronics. They’re called flux, charge, and phase qubits, differing in the specifics of their construction and their physical properties. All of them rely on something called a Josephson junction in order to work. A Josephson junction is a tiny piece of non-superconducting insulator placed between superconducting wires, in which electrons travel without any resistance and begin to show off obvious quantum effects in larger systems.
Manipulating the current through the wires allows physicists to set up qubits in these systems. As of today, these systems are very fragile: any sort of noise can collapse them into classical bits. And every additional qubit adds more complexity. The biggest quantum computers today have fewer than 20 qubits, with one exception, the D-Wave computer, whose 2,000 qubits operate on a separate, more specific principle that we’ll dig into later.
Actually performing calculations with these qubits can be a challenge. Regular computers have error correction, or built-in redundancies: places where multiple bits perform the same function in case one of them fails. For quantum computers to do this, they need extra qubits built into the system specifically to check for errors. But the nature of quantum mechanics makes error correction more difficult than it is in classical computers.
It could take around two thousand physical qubits working in tandem, in fact, to create one reliable “error-corrected” qubit resistant to messing up. But we’re getting closer. “There’s a lot of healthy progress that wouldn’t have been imaginable two years ago,” said Debbie Leung on the faculty at the Institute for Quantum Computing at the University of Waterloo.
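The redundancy idea can be sketched with its classical ancestor, the three-bit repetition code. This is an analogy only: quantum codes cannot simply copy qubits (the no-cloning theorem forbids it), which is part of why thousands of physical qubits per logical qubit are expected.

```python
import random

random.seed(1)

def noisy(bit, p):
    """Flip a stored bit with probability p."""
    return bit ^ (random.random() < p)

def logical_read(bit, p=0.1):
    """Store one logical bit as three noisy copies; majority vote to read."""
    copies = [noisy(bit, p) for _ in range(3)]
    return int(sum(copies) >= 2)

errors = sum(logical_read(1) != 1 for _ in range(10_000))
print(errors / 10_000)  # roughly 0.028, well below the 0.1 per-copy rate
```

Three copies fail only when two or more flip at once: probability 3p^2(1-p) + p^3, about 0.028 for p = 0.1. Redundancy buys reliability, at the cost of more hardware.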
— John Preskill (@preskill) September 18, 2019
“A quantum computer will always have errors,” said Laforest. Thankfully, modeling molecules doesn’t need quite the same level of accuracy, said Siddiqi, which is why researchers have plowed forward with these types of simulations in few-qubit systems. Better qubits and further research continue to bring us closer to that threshold. “Now we’re at the junction where the theoretical demand versus the reality of experiments are converging together,” said Laforest.
Universities, national labs, and companies like IBM, Google, Microsoft and Intel are pursuing qubit setups in logic circuits similar to regular bits, all with fewer than 20 qubits so far. Companies are simultaneously simulating quantum computers with classical computers, but around 50 qubits is seen as the limit—IBM recently simulated 56 qubits, which took 4.5 terabytes of memory on a classical computer.
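The roughly-50-qubit ceiling is at heart a memory statement. A quick sketch, assuming a brute-force state vector at 16 bytes per complex amplitude (IBM’s 56-qubit run fit in 4.5 TB only by using cleverer, sliced representations rather than this naive layout):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold all 2**n complex amplitudes explicitly."""
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 40, 50):
    print(n, f"{state_vector_bytes(n) / 1e12:.2f} TB")
# 30 -> 0.02 TB, 40 -> 17.59 TB, 50 -> 18014.40 TB
```

Every added qubit doubles the requirement, so classical simulation hits a wall just where quantum hardware is starting to operate.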
Each company we spoke to has a slightly different approach to developing their superconducting machines. Sutor from IBM told Gizmodo the company is taking a long-term approach, hoping to one day release a general-purpose quantum computer that classical computers rely on, when needed, through the cloud. Intel has just entered the race with their 17-qubit processor released in October. Microsoft showed off their consumer-facing software suite to Gizmodo, and described a similar long-term goal for quantum computing involving scalable hardware.
Rumors are swelling that before the end of this year, Google will unleash a quantum computer that will achieve “quantum supremacy” with 49 or 50 qubits. Quantum supremacy simply means finding one single algorithm for which a quantum computer always wins, and for which a classical workaround can’t be found to solve the same problem. This is just one milestone, though. “It will probably be a contrived task, something not classically important,” said Aaronson.
Still, he said, “I think at that point it raises the stakes for the skeptics, for the people who have said and continue to say that it’s a pipe dream.” The other companies seemed to agree and stressed their long-term goals for quantum computing. Google did not respond to a request for comment. While 2017 seems to be a year amidst a sort-of quantum boom, everyone I spoke to was realistic about just how far from a consumer-facing product quantum computing is. “Looking at 2020, 2021 we’ll start seeing the advantage for real users, corporations, and scientific research,” Sutor said.
But one controversial company, D-Wave, is instead doing a different kind of quantum computing called adiabatic quantum computing. Rather than just a dozen to a few dozen qubits, they’ve announced a computer with 2,000. And rather than rely on quantum logic circuits like the rest of the pack, their computer solves one type of problem—optimization problems, like finding the best solution from a range of okay solutions, or finding the best taxi route from point A to point B staying as far as possible from other taxis. These kinds of problems are potentially useful in finance. Unlike its competitors, D-Wave doesn’t need its qubits to be error-corrected. Instead, it copes with errors by running the algorithm many times per second. “Is it a general purpose machine that could run any problem? No,” Bo Ewald, D-Wave’s president, told Gizmodo. “But there aren’t any computers that can run these problems anyway.”
At this point, people agree that D-Wave’s computer is a quantum computer, but are unsure if it’s better than a classical computer for the same problem yet (some of its users report beating classical algorithms, said Ewald). But Ewald just wanted to get quantum computers in front of people now. “If you want to get started with real-world quantum computing today, this is how you do it. NASA, Google, and Los Alamos National Labs have all purchased models or computing space,” said Ewald. Everyone, even Ewald at D-Wave, agrees that we’re far from seeing quantum computers used in everyday life—there’s a lot of excitement but we’re still in the early days. There are a host of challenges, like error correction. Then comes the related problem of transmitting quantum information between distant computers or storing quantum information long term in memory.
Quite ironic for those of us who have been following quantum technology since the mid-90s:
"Out of all the qubit types, superconducting is the most common. However, trapped ion qubits, a relatively new qubit technology, shows a great deal of promise."https://t.co/ZzHDY7JrDd
— John Preskill (@preskill) September 18, 2019
I asked Aaronson whether he thought some startup or some secret effort might come along from out of nowhere and present a super advanced model—he said probably not. “We know who the best scientists are and we’d expect them to be vacuumed up the way physicists were in the Manhattan Project,” he said. “I think it remains a very healthy field, but at the same time it’s true that actually building a useful quantum computer is a massive technological undertaking.” You can’t just build one in your garage.
So no, you cannot own a quantum computer now, nor is it likely that you will ever own a quantum computer. It’s more likely that when your classical computer needs quantum help, you won’t notice it working. You may hear about some benefits of quantum computing in the next few years, like biochemical advances, but other advantages could be 20 years down the line. And overall, there’s no proof a quantum computer is any better than a classical computer. Yet.”
Why I Called It ‘Quantum Supremacy’
by John Preskill
“Researchers finally seem to have a quantum computer that can outperform a classical computer. But what does that really mean? A recent paper from Google’s quantum computing lab announced that the company had achieved quantum supremacy. In 2012, I proposed the term “quantum supremacy” to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful. With that new term, I wanted to emphasize that this is a privileged time in the history of our planet, when information technologies based on principles of quantum physics are ascendant.
We recover this really interesting and well explained interview from @preskill for @ycombinator. He explains how quantum error correction takes advantage of entanglement to protect information in quantum processors. https://t.co/JwgemlwtKd
— Q-LION (@qlion_FT) August 7, 2019
The words “quantum supremacy” — if not the concept — proved to be controversial for two reasons. One is that supremacy, through its association with white supremacy, evokes a repugnant political stance. The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology. I anticipated the second objection, but failed to foresee the first. In any case, the term caught on, and it has been embraced with particular zeal by the Google AI Quantum team.
Chris Monroe, on studies of dynamical phase transitions using a trapped-ion quantum simulator: "We don't run around calling it 'quantum supremacy' because it wasn't as programmable as a fully programmable quantum computer."https://t.co/PI0uaYxPd1
— John Preskill (@preskill) September 30, 2019
I considered but rejected several other possibilities, deciding that quantum supremacy best captured the point I wanted to convey. One alternative is “quantum advantage,” which is also now widely used. But to me, “advantage” lacks the punch of “supremacy.” In a race, a horse has an advantage if it wins by a nose. In contrast, the speed of a quantum computer vastly exceeds that of classical computers, for certain tasks. At least, that’s true in principle. The recent Google paper illustrates the point. They used a device with 53 qubits (the quantum analogues of a classical computer’s bits), and they report that it took just minutes to perform quantum computations that would take today’s most powerful supercomputers thousands of years. Assuming it’s true, this is a remarkable achievement in experimental physics and a testament to the brisk pace of progress in quantum computing hardware; I offer my hearty congratulations to everyone involved.
The catch, as the Google team acknowledges, is that the problem their machine solved with astounding speed was carefully chosen just for the purpose of demonstrating the quantum computer’s superiority. It is not otherwise a problem of much practical interest. In brief, the quantum computer executed a randomly chosen sequence of instructions, and then all the qubits were measured to produce an output bit string. This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative. However, the demonstration is still significant. By checking that the output of their quantum computer agrees with the output of a classical supercomputer (in cases where it doesn’t take thousands of years), the team has verified that they understand their device and that it performs as it should. Now that we know the hardware is working, we can begin the search for more useful applications.
Why is it so important to verify the performance of the hardware? It’s because precisely controlling a quantum computer is notoriously difficult. In a sense, merely looking at a quantum system unavoidably disturbs it, a manifestation of Heisenberg’s famous uncertainty principle. So if we want to use such a system to store and reliably process information, we need to keep that system nearly perfectly isolated from the outside world. At the same time, though, we want the qubits to interact with one another so we can process the information; we also need to control the system from the outside and eventually measure the qubits to learn the results of our computations. It is quite challenging to build a quantum system that satisfies all of these desiderata, and it has taken many years of progress in materials, fabrication, design and control to get where we are now.
The quantum supremacy milestone allegedly achieved by Google is a pivotal step in the quest for practical quantum computers. I thought it would be useful to have a word for the era that is now dawning, so I recently made one up: NISQ. (It rhymes with risk.) This stands for “noisy intermediate-scale quantum.” Here “intermediate-scale” refers to the size of quantum computers that are now becoming available: potentially large enough to perform certain highly specialized tasks beyond the reach of today’s supercomputers. “Noisy” emphasizes that we have imperfect control over the qubits, resulting in small errors that accumulate over time; if we attempt too long a computation, we’re not likely to get the right answer. The Google team has apparently demonstrated that it’s now possible to build a quantum machine that’s large enough and accurate enough to solve a problem we could not solve before, heralding the onset of the NISQ era.
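The “noisy” part of NISQ comes down to compounding: if each gate succeeds with probability p, a depth-d computation succeeds with roughly p to the power d. A back-of-envelope sketch (the 0.5% error rate is a hypothetical round number, not Google’s measured figure):

```python
p = 0.995  # hypothetical 99.5% per-gate fidelity
for depth in (10, 100, 1000):
    print(depth, round(p ** depth, 3))
# 10 -> 0.951, 100 -> 0.606, 1000 -> 0.007: long computations drown in noise
```

This is why NISQ-era algorithms must be shallow, and why error correction is the long-term goal.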
Where do we go from here? Naturally, Google and other hardware builders hope to find practical applications for their quantum devices. A much larger quantum computer might help researchers design new materials and chemical compounds or build better tools for machine learning, but a noisy quantum computer with a few hundred qubits might not be capable of anything useful. Still, we have ideas about how to use NISQ computers that we’re eager to try, which might yield better methods for optimization or more accurate physical simulations, but we’re not sure if any of these will pan out. It will be fun to play with NISQ technology to learn more about what it can do. I expect that quantum computers will have a transformative effect on society, but this may still be decades away.
Following the tweets from #QEC19 (which was a great conference) prompted me to discover that the video from my #QEC17 summary talk is online. Much of it still seems to be relevant. https://t.co/HknaFb1nHP
— John Preskill (@preskill) August 2, 2019
In the 2012 paper that introduced the term “quantum supremacy,” I wondered: “Is controlling large-scale quantum systems merely really, really hard, or is it ridiculously hard? In the former case we might succeed in building large-scale quantum computers after a few decades of very hard work. In the latter case we might not succeed for centuries, if ever.” The recent achievement by the Google team bolsters our confidence that quantum computing is merely really, really hard. If that’s true, a plethora of quantum technologies are likely to blossom in the decades ahead.”