Like the confusing math behind quantum computing, some of the expectations surrounding this still-impractical technology might make you dizzy. If you looked out of an airplane window over SFO right now, you'd see a haze of quantum hype floating over Silicon Valley. But the enormous potential of quantum computing is undeniable, and the hardware needed to exploit it is advancing rapidly. If ever there was a perfect time to wrap your brain around quantum computing, it's now. Say "Schrödinger superposition" three times quickly and we can dive in.

Explaining the history of quantum computing

The prehistory of quantum computing begins in the early 20th century, when physicists began to feel they had lost touch with reality.

First, conventional explanations of the subatomic world turned out to be incomplete. Electrons and other particles didn't just move around neatly, like Newtonian billiard balls, for example; sometimes they behaved like waves instead. Quantum mechanics emerged to explain such quirks, but it brought troubling questions of its own. To take just one wrinkled example: the new mathematics implied that physical properties of the subatomic world, such as the position of an electron, existed only as *probabilities* until they were measured. Before you measure the location of an electron, it is neither here nor there, but everywhere with some probability. You can think of it like a quarter tossed in the air: before it lands, the quarter is neither heads nor tails, but has some probability of both.

If this confuses you, you are in good company. One year before receiving the Nobel Prize for his contributions to quantum theory, Richard Feynman of the California Institute of Technology remarked that "nobody understands quantum mechanics." Our everyday intuitions about the world are simply incompatible with it. But some people have understood it well enough to redefine our understanding of the universe. And in the 1980s, some of them, including Feynman, began to wonder whether quantum phenomena, such as the probabilistic existence of subatomic particles, could be used to process information. The underlying theory, or blueprint, for quantum computers that took shape in the '80s and '90s still guides Google and other companies working on the technology.

Before we belly flop into the shallows of Quantum Computing 0.101, we need to brush up on our understanding of regular old computers. As you know, smartwatches, iPhones, and the world's fastest supercomputer all basically do the same thing: they perform calculations by encoding information as digital bits, that is, zeros and ones. A computer can switch the voltage in a circuit on and off, for example, to represent ones and zeros.
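The "everything is bits" idea above is easy to see in a few lines of code. This is just an illustrative sketch; the helper names `to_bits` and `from_bits` are made up for this example, not a real API.

```python
# A minimal sketch: every piece of classical data is ultimately
# stored as a string of bits (0s and 1s).

def to_bits(n: int, width: int = 8) -> str:
    """Encode an integer as a fixed-width string of 0s and 1s."""
    return format(n, f"0{width}b")

def from_bits(bits: str) -> int:
    """Decode a bit string back into an integer."""
    return int(bits, 2)

# The letter 'Q' is just the number 81, which is just eight bits:
print(to_bits(ord("Q")))           # -> 01010001
print(chr(from_bits("01010001")))  # -> Q
```

Everything a classical machine does, from spreadsheets to video, bottoms out in operations on strings like these.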

Quantum computers also perform calculations using bits. After all, we want them to connect to our existing data and computers. But quantum bits, or qubits, have unique and powerful properties that allow a group of them to do much more than an equivalent number of ordinary bits.

Qubits can be built in many different ways, but they all represent digital 0s and 1s using the quantum properties of something that can be controlled electronically. Popular examples, at least among a very select slice of humanity, include superconducting circuits and individual atoms levitating inside electromagnetic fields. The magical power of quantum computing is that these arrangements allow qubits to do more than just switch between 0 and 1. Handle them properly and they can flip into a mysterious additional mode called superposition.

You may have heard that a qubit in superposition is *both* 0 and 1 at the same time. That is not entirely true and not entirely false. A qubit in superposition has some *probability* of being 1 or 0, but it does not represent either state definitively, just like our quarter tossed into the air is neither heads nor tails, but has some probability of both. In the simplistic and, dare we say, perfect world of this explainer, the important thing to know is that the mathematics of superposition describes the probability of finding a 0 or a 1 when a qubit is read. The act of reading a qubit's value collapses it out of that mixture of probabilities into one definite state, like a quarter landing on the table one side up. A quantum computer can use a collection of qubits in superposition to work through different possible paths in a computation. If done correctly, the pointers to the wrong paths cancel out, leaving the correct answer when the qubits are read out as 0s and 1s.
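The quarter-in-the-air picture can be sketched as a toy simulation. To be clear about the assumptions: real qubits are described by complex amplitudes, not a single probability, and the `ToyQubit` class below is invented for this explainer. It shows only the two ideas from the paragraph above: superposition as a probability of reading 1, and measurement collapsing that probability into a definite state.

```python
import random

class ToyQubit:
    """A cartoon qubit: just the probability of reading 1."""

    def __init__(self, p_one: float):
        self.p_one = p_one  # probability of measuring 1

    def measure(self) -> int:
        # Reading the qubit forces it into a definite 0 or 1,
        # like the tossed quarter finally landing on the table.
        outcome = 1 if random.random() < self.p_one else 0
        self.p_one = float(outcome)  # after measurement, the state is definite
        return outcome

# An equal superposition: a 50/50 chance of 0 or 1, like a fair coin in the air.
q = ToyQubit(p_one=0.5)
first = q.measure()
# Once measured, the superposition is gone: every re-read gives the same answer.
assert all(q.measure() == first for _ in range(10))
```

What this toy model cannot capture is interference, the way amplitudes of wrong answers cancel out, which is where the real computational power comes from.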

This allows a quantum computer to solve certain problems that are very time-consuming for conventional computers in far fewer steps. Grover's algorithm, a famous quantum search algorithm, could find you in a phone book of 100 million names in just 10,000 operations. If a classical search algorithm simply checked every entry until it found you, it would take 50 million operations on average. For Grover's and some other quantum algorithms, the bigger the original problem, or phone book, the further the conventional computer gets left behind in the digital dust.
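The numbers in that comparison come from a simple scaling rule: an unstructured classical search of N entries takes about N/2 lookups on average, while Grover's algorithm needs on the order of √N quantum operations. A quick back-of-the-envelope check (the function names are ours, and the Grover count is the idealized order-of-magnitude figure, ignoring constant factors):

```python
import math

def classical_average_lookups(n: int) -> int:
    # Checking entries one by one finds the target after ~n/2 tries on average.
    return n // 2

def grover_operations(n: int) -> int:
    # Grover's algorithm needs on the order of sqrt(n) operations.
    return round(math.sqrt(n))

n = 100_000_000  # a phone book of 100 million names
print(classical_average_lookups(n))  # -> 50000000
print(grover_operations(n))          # -> 10000
```

Double the phone book and the classical cost doubles, while the quantum cost grows only by a factor of about 1.4, which is why the gap widens as problems get bigger.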

The reason we don't have useful quantum computers today is that qubits are extremely finicky. The quantum effects they must maintain are very delicate, and stray heat or noise can flip 0s and 1s or erase an essential superposition. Qubits must be carefully shielded and operated at very low temperatures, sometimes only fractions of a degree above absolute zero. A major area of research involves developing quantum error-correction algorithms that let a machine fix the mistakes its glitching qubits introduce. So far, these algorithms have been difficult to implement because they consume so much of a quantum processor's capacity that practically nothing is left over for solving problems. Some researchers, most notably at Microsoft, hope to sidestep this problem by developing a type of qubit built from clusters of electrons, known as a topological qubit. Physicists predict that topological qubits will be more resistant to environmental noise and therefore less error-prone, but so far they have struggled to create even one. After announcing a hardware breakthrough in 2018, Microsoft researchers retracted their work in 2021 after other scientists uncovered experimental errors.

Still, companies have demonstrated the promise of their limited machines. In 2019, Google used a quantum computer with 53 qubits to generate numbers that follow a particular mathematical pattern faster than a supercomputer could. The demonstration kicked off a series of so-called "quantum advantage" experiments, with a research team in China announcing its own demo in 2020 and the Canadian startup Xanadu announcing theirs in 2022. (These feats were originally billed as demonstrations of "quantum supremacy," but many researchers decided to change the name so as not to echo "white supremacy.")

Meanwhile, researchers have successfully modeled small molecules using a handful of qubits. These simulations don't yet do anything beyond the reach of classical computers, but they could if they were scaled up, potentially helping to discover new chemicals and materials. While none of these demonstrations has direct commercial value yet, they have bolstered confidence in, and investment into, quantum computing. After 30 years of tormenting computer scientists, practical quantum computing may not be exactly close, but it has begun to seem much closer.