We can fix that

IBM compensates for errors, gets usable results out of quantum processor

May be a way to squeeze useful work out before we get quantum error correction.

John Timmer
IBM's Eagle processor has reached Rev3, which means lower noise qubits. Credit: IBM

Today's quantum processors are error-prone. While the chance of any individual operation failing is small (less than 1 percent in many cases), every operation we perform on every qubit, including basic things like reading its state, carries some error rate. If we try a calculation that needs a lot of qubits, or a lot of operations on a smaller number of qubits, errors become inevitable.

Long term, the plan is to solve that using error-corrected qubits. But these will require multiple high-quality qubits for every bit of information, meaning we'll need thousands of qubits that are better than anything we can currently make. Given that we probably won't reach that point until the next decade at the earliest, it raises the question of whether quantum computers can do anything interesting in the meantime.

In a publication in today's Nature, IBM researchers make a strong case for the answer to that being yes. Using a technique termed "error mitigation," they managed to overcome the problems with today's qubits and produce an accurate result despite the noise in the system. And they did so in a way that clearly outperformed similar calculations on classical computers.

Living with noise

If we think of quantum error correction as a way to avoid the noise that keeps qubits from accurately performing operations, error mitigation can be viewed as accepting that the noise is inevitable. It's a means of measuring the typical errors, compensating for them after the fact, and producing an estimate of the real result that's hidden within the noise.

An early method of performing error mitigation (termed probabilistic error cancellation) involved sampling the behavior of the quantum processor to develop a model of the typical noise and then subtracting the noise from the measured output of an actual calculation. But as the number of qubits involved in the calculation goes up, this method becomes impractical: the amount of sampling required grows too quickly.
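The actual technique operates on quantum circuits, but the core statistical trick can be sketched with invented numbers: the ideal result is expressed as a signed ("quasi-probability") mixture of noisy circuit variants, which are sampled in proportion to the magnitudes of their coefficients and then reweighted by sign. Everything here (the coefficients, the per-variant outcomes) is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy probabilistic error cancellation: the ideal expectation value is a
# signed combination of what three hypothetical noisy circuit variants
# would measure. All numbers are invented for illustration.
coeffs = np.array([1.3, -0.2, -0.1])      # quasi-probabilities; they sum to 1
outcomes = np.array([0.5, 0.8, 0.2])      # what each variant would measure

gamma = np.abs(coeffs).sum()              # sampling overhead factor (1.6 here);
probs = np.abs(coeffs) / gamma            # this factor grows exponentially with
signs = np.sign(coeffs)                   # circuit size, which is the problem

# Sample variants by |coefficient|, reweight each result by sign * gamma.
samples = rng.choice(len(coeffs), size=200_000, p=probs)
estimate = np.mean(signs[samples] * gamma * outcomes[samples])

exact = coeffs @ outcomes                 # the value the estimator targets
print(round(exact, 2), round(estimate, 2))
```

The estimator is unbiased, but its variance scales with `gamma` squared, and `gamma` grows exponentially as circuits get larger, which is why the sampling cost becomes prohibitive.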

So instead, the researchers turned to a method where they intentionally amplified the processor's noise and took measurements at several different noise levels. These measurements are used to fit a function that reproduces the measured outputs. Extrapolating that function back to zero noise then yields an estimate of what the processor would produce without any noise at all.
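The paper's version of this extrapolation is more sophisticated, but the basic idea can be sketched with invented measurements: fit a decay model to expectation values taken at amplified noise levels, then evaluate the fit at zero noise. The noise gains and measured values below are made-up illustrative data, and the exponential model is just one plausible choice of fitting function.

```python
import numpy as np

# Hypothetical expectation values measured at artificially amplified noise
# levels; a gain of 1.0 is the hardware's native noise. Illustrative data.
noise_gains = np.array([1.0, 1.5, 2.0, 2.5])
measured = np.array([0.71, 0.62, 0.54, 0.47])

# Fit an exponential decay A * exp(-b * gain) by doing a linear fit in
# log space, then evaluate the model at gain = 0 for the noiseless estimate.
slope, log_A = np.polyfit(noise_gains, np.log(measured), 1)
zero_noise_estimate = np.exp(log_A)   # model's value at zero noise

print(round(zero_noise_estimate, 3))
```

Note that the extrapolated value sits above every raw measurement: the fit recovers signal that the noise was suppressing, which is the whole point of the technique.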

To test this system, the researchers turned to what's called an Ising model, which is easiest to think of as a grid of electrons where each electron's spin influences that of its neighbors. As you step forward in time, each step sees the spins change in response to the influence of their neighbors, which alters the overall state of the grid. The new configuration of spins will then influence each other, and the process will repeat as time progresses.
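This stepwise evolution of interacting spins can be simulated exactly on a classical computer for small grids by tracking the full quantum state vector. The sketch below does that for a 1D transverse-field Ising chain (the coupling and field strengths are arbitrary choices, not the paper's parameters); because the state vector doubles in size with every added spin, exactly this brute-force approach is what becomes infeasible at the scales IBM tested.

```python
import numpy as np

# Exact statevector simulation of a small 1D transverse-field Ising chain.
# n = 6 spins is trivial; memory doubles per added spin, so large chains
# quickly overwhelm classical machines.
n, steps, dt = 6, 10, 0.1
J, h = 1.0, 1.0  # coupling and transverse-field strengths (arbitrary)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops):
    """Tensor single-site operators into the full 2**n-dimensional space."""
    full = np.array([[1.0 + 0j]])
    for q in range(n):
        full = np.kron(full, site_ops.get(q, I2))
    return full

# H = -J * sum_i Z_i Z_{i+1}  -  h * sum_i X_i
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    H -= J * op_on({i: Z, i + 1: Z})
for i in range(n):
    H -= h * op_on({i: X})

# Build the exact time-step operator U = exp(-i H dt) by diagonalizing H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

# Start with all spins up and apply the time step repeatedly.
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
for _ in range(steps):
    state = U @ state

# Expectation value of Z on the first spin after the evolution.
Z0 = op_on({0: Z})
mz = float(np.real(state.conj() @ Z0 @ state))
print(round(mz, 3))
```

Each added spin doubles the `2**n` state vector, so a 68-spin version of this grid is already far beyond exact simulation on commodity hardware, which is why approximate classical methods enter the picture below.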

While an Ising model involves simplified, idealized behavior, it has features that show up in a variety of physical systems, so such models have been studied widely. (D-Wave, which makes quantum annealers, recently published a paper in which its hardware identified the ground state of these systems, so they may sound familiar.) And as the number of objects in the model increases, their behavior quickly becomes complex enough that classical computers struggle to calculate the system's state.

It's possible to perform the calculations on a quantum computer by performing operations on pairs of qubits. To simplify matters, IBM used an Ising model where the grid was configured in a way that coincides with the physical arrangement of qubits on its processor. But this wasn't a case where the processor was simply being used to model its own behavior; as mentioned above, Ising models exist independently of quantum hardware.

Quantum vs. classical

To start with, the researchers limited the number of spins they were modeling to ensure that the system's behavior could be calculated on a traditional computer. This work showed that the error mitigation procedure worked; once the noise was compensated for, the numbers from the quantum calculation matched those from the classical computation, even out past a dozen time steps. But it was relatively simple to scale the model up to where the classical computer (a 64-core processor with 128 GB of memory) started to struggle.

This happened at the point where the system required 68 qubits to model. From there, the researchers used software that estimated the system's behavior on the classical computer, which allowed it to keep up for longer at the cost of some accuracy. Even so, it was possible to scale up the model's size to where 127 qubits were needed, which is well past the point where classical calculations can keep up.

"The mitigation technique was basically able to run for a larger size than I could do an exact simulation on a classical computer using an exact classical method simulation," IBM's Jay Gambetta told Ars. "So it's outside the reach of exact classical methods. But then in classical methods, you also have approximate methods that can scale more efficiently. [The quantum calculation] was also more accurate than those approximate classical methods."

The researchers estimate that tracing the system's behavior through 20 time steps would require over 400 petabytes of memory.

So, this appears to be a clear case of quantum computers outperforming classical computers on a potentially interesting problem. And Gambetta suggested that showing noise mitigation works on a simple system like the Ising model is essential if we're going to start applying the approach to more complicated problems that could have practical applications. But pushing things further isn't going to simply be a matter of adding more qubits.

The complications

IBM has a quantum processor with over 400 qubits, so why limit things to the 127 qubits used here? There are a couple of reasons for this. IBM chose its smaller, 127-qubit Eagle processor for the work because that has already reached Revision 3, while its larger Osprey processor is still in its first iteration. The two revisions have been used to improve the performance of the qubits, cutting down on the noise that needs to be compensated for.

And that brings us to the second reason: compensating for noise is computationally expensive and needs to be done using classical computers. Doing the sampling of the noise on the quantum computer only took about five minutes. But even for a smaller problem, the full noise mitigation process required four hours; that's compared to eight hours to simply model the system on a classical computer. Still, the scaling was better; a somewhat larger problem required about 30 hours to model, while the noise mitigation took 9.5 hours.

One consequence is that adding enough qubits can also make error mitigation computationally intractable. "Error mitigation still scales exponentially," Gambetta said, "but it's a weaker exponential than the simulation cost."

But IBM thinks that there are two reasons for optimism here. For starters, the research team says the algorithms involved in the error mitigation are "dominated by classical processing delays that stand to be largely eliminated through conceptually straightforward optimizations." So there's likely to be a speedup there. The second is that the time involved scales as a function of the error rates in the quantum hardware—lower those, and it will speed up the classical portion of the calculation.

Gambetta is especially excited about a processor called Heron, which is on IBM's road map for this year. Regarding the speed of gate operations, Gambetta said, "Our preliminary results suggest that [Heron will] be somewhere like five times better than Eagle." Since the possibility that the system suffers decoherence (the loss of its quantum state) is a function of time, less time spent on operations means that the calculation is more likely to be completed before decoherence becomes an issue. Fewer errors also mean a faster error mitigation process.

All of this makes people at IBM optimistic that error mitigation is a route toward performing useful calculations on quantum hardware long before we reach the point where error-corrected qubits are possible. As the paper concludes, the fact that "a noisy quantum processor, even before the advent of fault-tolerant quantum computing, produces reliable expectation values at a scale beyond 100 qubits and non-trivial circuit depth leads to the conclusion that there is indeed merit to pursuing research towards deriving a practical computational advantage from noise-limited quantum circuits."

Nature, 2023. DOI: 10.1038/s41586-023-06096-3  (About DOIs).


John Timmer Senior Science Editor
John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.