Today, Google announced a demonstration of quantum error correction on its next-generation quantum processor, Sycamore. The iteration on Sycamore isn’t dramatic—it’s the same number of qubits, just with better performance. And getting quantum error correction to work isn’t really the news, either—the company managed that a couple of years ago.
Instead, the signs of progress are a bit more subtle. In earlier generations of processors, qubits were error-prone enough that adding more of them to an error-correction scheme introduced more errors than the scheme could remove. With this new iteration, adding more qubits can actually drive the error rate down.
We can fix that
The functional unit of a quantum processor is a qubit, which is anything—an atom, an electron, a hunk of superconducting electronics—that can be used to store and manipulate a quantum state. The more qubits you have, the more capable the machine is. By the time you have access to several hundred, it’s thought that you can perform calculations that would be difficult or impossible to do on traditional computer hardware.
That is, assuming all the qubits behave correctly. Which, in general, they don’t. As a result, throwing more qubits at a problem makes it more likely you’ll encounter an error before a calculation can complete. So, we now have quantum computers with more than 400 qubits, but trying to do any calculation that required all 400 would fail.
Creating an error-corrected logical qubit is generally accepted as the solution to this problem. This creation process involves distributing a quantum state among a set of connected qubits. (In terms of computational logic, all these hardware qubits can be addressed as a single unit, hence “logical qubit.”) Error correction is enabled by additional qubits neighboring each member of the logical qubit. These can be measured to infer the state of each qubit that’s part of the logical qubit.
Now, if one of the hardware qubits that’s part of the logical qubit has an error, the fact that it’s only holding a fraction of the information of the logical qubit means that the quantum state isn’t wrecked. And measuring its neighbors will reveal the error and allow a bit of quantum manipulation to fix it.
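The idea of spreading information out and using neighboring measurements to locate an error can be illustrated with a toy classical analogue—a three-bit repetition code. This is a sketch for intuition only, not Google's surface code: real quantum error correction must detect errors without directly measuring the encoded state, which is what the parity checks below stand in for.

```python
import random

# Toy 3-bit repetition code: one logical bit spread across three
# physical bits. Pairwise parity checks play the role of the
# neighboring measurement qubits—they locate a single flipped bit
# without reading out the logical value itself.

def encode(bit):
    return [bit, bit, bit]

def syndrome(code):
    # Two parity checks between neighboring bits.
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    # Each syndrome pattern points at the single bit that flipped.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(code))
    if flip is not None:
        code[flip] ^= 1
    return code

def decode(code):
    return max(set(code), key=code.count)  # majority vote

code = encode(1)
code[random.randrange(3)] ^= 1      # inject one random bit-flip error
assert decode(correct(code)) == 1   # the logical bit survives
```

Note that the syndrome identifies *which* bit flipped without ever revealing whether the logical value is 0 or 1—the same trick, in quantum form, is what lets error correction work without collapsing the stored quantum state.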
The more hardware qubits you dedicate to a logical qubit, the more robust it should be. There are just two problems right now. One is that we don’t have hardware qubits to spare. Running a robust error-correction scheme on the processors with the highest qubit counts would leave us with fewer than 10 logical qubits to do a calculation with. The second issue is that the error rates of the hardware qubits are too high for any of this to work. Adding existing qubits to a logical qubit doesn’t make it more robust; it makes it more likely to have so many errors at once that they can’t be corrected.
The same, but different
Google’s response to these issues was to build a new generation of its Sycamore processor that had the same number and layout of hardware qubits as its last one. But the company focused on reducing the error rate of individual qubits so that it could do more complicated operations without experiencing a failure. This is the hardware Google used to test error-corrected logical qubits.
The paper describes tests of two different methods. In both, the data was stored on a square grid of qubits. Each of those had neighboring qubits that were measured to implement error correction. The only difference was the size of the grid. In one method, it was three qubits by three qubits; in the second, it was five by five. The former required 17 hardware qubits in total; the latter 49 qubits, or nearly three times as many.
The research team performed a wide variety of measurements of performance. But the key question was simple: Which logical qubits had the lower error rate? If the errors in hardware qubits dominated, you’d expect tripling the number of hardware qubits to increase the error rate. But if Google’s performance tweaks improved hardware qubits sufficiently, the larger, more robust layout should drop the error rate.
The larger scheme won out, but it was a close thing. Overall, the larger logical qubit had an error rate of 2.914 percent versus 3.028 percent in the smaller one. That’s not much of an advantage, but it’s the first time any advantage of this sort has been demonstrated. And it has to be emphasized that either error rate is too high to use one of these logical qubits in a complex calculation. Google estimates that the performance of the hardware qubits would need to improve by another 20 percent or more to provide a clear advantage to large logical qubits.
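Why a modest hardware improvement would open up a clear gap can be seen with a standard back-of-the-envelope scaling relation from the error-correction literature: below a threshold error rate, a larger code's logical error rate falls roughly as a power of the ratio between the physical error rate and the threshold, with the exponent growing with code size. The threshold value and physical error rates below are assumed placeholders for illustration, not figures from the paper.

```python
# Rough scaling intuition (not the paper's model): below the
# threshold p_th, a distance-d code's logical error rate falls
# roughly as (p / p_th) ** ((d + 1) / 2). The 3x3 grid is distance
# 3; the 5x5 grid is distance 5. p_th = 1% is an assumed value.

def logical_error(p, d, p_th=0.01):
    return (p / p_th) ** ((d + 1) / 2)

# Physical qubits barely below threshold: the bigger code is only
# marginally better, as in the near-tie Google reported.
near = logical_error(0.0095, d=5) / logical_error(0.0095, d=3)

# Physical qubits ~20 percent better: the gap opens up sharply.
improved = logical_error(0.0076, d=5) / logical_error(0.0076, d=3)

assert near > improved  # better hardware widens distance-5's lead
```

The point of the sketch is the exponent: because bigger codes suppress errors more aggressively per unit of hardware improvement, a relatively small gain in physical-qubit quality translates into a decisive advantage for the larger grid.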
In an accompanying press package, Google suggests it will get to that point—running a single, long-lived logical qubit—in “2025-plus.” At that point, it’ll face many of the same problems IBM is currently working on: There are only so many hardware qubits you can fit on a chip, so some way of networking large numbers of chips into a single compute unit will have to be sorted out. Google declined to assign a date for when it will test solutions there. (IBM indicates it will test various approaches this year and next.)
So, to be clear, a 0.11 percent improvement in error correction that requires roughly half of Google’s processor to host a single qubit doesn’t represent any kind of computational advance. We’re not closer to breaking encryption than we were yesterday. But it does show that we’re already in a place where our qubits are good enough to avoid making things worse—and that we’ve gotten there long before people have run out of ideas for how to make hardware qubits perform better. And that suggests the remaining technical hurdles will increasingly lie somewhere other than the qubit hardware itself.
Nature, 2023. DOI: 10.1038/s41586-022-05434-1 (About DOIs).