Physics World
25 Mar 2024
Steve Brierley argues that quantum computers must implement comprehensive error-correction techniques before they can become fully useful to society
“There are no persuasive arguments indicating that commercially viable applications will be found that do not use quantum error-correcting codes and fault-tolerant quantum computing.” So stated the Caltech physicist John Preskill during a talk at the end of 2023 at the Q2B23 meeting in California. Quite simply, anyone who wants to build a practical quantum computer will need to find a way to deal with errors.
Quantum computers are getting ever more powerful, but their fundamental building blocks – quantum bits, or qubits – are highly error prone, limiting their widespread use. It is not enough to simply build quantum computers with more and better qubits. Unlocking the full potential of quantum-computing applications will require new hardware and software tools that can control inherently unstable qubits and comprehensively correct system errors 10 billion times or more per second.
Preskill’s words essentially announced the dawn of the so-called Quantum Error Correction (QEC) era. QEC is not a new idea, and firms have for many years been developing technologies to protect the information stored in qubits from errors and decoherence caused by noise. What is new, however, is giving up on the idea that today’s noisy intermediate-scale quantum (NISQ) devices could outperform classical supercomputers and run applications that are currently impossible.
Sure, NISQ – a term that was coined by Preskill – was an important stepping stone on the journey to fault tolerance. But the quantum industry, investors and governments must now realize that error correction is quantum computing’s defining challenge.
A matter of time
QEC has seen unprecedented progress in the last year alone. In 2023 Google demonstrated that a 17-qubit system could recover from a single error and a 49-qubit system from two errors (Nature 614 676). Amazon released a chip that suppressed errors by a factor of 100, while IBM scientists discovered a new error-correction scheme that works with 10 times fewer qubits (arXiv:2308.07915). Then, at the end of the year, Harvard University’s quantum spin-out QuEra produced the largest number of error-corrected qubits yet.
Decoding – working out from streams of measurement data which errors have occurred, so that many unreliable physical qubits can behave as one or more reliable “logical” qubits – is a core QEC technology. That’s because large-scale quantum computers will generate terabytes of such data every second, and it has to be decoded as fast as it is acquired to stop errors propagating and rendering calculations useless. If we don’t decode fast enough, we will be faced with an exponentially growing backlog of data.
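To make concrete what a decoder actually does, here is a deliberately simple sketch in Python – a toy illustration based on the textbook three-qubit repetition code, not Riverlane’s technology – in which two parity (“syndrome”) checks are used to infer and undo the most likely bit flip:

# Toy decoder for the three-qubit bit-flip repetition code.
# Logical 0 is encoded as 000 and logical 1 as 111; the decoder's job is to
# infer the most likely single bit flip from two parity ("syndrome") checks.

def measure_syndrome(codeword):
    """Parity checks between qubits (0,1) and (1,2)."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

# Look-up-table decoder: each syndrome points at the most likely error.
CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # qubit 0 most likely flipped
    (1, 1): 1,     # qubit 1 most likely flipped
    (0, 1): 2,     # qubit 2 most likely flipped
}

def decode(codeword):
    """Apply the correction suggested by the syndrome and return the result."""
    flip = CORRECTION[measure_syndrome(codeword)]
    corrected = list(codeword)
    if flip is not None:
        corrected[flip] ^= 1
    return corrected

print(decode([0, 1, 0]))  # a flip on the middle qubit is detected and undone -> [0, 0, 0]

A real decoder faces the same inference problem for vastly larger codes, and has to solve it millions of times per second – which is exactly where the backlog problem comes from.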
My own company – Riverlane – last year introduced the world’s most powerful quantum decoder. It is tackling this backlog issue, but there is still a lot more work to do. The company is currently developing “streaming decoders” that can process continuous streams of measurement results as they arrive, rather than after an experiment has finished. And even once we have hit that target, decoders are just one aspect of QEC – we also need high-accuracy, high-speed “control systems” to read and write the qubits.
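To show what “streaming” means in practice, here is a purely illustrative Python sketch – the function names are invented for this example and the decoding step is a stand-in, not Deltaflow’s interface. A batch decoder waits for the whole experiment before processing anything, while a streaming decoder folds in each round of syndrome data the moment the control system delivers it:

# Illustrative contrast between "batch" and "streaming" decoding.
# decode_round() is a placeholder for any single-round decoding step.

def decode_round(syndrome, state):
    """Fold one round of syndrome bits into a running correction (toy logic)."""
    return state ^ (sum(syndrome) % 2)

def batch_decode(all_syndromes):
    # Wait until the experiment ends, then decode everything in one go.
    state = 0
    for syndrome in all_syndromes:
        state = decode_round(syndrome, state)
    return state

def streaming_decode(syndrome_stream):
    # Process each round as soon as the hardware delivers it, so the
    # correction is always up to date and no backlog can accumulate.
    state = 0
    for syndrome in syndrome_stream:  # e.g. a generator fed by the control system
        state = decode_round(syndrome, state)
        yield state

Keeping the correction up to date round by round is what stops the backlog from forming in the first place.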
As quantum computers continue to scale, these decoder and control systems must work together to produce error-free logical qubits. By 2026 Riverlane aims to have built an adaptive, or real-time, decoder. Today’s machines are capable of only a few hundred error-free operations, but these future decoders will need to work with quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp).
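A quick back-of-envelope calculation – a generic rule of thumb, not a Riverlane specification – shows why a MegaQuOp is such a demanding target: to have a good chance of completing a million operations without a single logical failure, each operation must fail far less often than one time in a million.

# Back-of-envelope: how reliable must a logical operation be for a MegaQuOp machine?
# Assumes the simple rule of thumb that the chance of at least one logical failure
# over N operations is roughly N * p_logical (valid when p_logical is small).

target_ops = 1_000_000      # a MegaQuOp: one million logical operations
acceptable_failure = 0.01   # tolerate a 1% chance that the whole run is spoiled

p_logical_required = acceptable_failure / target_ops
print(f"Logical error rate per operation must be below ~{p_logical_required:.0e}")
# -> ~1e-08, orders of magnitude below today's physical error rates of roughly 1e-3,
#    which is why each logical qubit needs many physical qubits and constant decoding.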
Riverlane is not alone in such endeavours, and other quantum companies are now prioritizing QEC. IBM, for example, has not previously focused on QEC, concentrating instead on building more and better qubits. But the firm’s 2033 quantum roadmap states that it aims to build a 1000-qubit machine by the end of the decade that is capable of useful computations – such as simulating the workings of catalyst molecules.
QuEra, meanwhile, recently unveiled a roadmap that also prioritizes QEC, while the UK’s National Quantum Strategy aims to build quantum computers capable of running a trillion error-free operations (a TeraQuOp) by 2035. Other nations have published similar plans, and a 2035 target feels achievable, partly because the quantum-computing community is starting to aim for smaller, incremental – but just as ambitious – goals.
What really excites me about the UK’s National Quantum Strategy is the goal to have a MegaQuOp machine by 2028. Again, this is a realistic target – in fact, I’d even argue that we’ll reach the MegaQuOp regime sooner, which is why Riverlane’s QEC solution, Deltaflow, will be ready to work with these MegaQuOp machines by 2026. We don’t need any radically new physics to build a MegaQuOp quantum computer – and such a machine will help us better understand and profile quantum errors.
Once we understand these errors, we can start to fix them and move toward TeraQuOp machines. The TeraQuOp is also a floating target – one where improvements both in QEC and elsewhere could see the 2035 goal delivered a few years earlier.
It is only a matter of time before quantum computers are useful for society. And now that we have a co-ordinated focus on quantum error correction, we will reach that tipping point sooner rather than later.