Error-correcting a quantum computer can mean processing 100TB every second.

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece being addressed is the huge amount of classical computation that will be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations or simply over time. No matter the technology—cold atoms, superconducting transmons, whatever—these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out performing almost any useful computation directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing those errors to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.
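The article doesn’t name a specific error-correction code, but the surface code—the most commonly discussed candidate—gives a feel for the overhead. A distance-d surface code uses d² data qubits plus d²−1 measurement qubits per logical qubit:

```python
def physical_qubits(d):
    """Hardware qubits for one distance-d surface-code logical qubit:
    d*d data qubits plus d*d - 1 measurement (ancilla) qubits."""
    return 2 * d * d - 1

# Overhead grows quadratically with the code distance d,
# which sets how many physical errors the code can tolerate.
for d in (3, 5, 7, 11):
    print(d, physical_qubits(d))
```

A distance-5 code already needs 49 hardware qubits per logical qubit, in line with the “dozens” figure above.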

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

That data also has to be processed in real time; otherwise, computations will get held up waiting for error correction to happen. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means completing the processing of that data—possibly terabytes of it—at a frequency of around a megahertz. Riverlane was founded to provide hardware capable of handling it.
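Brierley’s headline figure can be reproduced with back-of-envelope arithmetic, under an assumption of mine (not stated in the article) that each qubit measurement produces on the order of 100 bytes of raw digitized readout data rather than a single bit:

```python
physical_qubits = 1_000_000        # "a million physical qubits"
cycles_per_second = 1_000_000      # a syndrome round roughly every microsecond
bytes_per_measurement = 100        # assumed raw digitized readout per qubit

bytes_per_second = physical_qubits * cycles_per_second * bytes_per_measurement
print(bytes_per_second / 1e12, "TB/s")  # -> 100.0 TB/s
```

Even the fully digested syndrome stream—one bit per qubit per cycle—would be about 0.125 TB/s, and it still has to be decoded at megahertz rates.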

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a typical field-programmable gate array (FPGA) from Xilinx, where it occupies only about 5 percent of the chip but can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).
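The paper’s actual algorithm isn’t reproduced here, but the general idea behind clustering-style decoders—group nearby syndrome “defects” and correct each cluster locally—can be sketched with a toy pass over defect positions in a 1-D repetition code. This is an illustration of the family of techniques, not Riverlane’s Collision Clustering algorithm:

```python
def cluster_defects(defects, max_gap=2):
    """Toy clustering: group sorted defect positions whose spacing is at
    most max_gap, the way clustering decoders grow and merge clusters of
    syndrome defects before correcting each cluster locally.
    Illustration only; not the paper's Collision Clustering decoder."""
    clusters = []
    for pos in sorted(defects):
        if clusters and pos - clusters[-1][-1] <= max_gap:
            clusters[-1].append(pos)   # close enough: merge into last cluster
        else:
            clusters.append([pos])     # too far away: start a new cluster
    return clusters

print(cluster_defects([1, 2, 7, 8, 9, 15]))  # -> [[1, 2], [7, 8, 9], [15]]
```

The appeal of this style of decoder for hardware is that it needs only simple, local comparisons, which is what lets it fit in a small slice of an FPGA.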

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information to other parts of the system to act on. So it’s a narrowly focused solution. But it’s also quite flexible in that it works with various error-correction codes. Critically, it also integrates with systems designed to control qubits based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics; how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘They’re using the same control electronics.'” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field. (Brierley said it was a Xilinx radiofrequency system-on-a-chip built for 5G base station prototyping.) That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.

What’s next?

But on Tuesday, the company announced a roadmap that will see it scale up this chip rapidly. “Right now we’ve got a single [quantum error-correction] chip that supports a single logical qubit on up to a thousand physical qubits,” Brierley told Ars. “The next generation will support 10,000 physical qubits. And that’s a big challenge—there’s a lot of engineering to do. That gets us to the first generation of error-corrected quantum computers.” From there, the company expects to continue boosting capacity by a factor of 10 every 12 to 18 months, he said.

The arXiv paper also noted that the algorithm currently remembers the entire data stream but will ultimately need to be modified to “forget” older data and operate only on a narrower window of time. But the system is designed so that individual functional units can be combined on a single die (Brierley termed these “chiplets”) and, once the complexity gets high enough, multiple dies can be combined. Brierley said that the algorithm can be run in parallel on the same data stream, as long as there’s some temporal overlap between the signals that different chiplets are processing.
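The windowed, parallel decoding Brierley describes can be pictured as chiplets each taking a slice of the syndrome stream, with consecutive slices overlapping in time so that an error chain straddling a boundary is seen whole by at least one decoder. A minimal sketch, with window size and overlap as illustrative choices rather than Riverlane’s parameters:

```python
def overlapping_windows(stream, window=8, overlap=2):
    """Split a syndrome stream into fixed-size windows that share
    `overlap` items with their neighbor, so each decoder instance
    sees any error chain that crosses a window boundary."""
    step = window - overlap
    return [stream[i:i + window]
            for i in range(0, max(len(stream) - overlap, 1), step)]

rounds = list(range(20))  # 20 syndrome rounds, stand-ins for real data
for w in overlapping_windows(rounds):
    print(w)
```

Each window can then be handed to a separate decoder instance; the overlap region is what lets their results be stitched back together consistently.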

Again, Riverlane’s interest in this area comes from the fact that these are problems everyone in the quantum computing field will have to solve in order to move forward with error-corrected qubits. And, as Brierley acknowledged, there’s nothing to stop any of them from creating their own solution. But he described a strong personal motivation for wanting to see this issue solved:

“I was giving a talk at a conference on a new [quantum] algorithm that I’d developed, and I was very proud of this new thing. And there was a straw poll of the audience of who thought there would be a useful quantum computer in five years, 10 years, 15 years. And about a third of the audience voted for never, there would never be a useful quantum computer. And I was a bit shocked. I was like, ‘well, I’ve just invented an algorithm for a computer that would never exist.’  What am I doing?”

Now, he’s considerably more optimistic. “I think we’ll see the first long-lived logical qubit in the next 12 months, and we’ll quickly get to hundreds of logical qubits in two to three years,” he told Ars. For a technology that some have derided as being constantly over the horizon, that is a very short timeframe.