This year’s Quantum Computing Theory in Practice (QCTiP) conference, hosted by Riverlane at Jesus College, Cambridge, welcomed more than 250 experts from academia and industry around the world for three days of talks, posters and knowledge sharing.

Across the programme, there was an overriding emphasis on writing algorithms and preparing the quantum operating systems and control systems needed for early fault-tolerant quantum computing.

We’re now on the path toward large, error-corrected quantum computers, with speakers and attendees thinking beyond NISQ and focusing on quantum’s defining challenge: quantum error correction.

The conference was packed with high-calibre research across five streams, so it’s difficult to pick out every insight uncovered, both during the formal talks and in my informal conversations. But here are some highlights.

1. Shadow tomography 

Shadow tomography was a key theme across many of the talks and is very much in vogue right now. It is a technique for measuring quantum systems far more efficiently than full tomography, which it achieves by combining randomised-algorithm proof techniques with a restriction to “only learning a limited set of important quantities”.
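As a concrete (and deliberately tiny) illustration of the idea, here is a sketch of the single-qubit “classical shadows” estimator of Huang, Kueng and Preskill: measure in a randomly chosen Pauli basis, invert the measurement channel to get a snapshot, and average snapshot estimates of the observable you care about. This is not the protocol from any particular talk, and the state, observable and shot count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli X and the single-qubit rotations that map X-, Y- and Z-basis
# measurements onto a computational-basis (Z) measurement
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Hgate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)     # measure X
HSdag = np.array([[1, -1j], [1, 1j]], dtype=complex) / np.sqrt(2)   # measure Y
basis_rotations = [Hgate, HSdag, I2]                                # last one: measure Z

# an arbitrary single-qubit state to learn (a mixture chosen purely for illustration)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = 0.7 * np.outer(plus, plus.conj()) + 0.3 * np.diag([1.0, 0.0]).astype(complex)

def snapshot(rho):
    """One classical-shadow snapshot: random Pauli basis, one measurement,
    then the inverse of the measurement channel, 3 U†|b><b|U - I."""
    U = basis_rotations[rng.integers(3)]
    probs = np.real(np.diag(U @ rho @ U.conj().T))   # Born-rule outcome probabilities
    b = rng.choice(2, p=probs / probs.sum())
    ket = np.zeros(2, dtype=complex)
    ket[b] = 1.0
    return 3 * (U.conj().T @ np.outer(ket, ket.conj()) @ U) - I2

shots = 20000
estimate = np.mean([np.real(np.trace(X @ snapshot(rho))) for _ in range(shots)])
print("shadow estimate of <X>:", round(estimate, 3), " exact:", np.real(np.trace(X @ rho)))
```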

Let me focus on one personal highlight in this area. Balint Koczor, a research fellow at Oxford University, spoke about Algorithmic Shadow Spectroscopy and how to combine it with statistical phase estimation, presenting some compelling evidence that this combination might use fewer ‘shots’ (i.e. repetitions of the same algorithm).

Balint’s work was interesting because, when we do phase estimation, we normally only get one bit of information per shot. However, by measuring all qubits, Balint said we can now get N bits per shot, and all of these contain some phase information.

So, intuitively, you should be able to do something useful with this information. This is an interesting premise but one that now needs scaling up from small to larger systems. I look forward to hearing more about shadow tomography and how this area progresses.

2. Efficient quantum algorithms for imitating thermal systems in nature

András Gilyén, a Marie Curie fellow at the Rényi Institute, talked about a Gibbs state preparation algorithm. Efficient Gibbs state preparation has been a long-standing question in the literature, and András presented some elegant solutions to several of the major obstacles, including how to implement a Metropolis update with a unitary and technical aspects of how to imitate the weak-coupling limit.

The argument was that this approach can “imitate nature”. But there is still a risk that some systems will have a long mixing time, meaning they take a long time to reach their natural, low-energy state; in that case, the Gibbs preparation algorithm will also take a long time. Glass is a cool example of a long mixing time: all glass slowly changes into a crystal, it will just take longer than the age of the universe to complete the transformation.
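To make the mixing-time point concrete, here is a purely classical analogue (not András’s quantum algorithm): a standard Metropolis sampler for a small Ising chain, whose stationary distribution is the Gibbs distribution. Early samples still remember the random starting configuration; only once the chain has mixed do they reflect the low-energy equilibrium. The chain length, temperature and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_ising(n=10, beta=1.0, steps=20000):
    """Classical Metropolis sampler for a 1D Ising chain with energy
    E(s) = -sum_i s_i * s_{i+1}; its stationary distribution is the
    Gibbs distribution exp(-beta * E) / Z."""
    spins = rng.choice([-1, 1], size=n)
    energies = []
    for _ in range(steps):
        i = rng.integers(n)
        # energy change from flipping spin i (open boundary conditions)
        neighbours = (spins[i - 1] if i > 0 else 0) + (spins[i + 1] if i < n - 1 else 0)
        delta_E = 2 * spins[i] * neighbours
        # Metropolis acceptance rule
        if delta_E <= 0 or rng.random() < np.exp(-beta * delta_E):
            spins[i] *= -1
        energies.append(-np.sum(spins[:-1] * spins[1:]))
    return np.array(energies)

E = metropolis_ising()
print("mean energy over the first 1,000 steps:", E[:1000].mean())
print("mean energy over the last 1,000 steps: ", E[-1000:].mean())
```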

3. Power requirements for quantum error corrected computers

The industry panel highlighted the energy requirements for large-scale quantum computing, focusing on superconducting architectures where cooling to the millikelvin regime is a current requirement.

An interesting comment, which we discussed and verified later, was that scaling up current superconducting technologies (without any improvements in power efficiency or control systems) would mean we need a whole nuclear plant to power an error-corrected quantum computer capable of solving useful problems.

While this paper provides some estimates, there are still many unanswered questions in this area, and improvements are needed to address quantum computing’s power requirements as qubit numbers increase.

4. Low-density parity check quantum error correction codes

Nicolas Delfosse, principal researcher at Microsoft Quantum, spoke about two-dimensional implementations of quantum Low-Density Parity Check (qLDPC) codes. It was an impressive talk, presenting some of the first work to study circuit-level noise with qLDPC codes.

The researchers obtained their results by running belief propagation alone, a decoding algorithm widely used for classical codes, without any additional post-processing. They have yet to do a detailed analysis to identify the best circuits (those that would lead to fewer bad hook errors). Even so, the research reveals a potential advantage for qLDPC codes, and there is lots of headroom for improvement here.
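For readers who haven’t met it, here is a minimal sketch of the kind of belief propagation decoder being referred to, run on a small classical code rather than the qLDPC codes from the talk: sum-product message passing on the Tanner graph of a parity-check matrix, driven by a measured syndrome. The error rate, iteration cap and toy [7,4] Hamming code are placeholder choices.

```python
import numpy as np

def bp_decode(H, syndrome, p=0.05, max_iter=20):
    """Sum-product belief propagation for syndrome decoding.
    H: (m, n) binary parity-check matrix; syndrome: length-m binary vector;
    p: assumed independent bit-flip probability."""
    m, n = H.shape
    prior = np.log((1 - p) / p)                 # prior log-likelihood ratio per bit
    msg_v2c = np.where(H == 1, prior, 0.0)      # variable-to-check messages
    msg_c2v = np.zeros((m, n))                  # check-to-variable messages
    guess = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # check-node update; a fired syndrome bit flips the sign of the message
        for i in range(m):
            bits = np.flatnonzero(H[i])
            for j in bits:
                others = [b for b in bits if b != j]
                prod = np.clip(np.prod(np.tanh(msg_v2c[i, others] / 2.0)),
                               -0.999999, 0.999999)
                sign = -1.0 if syndrome[i] else 1.0
                msg_c2v[i, j] = sign * 2.0 * np.arctanh(prod)
        # variable-node update and tentative decision
        llr = prior + msg_c2v.sum(axis=0)
        guess = (llr < 0).astype(int)
        if np.array_equal(H @ guess % 2, syndrome):
            return guess                        # correction reproduces the syndrome
        for j in range(n):
            checks = np.flatnonzero(H[:, j])
            for i in checks:
                msg_v2c[i, j] = prior + msg_c2v[checks, j].sum() - msg_c2v[i, j]
    return guess                                # BP failed to converge

# toy example: the [7,4] Hamming code with a single bit-flip on bit 2
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
error = np.zeros(7, dtype=int)
error[2] = 1
print(bp_decode(H, H @ error % 2))              # expected: [0 0 1 0 0 0 0]
```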

5. Fast, open-source decoders for quantum error correction codes

UCL quantum researcher Oscar Higgott’s PyMatching package is the simplest open-source decoding tool out there. Recently, he teamed up with Craig Gidney from Google’s quantum team to build PyMatching 2, with an impressive speed-up that will accelerate research in the field.

Oscar presented the ideas and techniques behind PyMatching 2. It’s a tool that Riverlane has been using internally, and I know many others in the community use it already.
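For anyone who hasn’t tried it, here is a minimal usage sketch based on PyMatching’s documented interface: build a Matching object from a parity-check matrix, then decode a syndrome. It assumes PyMatching 2 is installed (pip install pymatching), and the distance-5 repetition code and single bit-flip are just toy choices.

```python
import numpy as np
import pymatching

# parity-check matrix of a distance-5 repetition code: each check compares
# two neighbouring bits
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
matching = pymatching.Matching(H)          # build the matching graph from H

error = np.array([0, 0, 1, 0, 0])          # a single bit-flip on qubit 2
syndrome = H @ error % 2                   # the two checks either side of it fire
correction = matching.decode(syndrome)
print(correction)                          # expected: [0 0 1 0 0]
```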

6. Parallelised methods for quantum error correction

Of course, I was also happy to see some Riverlane work represented, including a talk by our senior quantum scientist Luka Skoric on parallel window decoding.

Luka shared the methodology laid out in our recent paper on this topic, which shows how to parallelise decoding along the time direction. This work highlights the link between decoder speed and the logical clock speed of quantum computers.
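Purely as a schematic of the windowing idea, and not the scheme from the paper (which coordinates neighbouring windows far more carefully), the sketch below chops a long stream of syndrome rounds into overlapping windows, decodes them in parallel worker processes, and commits only each window’s core so the buffer regions absorb boundary effects. The window sizes and the placeholder decode_window function are hypothetical.

```python
from concurrent.futures import ProcessPoolExecutor

WINDOW_COMMIT = 10    # rounds whose corrections we keep from each window
WINDOW_BUFFER = 5     # extra rounds on each side that absorb boundary effects

def decode_window(rounds):
    """Placeholder inner decoder: returns one 'correction' label per round."""
    return [f"correction({r})" for r in rounds]

def parallel_window_decode(all_rounds):
    starts = list(range(0, len(all_rounds), WINDOW_COMMIT))
    windows = [all_rounds[max(s - WINDOW_BUFFER, 0): s + WINDOW_COMMIT + WINDOW_BUFFER]
               for s in starts]
    # decode every window independently, in parallel
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(decode_window, windows))
    committed = []
    for s, result in zip(starts, results):
        offset = s - max(s - WINDOW_BUFFER, 0)        # skip the leading buffer rounds
        committed.extend(result[offset: offset + WINDOW_COMMIT])
    return committed

if __name__ == "__main__":
    rounds = list(range(100))                         # stand-in for 100 syndrome rounds
    print(len(parallel_window_decode(rounds)), "rounds committed")
```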

If you couldn’t make QCTiP or would like to revisit any of the presentations, all of the talks were professionally recorded and are now available on our YouTube channel here.