arxiv.org/abs/2310.04710
Approximate quantum error correction is a mostly unexplored land, of which we know only a few landmarks. Here they give us a map, and show interesting connections to other areas of physics.
| Twitter | https://twitter.com/decodoku |
| GitHub | https://github.com/quantumjim |
To unleash the potential of quantum computers, noise effects on qubits' performance must be carefully managed. The decoders responsible for diagnosing noise-induced computational errors must use resources efficiently to enable scaling to large qubit counts and cryogenic operation. Additionally, they must operate at speed, to avoid an exponential slowdown in the logical clock rate of the quantum computer. To overcome such challenges, we introduce the Collision Clustering decoder and implement it on FPGA and ASIC hardware. We simulate logical memory experiments using the leading quantum error correction scheme, the surface code, and demonstrate MHz decoding speed - matching the requirements of fast-operating modalities such as superconducting qubits - for surface codes of up to 881 and 1057 qubits on the FPGA and ASIC, respectively. The ASIC design occupies 0.06 mm$^2$ and consumes only 8 mW of power. Our decoder is both highly performant and resource efficient, unlocking a viable path to practically realising fault-tolerant quantum computers.
Autonomous quantum memories are a way to passively protect quantum information using engineered dissipation that creates an "always-on" decoder. We analyze Markovian autonomous decoders that can be implemented with a wide range of qubit and bosonic error-correcting codes, and derive several upper bounds and a lower bound on the logical error rate in terms of correction and noise rates. For many-body quantum codes, we show that, to achieve error suppression comparable to active error correction, autonomous decoders generally require correction rates that grow with code size. For codes with a threshold, we show that it is possible to achieve faster-than-polynomial decay of the logical error rate with code size by using superlogarithmic scaling of the correction rate. We illustrate our results with several examples. One example is an exactly solvable global dissipative toric code model that can achieve an effective logical error rate that decreases exponentially with the linear lattice size, provided that the recovery rate grows proportionally with the linear lattice size.
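In asymptotic shorthand, the headline claim for the dissipative toric code example can be written as follows (the symbols $\gamma$, $A$, $c$ are my notation, introduced for illustration; the abstract states the claim in prose):

```latex
% Dissipative toric code on an L x L lattice: if the recovery rate
% gamma grows linearly with L, the effective logical error rate
% decays exponentially in L (constants A, c > 0 are illustrative).
\[
  \gamma = \Omega(L)
  \quad \Longrightarrow \quad
  \bar{p}_{\mathrm{log}}(L) \le A\, e^{-cL}.
\]
```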
Today in QEC on the arXiv
https://arxiv.org/abs/2308.15520
qLDPC codes can have impressive numbers, but they mean nothing if there is no way to do syndrome measurements without messing everything up.
Here they prove that at least the hypergraph product codes don't have this issue. You can do the syndrome measurements however you like, and it won't lower the effective distance.
Unlike the surface code, quantum low-density parity-check (QLDPC) codes can have a finite encoding rate, potentially lowering the error correction overhead. However, finite-rate QLDPC codes have nonlocal stabilizers, making it difficult to design stabilizer measurement circuits that are low-depth and do not decrease the effective distance. Here, we demonstrate that a popular family of finite-rate QLDPC codes, hypergraph product codes, has the convenient property of distance-robustness: any stabilizer measurement circuit preserves the effective distance. In particular, we prove that the depth-optimal circuit in [Tremblay et al., PRL 129, 050504 (2022)] is also optimal in terms of effective distance.
The GAP package QDistRnd implements a probabilistic algorithm for finding the minimum distance of a quantum low-density parity-check code linear over a finite field GF(q). At each step, several codewords are randomly drawn from a distribution biased toward smaller weights. The corresponding weights are used to update the upper bound on the distance, which eventually converges to the minimum distance of the code. While there is no performance guarantee, an empirical convergence criterion is given to estimate the probability that a minimum weight codeword has been found. In addition, a format for storing matrices associated with q-ary quantum codes is introduced and implemented via the provided import/export functions. The format, MTXE, is based on the well-established MaTrix market eXchange (MTX) Coordinate format developed at NIST, and is designed for full backward compatibility with this format. Thus, MTXE files are readable by any software package which supports MTX.
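As a rough illustration of the random-search idea (not QDistRnd's actual GAP implementation, which works over GF(q) and targets stabilizer codes), here is a binary toy version: random information sets are tried repeatedly, and low-weight rows of the reduced generator matrix tighten the upper bound on the distance.

```python
import numpy as np

def random_distance_estimate(G, trials=1000, rng=None):
    """Toy random-information-set estimate of the minimum distance of a
    binary linear code with generator matrix G over GF(2). Illustrates
    the idea behind QDistRnd; the package itself works over GF(q) and
    handles the quantum (stabilizer) case."""
    rng = np.random.default_rng(rng)
    k, n = G.shape
    best = n  # upper bound on the distance, tightened as we go
    for _ in range(trials):
        # A random column permutation changes which positions end up
        # as the information set in this round.
        M = G[:, rng.permutation(n)] % 2
        row = 0
        for col in range(n):  # Gaussian elimination over GF(2)
            pivots = np.nonzero(M[row:, col])[0]
            if len(pivots) == 0:
                continue
            M[[row, row + pivots[0]]] = M[[row + pivots[0], row]]
            for r in range(k):
                if r != row and M[r, col]:
                    M[r] ^= M[row]
            row += 1
            if row == k:
                break
        # Rows of the reduced matrix are codewords; their weights give
        # candidate upper bounds on the minimum distance.
        weights = M.sum(axis=1)
        best = min(best, int(weights[weights > 0].min()))
    return best

# Example: the [7,4,3] Hamming code; the estimate converges to 3.
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
print(random_distance_estimate(G, trials=200))
```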
We address the problem of performing message-passing-based decoding of quantum LDPC codes under hardware latency limitations. We propose a novel way to do layered decoding that suits quantum constraints and outperforms flooded scheduling, the usual scheduling on parallel architectures. We give a generic construction of layers for hypergraph product codes. In the process, we introduce two new notions: t-covering layers, a generalization of the usual layer decomposition, and a new scheduling method called random order scheduling. Numerical simulations show that the random ordering is of independent interest, as it helps relieve the high error floor typical of message-passing decoders on quantum codes, for both layered and serial decoding, without the need for post-processing.
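A minimal sketch of the random-order idea for message passing (plain min-sum BP with the check-update order reshuffled every iteration; the paper's layered t-covering construction is more structured than this):

```python
import numpy as np

def min_sum_random_order(H, llr_prior, iters=20, rng=None):
    """Min-sum belief propagation on the Tanner graph of parity-check
    matrix H, updating one check at a time in a freshly randomized
    order each iteration. For quantum syndrome decoding, the sign of
    each check update would additionally be flipped when that check's
    syndrome bit is 1."""
    rng = np.random.default_rng(rng)
    m, n = H.shape
    c2v = np.zeros((m, n))  # check-to-variable messages
    for _ in range(iters):
        for c in rng.permutation(m):  # random order over checks
            vs = np.nonzero(H[c])[0]
            # variable-to-check: prior plus all other checks' messages
            v2c = llr_prior[vs] + c2v[:, vs].sum(axis=0) - c2v[c, vs]
            signs = np.where(v2c < 0, -1.0, 1.0)
            mags = np.abs(v2c)
            total_sign = signs.prod()
            for i, v in enumerate(vs):
                # product of the *other* signs, min of the *other* mags
                c2v[c, v] = total_sign * signs[i] * np.delete(mags, i).min()
    posterior = llr_prior + c2v.sum(axis=0)
    return (posterior < 0).astype(int)  # hard decision per variable

# Example: 3-bit repetition code checks, one suspicious bit.
H = np.array([[1, 1, 0], [0, 1, 1]])
llr = np.array([2.0, -0.5, 2.0])  # middle bit looks flipped
print(min_sum_random_order(H, llr))  # -> [0 0 0]
```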
We consider some questions related to codes constructed using various graphs, in particular focusing on graphs which are not lattices in two or three dimensions. We begin by considering Floquet codes which can be constructed using "emergent fermions". Here, we are considering codes that in some sense generalize the honeycomb code [1] to more general, non-planar graphs. We then consider a class of these codes that is related to (generalized) toric codes on $2$-complexes. For (generalized) toric codes on $2$-complexes, the following question arises: can the distance of these codes grow faster than square-root? We answer the question negatively, and remark on recent systolic inequalities [2]. We then turn to the case of planar codes with vacancies, or "dead qubits", and consider the statistical mechanics of decoding in this setting. Although we do not prove a threshold, our results should be asymptotically correct for low error probability and high degree decoding graphs (high degree taken before low error probability). In an appendix, we discuss a toy model of vacancies in planar quantum codes, giving a phenomenological discussion of how errors occur when "super-stabilizers" are not measured, and in a separate appendix we discuss a relation between Floquet codes and chain maps.
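In asymptotic notation, the negative answer reads (my paraphrase of the abstract, with $n$ the number of qubits):

```latex
\[
  d = O\!\left(\sqrt{n}\right)
  \qquad \text{for (generalized) toric codes on $2$-complexes.}
\]
```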
Achieving fault-tolerance will require a strong relationship between the hardware and the protocols used. Different approaches will therefore naturally have tailored proof-of-principle experiments to benchmark progress. Nevertheless, repetition codes have become a commonly used basis of experiments that allow cross-platform comparisons. Here we propose methods by which repetition code experiments can be expanded and improved, while retaining cross-platform compatibility. We also consider novel methods of analyzing the results, which offer more detailed insights than simple calculation of the logical error rate.
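For context, the basic quantity reported in such experiments is straightforward to compute; here is a minimal single-round toy (iid bit flips, majority-vote decoding; real experiments are multi-round and circuit-level, and the paper argues for analyses richer than this one number):

```python
import numpy as np

def logical_error_rate(d, p, shots=100_000, rng=None):
    """Monte Carlo estimate of the logical error rate of a distance-d
    bit-flip repetition code under iid flips with probability p,
    decoded by majority vote."""
    rng = np.random.default_rng(rng)
    flips = rng.random((shots, d)) < p
    # a logical error occurs when more than half the bits flip
    return (flips.sum(axis=1) > d // 2).mean()

for d in (3, 5, 7):
    print(d, logical_error_rate(d, p=0.05))
```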
Leakage errors, in which a qubit is excited to a level outside the qubit subspace, represent a significant obstacle in the development of robust quantum computers. We present a computationally efficient simulation methodology for studying leakage errors in quantum error correcting codes (QECCs) using tensor network methods, specifically Matrix Product States (MPS). Our approach enables the simulation of various leakage processes, including thermal noise and coherent errors, without approximations (such as the Pauli twirling approximation) that can lead to errors in the estimation of the logical error rate. We apply our method to two QECCs: the one-dimensional (1D) repetition code and a thin $3\times d$ surface code. By leveraging the small amount of entanglement generated during the error correction process, we are able to study large systems, up to a few hundred qudits, over many code cycles. We consider a realistic noise model of leakage relevant to superconducting qubits to evaluate code performance and a variety of leakage removal strategies. Our numerical results suggest that appropriate leakage removal is crucial, especially when the code distance is large.
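As a small illustration of modelling a leakage process as a channel on an enlarged local space (a generic qutrit toy with an assumed leakage probability p_leak, not the paper's MPS machinery or its superconducting-qubit noise model):

```python
import numpy as np

# Toy Kraus description of leakage on a qutrit (qubit levels |0>,|1>
# plus leaked level |2>): with probability p_leak, population in |1>
# is transferred out of the qubit subspace to |2>.
p_leak = 0.01
K0 = np.diag([1.0, np.sqrt(1 - p_leak), 1.0])      # no leakage event
K1 = np.zeros((3, 3)); K1[2, 1] = np.sqrt(p_leak)  # |1> -> |2>

# completeness check: sum_k K_k^dagger K_k = identity
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(3))

def apply_channel(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.outer([0, 1, 0], [0, 1, 0])        # qutrit in state |1><1|
print(np.real(np.diag(apply_channel(rho))))  # leaked weight ~ p_leak
```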