Fault tolerant quantum computing
Classification of quantum processors
From Wikipedia, the free encyclopedia
In quantum information, fault-tolerant quantum computing (FTQC) is a regime of quantum processors that are both large-scale and that effectively incorporate quantum error correction to achieve arbitrarily low error rates (i.e. their logical error rate is much lower than their physical error rate).[1][2]
Full-FTQC processors are theoretically possible, but have not yet been realized experimentally. They are often seen as the primary end goal of quantum processor development, and are contrasted with existing noisy intermediate-scale quantum (NISQ) processors, whose noise and decoherence prevent scalable error correction.
One way FTQC devices can be realized is by grouping together multiple physical qubits to create a single logical qubit, and using error correction methods such as the surface code so that the combined system is fault-tolerant.[3] Proposed FTQC devices generally include hundreds of logical (error-corrected) qubits, which would mean thousands of physical qubits at minimum.[4][5]
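The basic trade of many physical qubits for one better-protected logical qubit can be illustrated with a classical bit-flip repetition code. This is only a toy sketch (real quantum codes must also correct phase errors and cannot simply copy quantum states), with illustrative parameters:

```python
import random

def encode(bit, n=3):
    """Encode one logical bit redundantly across n physical bits."""
    return [bit] * n

def apply_noise(bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote: correct as long as fewer than half the bits flipped."""
    return int(sum(bits) > len(bits) // 2)

def logical_error_rate(p, n=3, trials=100_000):
    """Estimate how often the decoded logical bit is wrong."""
    return sum(decode(apply_noise(encode(0, n), p)) for _ in range(trials)) / trials
```

For a physical error rate p = 0.1, the 3-bit code yields a logical error rate of roughly 3p² ≈ 0.03, and larger n suppresses it further; above the code's break-even point (p = 0.5 here), adding redundancy makes things worse instead.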
The transition from NISQ-era devices to the FTQC regime is an open research topic in quantum information.[6][7] It is likely that advances at both the hardware and algorithm level are necessary for the transition to occur.
History
When quantum computers were initially theorized, it was unclear whether they would be able to overcome errors due to noise. Although quantum error-correcting codes, first established by Peter Shor in 1995, made it possible to protect data from these errors by using redundancies, it became clear that this error detection and correction process might actually produce more errors than it removed.[8][9]
To address this issue, researchers began developing quantum stabilizer codes and a quantum stabilizer formalism, which allowed them to establish protocols for fault-tolerant quantum computation. The culmination of these efforts was to demonstrate the accuracy threshold theorem, which states that, under certain conditions, a quantum computer with a physical error rate below a certain threshold can suppress the logical error rate to arbitrarily low levels, even in the presence of noise and imperfect quantum gates.[8][10] The proof of the threshold theorem meant that reliable quantum computing in the presence of noise and imperfect quantum gates was not only possible, but also scalable. This meant that a fault-tolerant quantum computer was physically realizable.
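Schematically, for a code of distance d, the threshold theorem says the logical error rate is exponentially suppressed whenever the physical error rate p lies below the threshold p_th. A commonly quoted form of this scaling (the constant A and the threshold value depend on the code and noise model) is:

```latex
p_{\text{logical}} \approx A \left( \frac{p}{p_{\text{th}}} \right)^{\left\lfloor (d+1)/2 \right\rfloor}
```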
The concept of FTQC as a distinct future direction, rather than just a technical condition, was popularized by John Preskill, who contrasted near-term NISQ quantum processors with the end goal of fully fault-tolerant quantum computing.[2] Since this paper, FTQC has become a ubiquitous term for error-corrected regimes beyond NISQ.[11][12][13]
From NISQ to FTQC
The proposed development of quantum computing from NISQ to FTQC is usually grouped into two or three 'eras' of devices as follows:
NISQ era
NISQ devices are sensitive to noise, and have a large number of degrees of freedom compared to a classical computer, but not large enough to perform error correction effectively. The algorithms run on NISQ devices are usually hybrid classical-quantum algorithms such as the variational quantum eigensolver.[6]
Early Fault Tolerant Quantum Computing (EFTQC) regime
The EFTQC era, or the early-FTQC era, is a proposed intermediate era between NISQ and FTQC in which the number of logical qubits is reduced, limiting the size of problems that can be solved, but in turn allowing for efficient fault-tolerant protocols.[14] Quantum processors in the EFTQC regime would still be capable of solving relevant problems, such as simulating the Hubbard model.[15]
As of 2025, basic fault-tolerant operations have been demonstrated in research settings. The Google Sycamore and Willow processors have shown that increasing the error correction code's distance (practically, increasing the number of physical qubits per logical qubit) improves the logical error rate per measurement round, indicating that early-FTQC devices are experimentally realizable.[16] However, these technologies are far from the number of logical qubits necessary for practical FTQC.[17]
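Such distance-scaling experiments are often summarized by an error-suppression factor Λ = p_L(d) / p_L(d+2): the factor by which the logical error rate drops when the code distance d grows by two. A short numerical sketch under the heuristic scaling p_L ≈ A·(p/p_th)^((d+1)/2), with purely illustrative values of A and p/p_th:

```python
def logical_error_rate(p_ratio, d, A=0.1):
    """Heuristic surface-code scaling: p_L ~ A * (p/p_th)**((d+1)/2)."""
    return A * p_ratio ** ((d + 1) / 2)

# Operating at half the threshold (p/p_th = 0.5), each step d -> d+2
# halves the logical error rate, i.e. a suppression factor of 2:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.5, d))  # 0.025, 0.0125, 0.00625
```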
FTQC regime
While early experimental results have shown that logical error rate can be improved, FTQC devices must also be able to demonstrate fault tolerance on a large scale in order to run fault-tolerant-level algorithms.[18] Due to the importance of this criterion, the FTQC regime is sometimes instead referred to as the fault-tolerant application-scale quantum (FASQ) regime, to indicate that these devices must not only be fault-tolerant, but also large enough to run various useful applications.[19][20][21]
There is no hard cutoff for when a fault-tolerant quantum processor has reached the application-scale regime, and thus capable of real-world applications. One qualitative benchmark proposed by John Preskill is the "megaquop" machine, a proposed quantum processor which can execute on the order of a million coherent quantum operations before errors overwhelm computation.[22][23]
Currently, because quantum error correction requires encoding information across multiple physical qubits, realizations of FTQC quantum processors would require very large numbers of physical qubits. For example, factoring a 2048-bit RSA integer using Shor's algorithm is estimated to require around one million superconducting qubits.[24]
Approaches to realizing FTQC devices
Topological stabilizer codes (Surface codes)
Stabilizer codes work by preparing and maintaining data qubits, using ancillary qubits to extract stabilizer measurement outcomes, and repeating this extraction to distinguish measurement errors from data errors. A decoder, usually a classical algorithm, then infers the most likely error configuration and applies corrections accordingly.[25] In this approach, logical gates between logical qubits are realized using lattice surgery.
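For a bit-flip repetition code, this cycle can be sketched in a few lines: parity checks between neighboring qubits play the role of Z_iZ_{i+1} stabilizer measurements, and a toy decoder infers the most likely error under the assumption of at most one bit flip (real decoders, such as minimum-weight perfect matching, instead process many rounds of noisy syndrome data):

```python
def measure_syndrome(qubits):
    """Parity of each neighboring pair, as a Z_i Z_{i+1} stabilizer would report."""
    return [qubits[i] ^ qubits[i + 1] for i in range(len(qubits) - 1)]

def decode_single_error(syndrome):
    """Infer the position of a single bit flip from the syndrome, or None."""
    if not any(syndrome):
        return None                      # trivial syndrome: no correction
    fired = [i for i, s in enumerate(syndrome) if s]
    if len(fired) == 1:
        # A lone firing check means the error sits on an end qubit.
        return 0 if fired[0] == 0 else len(syndrome)
    # Two adjacent firing checks point at the qubit between them.
    return fired[0] + 1

def correct(qubits):
    """One round: extract the syndrome, decode, apply the correction."""
    pos = decode_single_error(measure_syndrome(qubits))
    if pos is not None:
        qubits[pos] ^= 1
    return qubits
```

For example, correct([0, 1, 0]) sees the syndrome [1, 1], infers a flip on the middle qubit, and restores [0, 0, 0].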
Surface codes are useful because all stabilizer checks are local, which makes them compatible with existing 2D architectures such as superconducting chips. The major issue with this approach is its overhead: the number of physical qubits needed to achieve a single fault-tolerant logical qubit is large, which makes the approach difficult to scale. Another challenge is universality. Non-Clifford gates, which are required for a complete gate set, are not easy to implement in a surface code system; one solution to this is magic-state distillation.
LDPC codes
In contrast to the surface code, LDPC codes allow the number of logical qubits to scale proportionally with the number of physical qubits, reducing the overhead necessary for FTQC in comparison to surface codes.[26] However, LDPC codes require more complex connectivity, because parity checks may involve qubits beyond nearest neighbors. As a result, LDPC-based FTQC would require significant improvements to hardware and architecture to be implemented at scale.
Measurement-Based FTQC (MBQC)
MBQC starts with a vast, pre-entangled resource, such as a 2D or 3D "cluster state". Then, one performs specific single-qubit measurements on this cluster state.[27] During these measurements, error correction codes like the surface code are implemented to protect logical qubits. The benefit of this approach is that once the large entangled state is generated, logical gate operations are simple.
This approach is especially promising for high-connectivity platforms like trapped ions and neutral atoms.[28]