Building Better Qubits

By Pooya Ronagh & Artur Scherer

The 1QBit Guide to Advanced and Quantum Computing presented developments from 2019 indicating that we are on the path to fully realized quantum computing. Advancements in quantum computing depend strongly on the development of better qubits. The key to creating more-powerful machines is building more-reliable, better-connected, and longer-lasting qubits that can be accurately controlled to effectively process the information they store. A major barrier to this is the present instability of qubits, often referred to simply as “noise”.

The research and development of near-term “noisy intermediate-scale quantum” (NISQ) devices are important first steps toward building large-scale quantum devices. For NISQ devices consisting of up to about 100 qubits, the effects of noise can be mitigated by clever classical post-processing techniques, without significant overhead in the quantum resources required.

Building Noise-Resilient Quantum Hardware

One of the keys to reducing noise is having greater control at the quantum level. Such control is required for refining the quality of physical qubits and gates to the level of accuracy dictated by what is known as the “threshold theorem for fault-tolerant quantum computation”.

In the search for better control of qubits, various types of qubit technologies have been proposed and implemented. Each type has its own advantages and limitations. Although superconducting qubits appear to be the leading technology at present, this may not be the case in the longer term.

Error Mitigation Methods

A key component needed in the creation of better qubits is error mitigation (EM), which refers to a variety of techniques designed to increase the reliability and computational capabilities of NISQ devices. Given the limited availability of quantum resources, the realization of more-powerful quantum error correction (QEC) schemes is still believed to be out of reach for current quantum technologies.

Error mitigation is complementary in nature to QEC. Whereas QEC can be viewed as an algorithmic solution to the errors that arise in quantum computation, EM can be seen as a more passive approach that removes the effect of errors in a given computation through clever classical post-processing. The strong appeal of EM over QEC lies in its modest overhead in quantum computational resources: unlike QEC, many EM protocols do not require additional qubits and can maintain a constant circuit length. Error mitigation is thus well positioned to extend the usefulness of current and near-term noisy quantum devices.

Two EM methods were recently introduced by researchers at IBM. The first method achieves probabilistic error cancellation by resampling randomized circuits according to a probability distribution. The second method alleviates the effects of errors by what is known as “extrapolation to the zero-noise limit”.

The first EM approach constructs a representation of the desired ideal quantum circuit as a quasi-probability distribution over physical circuits that run on a noisy quantum hardware device. Such a representation allows one to reduce the effects of noise by averaging over the results of circuits sampled from this distribution. While each individual run is corrupted by noise, the average is statistically unbiased, meaning that the noise averages out to zero in the mean. Therefore, by controlling the number of runs, one can directly control the precision of the resulting estimate.
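As a rough illustration of the resampling logic (a minimal sketch, not IBM's actual implementation), suppose the ideal circuit has been decomposed as a quasi-probability mixture of implementable noisy circuits with real coefficients q_i that may be negative. Circuits are sampled in proportion to |q_i|, and each outcome is reweighted by the sign of its coefficient and the normalization factor, making the average unbiased. Here, `run_noisy_circuit` is a hypothetical stand-in for executing circuit i on hardware:

```python
import random

def pec_estimate(quasi_probs, run_noisy_circuit, shots=10_000):
    """Unbiased estimate of an ideal expectation value via
    probabilistic error cancellation (quasi-probability sampling)."""
    gamma = sum(abs(q) for q in quasi_probs)          # sampling overhead factor
    weights = [abs(q) / gamma for q in quasi_probs]   # a true probability distribution
    total = 0.0
    for _ in range(shots):
        # Sample a noisy circuit index according to |q_i| / gamma.
        i = random.choices(range(len(quasi_probs)), weights=weights)[0]
        sign = 1.0 if quasi_probs[i] >= 0 else -1.0
        # Reweight the measured outcome so the mean is unbiased.
        total += sign * gamma * run_noisy_circuit(i)
    return total / shots
```

The price of the negative coefficients is variance: the spread of the estimate grows with the normalization factor, so more runs are needed as the decomposition becomes more negative.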

The second EM approach pertains to quantum hardware in which the duration of each gate can be adjusted as desired. Assuming that the sources of error do not vary in time, one can modulate the rate of noise relative to the intrinsic dynamics of the quantum system by proportionally changing the duration of each gate. In this way, by running the same circuit with different total durations, one effectively modulates the amount of error in the resulting computation. Usually, the shortest-duration circuit is employed, as it is subject to the least cumulative error. However, it turns out that the individual results can be combined into a single estimate in which the effect of noise is drastically reduced.
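The combination step is, in essence, an extrapolation. The sketch below (a minimal illustration, with hypothetical measured values) fits the expectation values obtained at several stretched gate durations and evaluates the fit at zero noise:

```python
import numpy as np

def zero_noise_extrapolate(scale_factors, noisy_values, degree=1):
    """Fit E(c) over noise-scale factors c and evaluate the fit at c = 0."""
    coeffs = np.polyfit(scale_factors, noisy_values, degree)
    return np.polyval(coeffs, 0.0)

# Hypothetical expectation values measured at 1x, 2x, and 3x gate durations.
scales = [1.0, 2.0, 3.0]
values = [0.84, 0.72, 0.61]
print(zero_noise_extrapolate(scales, values))  # estimate of the noiseless value
```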

Quantum Error Correction

Physical qubits (the actual two-level quantum systems) exist in fragile states of superposition: they can be in a state of one, of zero, or in some combination of both. These states are easily corrupted by uncontrolled interactions with the environment and other sources of noise. Even in the absence of environmentally induced noise, errors occur due to imperfect implementations of information-processing operations. Error correction must therefore be applied to protect quantum information against the effects of errors while still allowing the information to be manipulated for computational purposes.

Similar to classical error correction, QEC uses redundancy to detect and correct errors. However, introducing redundancy and error detection is much trickier in quantum systems. Unlike classical information, quantum information in an arbitrary state of superposition cannot be copied, a consequence of the no-cloning theorem of quantum physics, which states that it is impossible to create an identical copy of an arbitrary unknown quantum state. Moreover, performing a quantum measurement irreversibly collapses the wavefunction (the mathematical function that describes the quantum state). The key to introducing redundancy and error detection without destroying sensitive information lies in exploiting quantum entanglement: the phenomenon of particles having their quantum states inextricably linked, even when they are separated in space or time.
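For readers who want to see why copying fails, the standard textbook argument (included here for completeness; it is not specific to any particular hardware) fits in a few lines:

```latex
% Suppose a single unitary U could clone every state:
%   U (|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle .
% Take the inner product of the cloned outputs for two states |\psi\rangle
% and |\phi\rangle, using the fact that U preserves inner products:
\langle \psi | \phi \rangle
  = \bigl(\langle \psi | \otimes \langle 0 |\bigr)\, U^{\dagger} U \,\bigl(| \phi \rangle \otimes | 0 \rangle\bigr)
  = \langle \psi | \phi \rangle^{2}
% This forces \langle \psi | \phi \rangle \in \{0, 1\}: the two states must be
% identical or orthogonal, so no device can clone an arbitrary unknown state.
```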

Quantum error-correcting codes (QECCs) encode one qubit’s quantum information into a highly entangled state of several physical qubits, resulting in what is known as a “logical qubit”. Logical qubits are much more reliable than physical qubits because they allow errors, to a certain extent, to be detected and corrected. Error detection is achieved by taking advantage of the entanglement shared between the physical qubits that make up a logical qubit: multi-qubit parity measurements determine whether some of the physical qubits that constitute a logical qubit have been corrupted, and, if so, which ones.

The key feature of such measurements is that they reveal nothing about the encoded quantum information stored in the logical qubit and hence do not irreversibly disturb the associated state of superposition. Instead, they retrieve information about the errors only. With the knowledge of which errors have occurred, corrective operations may then be applied. Similar to the error detection step, the error correction step does not disturb the encoded quantum information one aims to protect.
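As a concrete, deliberately simplified example, consider the textbook three-qubit bit-flip code. The sketch below is purely classical (it tracks only which physical bits have flipped), but it captures the essential point: the two parity checks locate a single corrupted qubit without ever reading out the encoded value.

```python
def syndrome(bits):
    """Two parity checks, on qubit pairs (0, 1) and (1, 2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome points to the physical qubit needing correction (or none).
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    flip = CORRECTION[syndrome(bits)]
    if flip is not None:
        bits[flip] ^= 1                # apply the corrective bit flip
    return bits

codeword = [1, 1, 1]                   # logical "1", encoded redundantly
codeword[2] ^= 1                       # a single bit-flip error on qubit 2
print(syndrome(codeword))              # (0, 1): locates the error on qubit 2
print(correct(codeword))               # [1, 1, 1]: the codeword is restored
```

Note that the syndrome (0, 1) would be identical had the encoded value been a logical “0”, which is precisely why measuring it does not disturb the stored information.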

The building of better qubits, whether achieved through quantum error correction, error mitigation methods, or the manufacturing of noise-resilient quantum hardware, is a necessity for any company wishing to gain a foothold in the field of superconducting qubit technologies. The efforts of three major players are presented in the following section.

Major Players in the Field of Superconducting Qubit Technologies

IBM has released the IBM Q system, which has 53 superconducting transmon qubits. A transmon qubit has been reported to have a coherence time (the effective lifetime of a qubit) of about 100 μs (microseconds), nearly a five-fold improvement over the coherence times reported in initial studies of superconducting qubits. The longer coherence time is achieved by using both very small junctions and large capacitors. The error rates of single- and two-qubit operations are as low as 0.05% and 0.6%, respectively. The system boasts a “Quantum Volume” (QV) of 32. Quantum Volume is a figure of merit developed by IBM that takes into account the number of qubits, their error rates, and qubit connectivity. The higher the QV, the more capable the quantum computer.
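To unpack the arithmetic: a QV of 32 corresponds to reliably running random “square” model circuits of width and depth n = log2 32 = 5. A minimal sketch of how the figure is derived from benchmark results follows; under IBM's definition, a circuit size passes if “heavy” outputs appear more than two-thirds of the time. The probabilities below are hypothetical, and this is not IBM's official benchmarking code.

```python
THRESHOLD = 2 / 3  # heavy-output probability a size must exceed to pass

def quantum_volume(heavy_output_probs):
    """QV = 2**n for the largest square-circuit size n that passes the test."""
    passing = [n for n, p in heavy_output_probs.items() if p > THRESHOLD]
    return 2 ** max(passing) if passing else 1

# Hypothetical measured heavy-output probabilities for sizes n = 2..6.
results = {2: 0.81, 3: 0.76, 4: 0.72, 5: 0.69, 6: 0.60}
print(quantum_volume(results))  # 32, i.e., the device achieves QV = 2**5
```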

Google recently conducted an experiment that, for a particular problem, demonstrated quantum supremacy using 53 transmon qubits. Each qubit features a coherence time of around 40 μs, fast control, and a design that provides full controllability and scalability. The long coherence time relies critically on advanced fabrication techniques and a large capacitor. The average error rates for qubit control are as low as 0.15% and 0.36% for single- and two-qubit operations, respectively.

Rigetti Computing uses planar transmon qubits in its 19-qubit chip. Through refinements in fabrication techniques, the company has significantly improved qubit quality, increasing coherence times from an earlier 13–20 μs to a presently reported average of 76 μs. The two-qubit gate error rate is 0.8%. The company operates its own rapid-prototyping fabrication laboratory, Fab-1, which is working on quantum integrated circuits designed to scale beyond 100 qubits.

Other Competing Technologies

Other promising approaches to the study and creation of large-scale quantum hardware are being tested around the globe. Which technology will become superior remains an open question.

Examples of other advanced technologies include the following:

  • Trapped-ion quantum devices offer better, more replicable scalability than superconducting qubits. In addition, recent advances in the control of ions by means of microwave pulses have significantly increased the speed of quantum operations. The major commercial players employing this technological approach are IonQ, Honeywell, and Alpine Quantum Technologies.
  • A silicon-based qubit is built using a silicon device to electrostatically confine electrons in a corner quantum dot. This qubit is compatible with modern CMOS techniques and can be scaled up easily and at low cost. The Australian company Silicon Quantum Computing is a key player in this field.
  • Silicon photonics technology has the advantage of operating at room temperature and with low levels of noise. Recent advancements in integrated photonics have enabled the design of scalable, energy-efficient, photonics-based quantum processors. Photonic circuits process and transmit light much as traditional electronic circuits process and transmit electronic signals. PsiQuantum and Xanadu are two companies developing conceptually different silicon-photonics qubit systems.

An increasing number of companies are joining the race toward fully realized quantum computing and large-scale quantum devices that will be able to tackle real-world problems. Building better, more robust qubits is crucial for advancing quantum computing technologies. The first quarter of 2020 has already seen exciting announcements from qubit developers and the scientific community. As new achievements in the field are attained, the 1QBit blog will continue to provide perspectives on advancements in qubit and other quantum technologies.

The important limitations common to NISQ computing technologies are the absence of error correction and short coherence times, which limit the computational power of these systems. To work around these limitations, 1QBit is continually developing industry-relevant, quantum-inspired optimization solutions on our 1Qloud service.
