Quantum Error Mitigation at Utility Scales
Abstract
Utility-scale quantum computers with 100+ qubits, now available through IBM Quantum, provide a cutting-edge platform for researchers and practitioners to demonstrate quantum applications of interest. A suite of error mitigation methods has been developed for near-term quantum applications, but how to apply them to large-scale quantum computational tasks with optimal settings is not yet well understood. This tutorial presents state-of-the-art error mitigation methods and focuses on applying them to use cases involving quantum circuits at utility scales, using the latest Qiskit Runtime capabilities. The tutorial addresses a central challenge in error mitigation: choosing an error mitigation setting that achieves the desired accuracy and precision in the computed results. For utility-scale circuits, validating or predicting the results of quantum circuits by direct classical simulation is generally intractable due to the exponential overhead of representing the qubit state and the noise channels. On the other hand, testing the computational workflow on the actual hardware by trial and error is also expensive. The tutorial therefore presents a classical workflow that simulates the effect of noise on a given quantum computational task under a chosen error mitigation setting. The workflow is made scalable through a circuit Cliffordization procedure and standard assumptions about the hardware noise. It can be used to check whether the results from quantum hardware execution are expected to meet the desired accuracy and precision, and it can further be used to optimize the error mitigation settings.
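As a concrete illustration of the Cliffordization idea, the sketch below replaces the non-Clifford RZ rotations of a toy circuit with their nearest Clifford equivalents and compares a noiseless stabilizer simulation against one with a simple depolarizing noise model. The circuit, the rounding rule, the noise parameters, and the helper names (`cliffordize`, `parity_expectation`) are illustrative assumptions rather than the tutorial's actual workflow; in practice one would use a noise model learned from the target device rather than a generic depolarizing channel.

```python
# Minimal sketch: Cliffordize a circuit and classically estimate the impact of
# noise with the stabilizer method. Assumes qiskit and qiskit-aer are installed.
import numpy as np
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error


def cliffordize(circuit: QuantumCircuit) -> QuantumCircuit:
    """Snap every RZ angle to the nearest multiple of pi/2 and express it as
    an S/Z/Sdg gate, so the proxy circuit is Clifford and remains classically
    simulable with the stabilizer method even at 100+ qubits."""
    proxy = circuit.copy_empty_like()
    for inst in circuit.data:
        op = inst.operation
        if op.name == "rz":
            k = round(float(op.params[0]) / (np.pi / 2)) % 4
            if k == 1:
                proxy.s(inst.qubits[0])
            elif k == 2:
                proxy.z(inst.qubits[0])
            elif k == 3:
                proxy.sdg(inst.qubits[0])
            # k == 0: the rotation rounds to identity and is dropped
        else:
            proxy.append(op, inst.qubits, inst.clbits)
    return proxy


def parity_expectation(counts: dict) -> float:
    """Estimate the global parity <Z...Z> from measurement counts."""
    shots = sum(counts.values())
    return sum(c * (-1) ** key.count("1") for key, c in counts.items()) / shots


# Toy circuit: GHZ-style preparation with non-Clifford RZ rotations sprinkled in.
num_qubits = 10
circuit = QuantumCircuit(num_qubits)
circuit.h(0)
for q in range(num_qubits - 1):
    circuit.cx(q, q + 1)
    circuit.rz(1.2, q + 1)  # non-Clifford angle; snapped to S in the proxy
circuit.measure_all()

clifford_proxy = cliffordize(circuit)

# Stand-in for hardware noise: 2% depolarizing error on every two-qubit gate.
# Depolarizing (Pauli) errors are Clifford, so the stabilizer method can apply them.
noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.02, 2), ["cx"])

ideal_backend = AerSimulator(method="stabilizer")
noisy_backend = AerSimulator(method="stabilizer", noise_model=noise_model)

shots = 10_000
ideal_counts = ideal_backend.run(clifford_proxy, shots=shots).result().get_counts()
noisy_counts = noisy_backend.run(clifford_proxy, shots=shots).result().get_counts()

print("ideal <Z...Z> :", parity_expectation(ideal_counts))
print("noisy <Z...Z> :", parity_expectation(noisy_counts))
```

Because the Clifford proxy's ideal expectation value is known exactly (here, global parity +1), the gap between the noiseless and noisy stabilizer results gives a cheap estimate of how strongly noise biases the observable, which is the kind of signal one can use to judge whether a given error mitigation setting is likely to reach the desired accuracy before committing hardware time.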