Paper
Scalable measurement error mitigation via iterative Bayesian unfolding
Abstract
Measurement errors are a significant obstacle to achieving scalable quantum computation. To counteract systematic readout errors, researchers have developed postprocessing techniques known as measurement error mitigation methods. However, these methods face a tradeoff between scalability and returning nonnegative probabilities. In this paper, we present a solution to overcome this challenge. Our approach takes iterative Bayesian unfolding, a standard mitigation technique used in high-energy physics experiments, and implements it in a scalable way. We demonstrate our method on experimental Greenberger-Horne-Zeilinger state preparation on up to 127 qubits and on the Bernstein-Vazirani algorithm on up to 26 qubits. Compared to state-of-the-art methods (such as M3), our implementation guarantees valid probability distributions, returns comparable or better mitigated results, and does so without noticeable overhead in time or memory.
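For readers unfamiliar with iterative Bayesian unfolding, the sketch below shows the standard IBU update on a small dense response matrix and why it always returns a nonnegative, normalized distribution. It is only an illustrative toy, not the paper's scalable implementation; the function name, parameters, and the example calibration numbers are assumptions made for this sketch.

```python
# Minimal sketch of the iterative Bayesian unfolding (IBU) update.
# R[j, i] = P(measure outcome j | prepared state i), column-stochastic.
# The paper's contribution is making this update scalable; this toy dense
# version is for illustration only.
import numpy as np

def ibu(response, measured, prior=None, iterations=10):
    """Return an estimate of the true distribution given noisy `measured`
    frequencies and a response matrix `response`."""
    measured = measured / measured.sum()
    t = np.full(len(measured), 1.0 / len(measured)) if prior is None else prior
    for _ in range(iterations):
        # Distribution we would expect to measure under the current estimate.
        predicted = response @ t
        # Bayes update: reweight each true-state probability by how well it
        # explains the observed outcomes. Keeps t nonnegative and summing to 1.
        t = t * (response.T @ (measured / predicted))
    return t

# Toy single-qubit example with assumed readout errors:
# 2% chance of reading 1 when 0 was prepared, 5% chance of 0 when 1 was prepared.
R = np.array([[0.98, 0.05],
              [0.02, 0.95]])
noisy = np.array([0.12, 0.88])   # observed frequencies
print(ibu(R, noisy))             # mitigated distribution, always valid
```

By construction the update preserves nonnegativity and normalization, which is the property the abstract contrasts with matrix-inversion-style mitigation that can return negative quasi-probabilities.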