Potential and limitations of probabilistic modelling with quantum circuits
Abstract
Learning representations of probability distributions is a fundamental task in machine learning, and one to which quantum learning algorithms seem potentially well suited. In this talk I will describe some recent results which contribute to our understanding of the extent to which different types of quantum probabilistic modelling algorithms may or may not offer concrete advantages over classical algorithms. Firstly, I will show that there exists a finely tuned probabilistic modelling task which is provably hard for classical algorithms, but efficiently solvable by a special-purpose quantum algorithm running on a fault-tolerant quantum computer. While this is a promising start, ideally one would like to show similar quantum-versus-classical separation results for "real-world" classes of probability distributions, via generically applicable quantum learning algorithms which can run on near-term devices. An ideal candidate for such a distribution class is precisely the output distributions of quantum circuits themselves, so-called "quantum circuit Born machines". Given this, in the second part of the talk I will present a variety of results characterizing both the quantum and classical learnability (or non-learnability!) of the output distributions of quantum circuits, and discuss both the extent to which these results limit the potential advantages of near-term quantum generative modelling techniques, and the extent to which they inform our understanding of the link between efficient simulation and efficient learning. Of particular interest are our results showing (a) the average-case hardness of learning sufficiently deep quantum circuit Born machines in the statistical query model, and (b) that while the output distributions of Clifford circuits can be efficiently learned, the addition of a single T gate (surprisingly!) renders the worst-case learning problem hard.
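To make the central object concrete: a quantum circuit Born machine is simply the distribution p(x) = |⟨x|U|0…0⟩|² obtained by measuring the output state of a circuit U in the computational basis. The following minimal sketch (not from the talk; gate matrices are the standard definitions, and the helper name `born_distribution` is illustrative) computes this distribution by direct state-vector simulation for a two-qubit Clifford circuit, with and without an inserted T gate.

```python
import numpy as np

# Standard single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard (Clifford)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])     # T gate (non-Clifford)
I = np.eye(2)

# Two-qubit CNOT, control = qubit 0, target = qubit 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def born_distribution(unitary, n_qubits):
    """Return the Born distribution p(x) = |<x|U|0...0>|^2 over bitstrings x."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                      # initial state |0...0>
    amplitudes = unitary @ state        # final state U|0...0>
    return np.abs(amplitudes) ** 2      # measurement probabilities

# Clifford circuit: H on qubit 0, then CNOT (prepares a Bell state)
U_clifford = CNOT @ np.kron(H, I)
p_clifford = born_distribution(U_clifford, 2)   # [0.5, 0, 0, 0.5]

# Same circuit with a single T gate inserted after the Hadamard
U_t = CNOT @ np.kron(T @ H, I)
p_t = born_distribution(U_t, 2)
```

In this tiny example the T gate only changes a relative phase, so the two distributions coincide; the hardness result in the abstract concerns worst-case circuits, where a single T gate already suffices to make the learning problem hard.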