Physical Review E
Quality of uncertainty estimates from neural network potential ensembles
Abstract
Neural network potentials (NNPs) combine the computational efficiency of classical interatomic potentials with the high accuracy and flexibility of the ab initio methods used to create the training set, but they can also produce unphysical predictions when employed outside the distribution of their training set. Estimating the epistemic uncertainty of an NNP is essential for active learning and the on-the-fly generation of potentials. Inspired by their use in other machine-learning applications, NNP ensembles have been employed for uncertainty prediction in several studies, with the caveat that ensembles do not provide a rigorous Bayesian estimate of the uncertainty. To test whether NNP ensembles provide accurate uncertainty estimates, we train such ensembles in four different case studies and compare the predicted uncertainties with the actual errors on out-of-distribution validation sets. Our results indicate that NNP ensembles are often overconfident, underestimating the uncertainty of the model, and need to be calibrated for each system and architecture. We also provide evidence that Bayesian NNPs, obtained by sampling the posterior distribution of the model parameters using Monte Carlo techniques, can provide better uncertainty estimates.
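To make the ensemble approach concrete, the sketch below illustrates the standard recipe: the spread of the members' predictions serves as the uncertainty estimate, which is then compared against the actual errors on a validation set. This is a minimal illustration under stated assumptions, not the authors' code; `ensemble_predict`, `miscalibration_ratio`, and the toy members are hypothetical stand-ins for independently trained NNPs.

```python
import numpy as np

def ensemble_predict(members, structures):
    """Evaluate every ensemble member on a batch of structures.

    `members` is a list of callables (stand-ins for trained NNPs),
    each mapping structures to predicted energies. Returns the mean
    prediction and the ensemble standard deviation, the latter being
    the usual epistemic-uncertainty estimate.
    """
    preds = np.stack([m(structures) for m in members])  # (n_members, n_structures)
    return preds.mean(axis=0), preds.std(axis=0)

def miscalibration_ratio(errors, uncertainties):
    """Ratio of actual RMSE to mean predicted uncertainty.

    A well-calibrated ensemble gives a ratio near 1; a ratio > 1 means
    the ensemble is overconfident (it underestimates its own error),
    which is the behavior the abstract reports for NNP ensembles.
    """
    rmse = np.sqrt(np.mean(errors ** 2))
    return rmse / np.mean(uncertainties)

# Toy usage: each "member" is the true function plus its own systematic
# bias and noise, mimicking independently trained NNPs.
rng = np.random.default_rng(0)
true_energy = lambda x: np.sin(x)
members = [
    (lambda x, b=rng.normal(0.0, 0.05): np.sin(x) + b + rng.normal(0.0, 0.02, x.shape))
    for _ in range(8)
]

x_val = np.linspace(0.0, 2.0 * np.pi, 100)   # stand-in validation set
mean, sigma = ensemble_predict(members, x_val)
errors = mean - true_energy(x_val)

ratio = miscalibration_ratio(errors, sigma)
print(f"RMSE / mean predicted sigma = {ratio:.2f}  (>1 indicates overconfidence)")
```

In this picture, the per-system calibration the abstract calls for amounts to rescaling the predicted spread, for example by a factor fitted on held-out errors, so that the ratio above approaches one.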