Publication
ICLR 2023
Short paper
A variational condition for minimal-residual latent representations
Abstract
Autoencoders are unsupervised-learning architectures that can be used to build surrogate models of systems governed by partial differential equations. In this article, we address two key questions underpinning this procedure: whether the reconstructed output satisfies the partial differential equation, and whether latent vectors that do not correspond to the encoding of any training data satisfy the same equation. Our results spell out the relevant conditions and clarify how three main design decisions (architecture, training criterion, and choice of training solutions) each affect the final result.
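To make the first question concrete, the sketch below checks whether the decoded reconstruction of a solution still satisfies a discretized PDE. It is not the paper's method: it assumes a 1D Poisson problem -u'' = f with zero Dirichlet boundary conditions, and uses a linear (POD/PCA-like) encoder and decoder built from a snapshot SVD as a stand-in for a trained autoencoder; all names and parameters are illustrative.

```python
# Minimal sketch (not the paper's method): check whether an autoencoder-style
# reconstruction of a PDE solution still satisfies the discretized PDE.
# Assumptions: 1D Poisson problem -u'' = f with zero Dirichlet BCs, and a
# linear (POD/PCA-like) encoder/decoder from a snapshot SVD as a stand-in
# for a trained autoencoder.
import numpy as np

n, h = 128, 1.0 / 129                       # interior grid points, mesh size
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2  # discrete -d^2/dx^2

rng = np.random.default_rng(0)
x = np.linspace(h, 1 - h, n)

def random_forcing():
    # Smooth random right-hand side built from a few sine modes.
    coeffs = rng.normal(size=5)
    return sum(c * np.sin((k + 1) * np.pi * x) for k, c in enumerate(coeffs))

# Training snapshots: exact discrete solutions of A u = f.
forcings = [random_forcing() for _ in range(200)]
solutions = np.stack([np.linalg.solve(A, f) for f in forcings], axis=1)

# Linear "autoencoder": decoder = top-k left singular vectors V, encoder = V^T.
k = 5
V = np.linalg.svd(solutions, full_matrices=False)[0][:, :k]
encode = lambda u: V.T @ u
decode = lambda z: V @ z

# Test on a held-out solution: does the reconstruction satisfy the PDE?
f_test = random_forcing()
u_test = np.linalg.solve(A, f_test)
u_rec = decode(encode(u_test))
residual = np.linalg.norm(A @ u_rec - f_test) / np.linalg.norm(f_test)
print(f"relative PDE residual of reconstruction: {residual:.2e}")
```

A small residual indicates the reconstruction nearly satisfies the equation; repeating the check for latent vectors that are not encodings of training data (e.g., random or interpolated codes passed through `decode`) probes the second question raised in the abstract.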