Conference paper
Using graphical models as explanations in deep neural networks
Abstract
Despite its remarkable success, deep learning currently typically operates as a black box. Can models instead produce explicit reasons to explain their decisions? To address this question, we propose to exploit probabilistic graphical models, which are declarative representations of our understanding of the world (e.g., what the relevant variables are and how they interact with each other) and are commonly used to perform causal inference. More specifically, we propose a novel architecture called Deep Explainable Bayesian Networks, whose main idea is to concatenate a deep network with a Bayesian network and to rely on the latter to provide the explanations. We conduct extensive experiments on classical image and text classification tasks. First, the results show that deep explainable Bayesian networks achieve accuracy comparable to that of models trained on the same datasets without producing explanations. Second, the experiments show promising results: the average accuracy of the explanations ranges from 68.3% to 84.8%.
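The abstract only sketches the architecture, so the snippet below is a minimal, illustrative sketch (not the authors' implementation) of the general idea: a deep network predicts intermediate, human-readable attributes, and a small Bayesian-network head combines them into the class prediction, with the inferred attribute assignment serving as the explanation. The attribute count, layer sizes, CPT values, and class names are assumptions made for illustration.

```python
# Minimal sketch of a "deep network -> Bayesian network" classifier.
# Assumption: the deep part outputs beliefs over binary attributes; a CPT
# P(class | attributes) plays the role of the Bayesian-network head.
import torch
import torch.nn as nn

class DeepExplainableBN(nn.Module):
    def __init__(self, n_attributes=3, n_classes=2):
        super().__init__()
        # Deep feature extractor: image -> probability of each binary attribute.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_attributes), nn.Sigmoid(),
        )
        # Bayesian-network head: CPT over all 2^n attribute configurations
        # (values here are random placeholders, not learned parameters).
        self.cpt = nn.Parameter(torch.rand(2 ** n_attributes, n_classes).softmax(dim=-1))

    def forward(self, x):
        attr_probs = self.backbone(x)                      # (B, n_attributes)
        n = attr_probs.shape[1]
        # Enumerate every joint attribute configuration.
        configs = torch.tensor(
            [[(i >> k) & 1 for k in range(n)] for i in range(2 ** n)],
            dtype=attr_probs.dtype,
        )                                                  # (2^n, n)
        # Joint probability of each configuration, assuming attributes are
        # conditionally independent given the image.
        joint = torch.prod(
            configs * attr_probs.unsqueeze(1)
            + (1 - configs) * (1 - attr_probs.unsqueeze(1)),
            dim=-1,
        )                                                  # (B, 2^n)
        class_probs = joint @ self.cpt                     # marginalize over attributes
        explanation = configs[joint.argmax(dim=-1)]        # most probable attribute setting
        return class_probs, explanation

model = DeepExplainableBN()
probs, expl = model(torch.randn(4, 3, 32, 32))
print(probs.shape, expl.shape)   # torch.Size([4, 2]) torch.Size([4, 3])
```

In this toy setup, the returned attribute configuration is what would be surfaced as the explanation, while the class probabilities come from marginalizing the Bayesian-network head; how the actual paper structures and trains the two components may differ.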