Viviane T. Silva, Rodrigo Neumann Barros Ferreira, et al.
ACS Fall 2024
Concept-based models are an emerging paradigm in deep learning that constrains the inference process to operate through human-interpretable variables, facilitating explainability and human interaction. However, these architectures, like popular opaque neural models, fail to account for the true causal mechanisms underlying the target phenomena represented in the data. This hampers their ability to support causal reasoning tasks, limits out-of-distribution generalization, and hinders the implementation of fairness constraints. To overcome these issues, we propose Causally reliable Concept Bottleneck Models (CBMs), a class of concept-based architectures that enforce reasoning through a bottleneck of concepts structured according to a model of the real-world causal mechanisms. We also introduce a pipeline to automatically learn this structure from observational data and unstructured background knowledge (e.g., scientific literature). Experimental evidence suggests that CBMs are more interpretable, more causally reliable, and more responsive to interventions than standard opaque and concept-based models, while maintaining comparable accuracy.
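To illustrate the bottleneck idea the abstract refers to (predicting human-interpretable concepts first, then reasoning only through them, and intervening on concepts at inference time), here is a minimal sketch of a plain concept bottleneck model in PyTorch. The class name, layer sizes, and intervention snippet are illustrative assumptions, not the paper's architecture or its causal structuring pipeline.

```python
# Minimal concept bottleneck sketch (assumed structure, not the authors' code).
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Encoder maps raw inputs to logits for human-interpretable concepts.
        self.concept_encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_concepts),
        )
        # Task head sees only the concept bottleneck, never the raw input.
        self.task_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concepts = torch.sigmoid(self.concept_encoder(x))  # interpretable variables
        return concepts, self.task_head(concepts)

model = ConceptBottleneckModel(input_dim=32, n_concepts=8, n_classes=3)
x = torch.randn(4, 32)
concepts, logits = model(x)

# Intervention: overwrite a predicted concept with a known value and re-run the head.
intervened = concepts.clone()
intervened[:, 0] = 1.0
logits_after_intervention = model.task_head(intervened)
```

Because the task head consumes only concepts, overwriting a concept value directly changes the downstream prediction, which is what makes intervention responsiveness measurable in such models.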