Poster

MetaExplainer In Action: An Overview of a Framework to Generate Multi-Type User-Centered Explanations

Abstract

Explanations are crucial for building trustworthy AI systems, but a gap often exists between the explanations provided by models and those needed by users. We build on prior research and direct feedback from clinicians, both of which reveal that users prefer interactive, question-driven, and diverse explanations. To bridge this gap, we present MetaExplainer, a neuro-symbolic framework that generates user-centered, multi-type explanations tailored to user questions. MetaExplainer follows a three-stage pipeline: (1) decompose user questions into machine-readable representations using state-of-the-art large language models (LLMs), (2) invoke appropriate model-specific explainer methods to generate recommendation rationales, and (3) synthesize coherent, user-friendly natural-language explanations that summarize and contextualize these outputs. We demonstrate MetaExplainer with an end-to-end example on the widely used Pima Indians Diabetes dataset, highlighting how the framework addresses diverse clinical questions. Overall, MetaExplainer offers a versatile, traceable approach to explanation generation, capable of addressing a broad spectrum of user queries and advancing AI explainability across domains. MetaExplainer’s implementation, along with quantitative and qualitative evaluation results, is detailed in [1], and the open-source code is publicly available at https://github.com/tetherless-world/metaexplainer.
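The three-stage pipeline described above can be sketched in miniature as follows. This is a minimal illustration, not the framework's actual implementation: the keyword-based decomposer stands in for the LLM question parser, the stub explainers stand in for model-specific methods (e.g., feature-attribution or counterfactual generators), and all names (`ParsedQuestion`, `metaexplain`, the placeholder rationale strings) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical machine-readable representation of a decomposed user question (Stage 1 output).
@dataclass
class ParsedQuestion:
    explanation_type: str  # e.g., "rationale", "contrastive", "counterfactual"
    target: str            # the prediction the user is asking about

# Stage 1: decompose the question. A keyword heuristic stands in for an LLM here.
def decompose(question: str, target: str = "the predicted outcome") -> ParsedQuestion:
    q = question.lower()
    if "why not" in q or "instead of" in q:
        etype = "contrastive"
    elif "what if" in q or "what would" in q:
        etype = "counterfactual"
    else:
        etype = "rationale"
    return ParsedQuestion(explanation_type=etype, target=target)

# Stage 2: dispatch to a model-specific explainer. These stubs return placeholder
# rationales; a real system would call e.g. a feature-attribution or counterfactual method.
EXPLAINERS = {
    "rationale": lambda pq: "the most influential input features supported this class",
    "contrastive": lambda pq: "the alternative class lacked sufficient feature support",
    "counterfactual": lambda pq: "changing key feature values would flip the prediction",
}

def run_explainer(pq: ParsedQuestion) -> str:
    return EXPLAINERS[pq.explanation_type](pq)

# Stage 3: synthesize a coherent, user-facing natural-language explanation.
def synthesize(pq: ParsedQuestion, rationale: str) -> str:
    return f"For {pq.target}, a {pq.explanation_type} explanation: {rationale}."

def metaexplain(question: str) -> str:
    pq = decompose(question)
    return synthesize(pq, run_explainer(pq))
```

A clinician's question such as "What if the patient's glucose were lower?" would route through the counterfactual branch, while "Why was this patient flagged?" would yield a rationale-type explanation.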