One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Abstract
As machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. Moreover, these stakeholders, whether they be government regulators, affected citizens, domain experts, or developers, have different requirements for explanations. To address these needs, we introduce AI Explainability 360, an open-source software toolkit featuring eight diverse, state-of-the-art explainability methods, two evaluation metrics, tutorials that introduce AI explainability to practitioners, and an extensible software architecture that organises these methods according to their use in the AI modelling pipeline. Our toolkit can help improve the transparency of machine learning models and provides a platform for integrating new explainability techniques as they are developed.
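As a brief illustration of how a toolkit of this kind might be invoked, the sketch below applies a prototype-based explainer (ProtoDash, one of the methods bundled with aix360) to a small synthetic dataset. The import path and the `explain(X, Y, m)` signature reflect our reading of the aix360 package; treat the exact names, arguments, and return values as assumptions that may differ across versions, not as a definitive API reference.

```python
# A minimal, hedged sketch of using AI Explainability 360 (aix360).
# Assumes aix360's ProtodashExplainer, whose explain(X, Y, m) call is
# understood to select m prototype rows from Y that summarize X;
# exact signatures may vary between aix360 releases.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy data: 100 samples with 5 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

explainer = ProtodashExplainer()
# Select 5 prototypes from X that best represent X itself.
weights, prototype_indices, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", prototype_indices)
print("Prototype weights:", np.round(weights, 3))
```

The returned indices point to representative rows of the dataset and the weights indicate each prototype's relative importance, which is one concrete form an "explanation" can take for data-level understanding.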