Achieving useful AI explanations in a high-tempo complex environment
Abstract
Many current Machine Learning techniques are inscrutable and can be hard for users to trust because they lack effective means of generating explanations for their outputs. There is substantial research and development in this area, with a wide range of explanation techniques proposed for AI/ML across different data modalities. In this paper we investigate which modality of explanation to choose for a particular user and task, taking into account relevant contextual information such as the time available to the user, their level of skill, their level of access to the data and sensors in question, and the device they are using. Additional environmental factors, such as available bandwidth and the sensors and services currently usable, can also be taken into account. The explanation techniques we are investigating range across transparent and post-hoc mechanisms and form part of a conversation with the user, in which the explanation (and therefore human understanding of the AI decision) is developed through dialogue with the system. Our research explores generic techniques to underpin useful explanations across a range of modalities, in the context of AI/ML services that operate on multisensor data in a distributed, dynamic, contested and adversarial setting. We define a meta-model for representing this information and, through a series of examples, show how this approach can support conversational explanation across a range of situations, datasets and modalities.
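To make the idea of context-driven modality selection concrete, the following is a minimal, purely hypothetical sketch: the field names, modality labels and selection rules are illustrative assumptions, not the meta-model defined in the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: these fields and rules are illustrative assumptions,
# not the paper's actual meta-model.

@dataclass
class ExplanationContext:
    time_available_s: float        # time the user has to absorb an explanation
    skill_level: str               # e.g. "novice", "analyst", "expert"
    data_access: str               # e.g. "none", "summary", "full"
    device: str                    # e.g. "phone", "desktop"
    bandwidth_kbps: float          # currently available bandwidth
    usable_sensors: List[str] = field(default_factory=list)


def choose_explanation_modality(ctx: ExplanationContext) -> str:
    """Pick an explanation modality from simple, illustrative contextual rules."""
    if ctx.time_available_s < 30 or ctx.device == "phone":
        return "short-text"                  # terse natural-language rationale
    if ctx.bandwidth_kbps < 256 or ctx.data_access == "none":
        return "text-dialogue"               # conversational, low-bandwidth
    if ctx.skill_level == "expert" and ctx.data_access == "full":
        return "feature-attribution-visual"  # e.g. a saliency/attribution plot
    return "annotated-visual"                # default richer modality


if __name__ == "__main__":
    ctx = ExplanationContext(
        time_available_s=120, skill_level="analyst", data_access="summary",
        device="desktop", bandwidth_kbps=512, usable_sensors=["camera", "acoustic"],
    )
    print(choose_explanation_modality(ctx))  # -> "annotated-visual"
```

In the paper's setting such rules would not be hard-coded; they would be derived from the meta-model and negotiated through dialogue with the user.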