Conference paper
Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning
Abstract
Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before it can be widely adopted. In this paper we consider two of them: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are highly sensitive to certain kinds of input perturbation. We argue that these two issues are conceptually linked, and that insights into one can inform the other. We illustrate these ideas with relevant examples from the literature and from our own experiments.
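As a concrete illustration of the input sensitivity mentioned in the abstract, the sketch below applies a single step of the fast gradient sign method (FGSM), a standard attack from the adversarial-examples literature, to a classifier input. It is a minimal sketch assuming a PyTorch classifier `model`, an input batch `x` in the range [0, 1], integer labels `y` and a perturbation budget `epsilon`; these names, and the choice of FGSM, are illustrative assumptions rather than the method used in the paper.

```python
# Minimal FGSM sketch (illustrative only; not the paper's own attack).
# Assumes: `model` is a PyTorch classifier, `x` an input batch in [0, 1],
# `y` the integer class labels, `epsilon` the perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of x after one FGSM step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # then clamp back to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Even a visually imperceptible `epsilon` can change the predicted class, which is the kind of failure the paper relates to the need for interpretable models.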