Publication
CIKM 2019
Conference paper
Additive explanations for anomalies detected from multivariate temporal data
Abstract
Detecting anomalies from high-dimensional multivariate temporal data is challenging because of the non-linear, complex relationships between signals. Recently, deep learning methods based on autoencoders have been shown to capture these relationships and accurately distinguish between normal and abnormal patterns of behavior, even in fully unsupervised scenarios. However, validating the detected anomalies is difficult without additional explanations. In this paper, we extend SHAP, a unified framework for providing additive explanations previously applied to supervised models, with influence weighting, to explain anomalies detected from multivariate time series with a GRU-based autoencoder. Specifically, we extract the signals that contribute most to an anomaly and those that counteract it. We evaluate our approach on two use cases and show that we can generate insightful explanations for both single and multiple anomalies.
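To make the core idea concrete, below is a minimal sketch of attributing an anomaly score to individual signals with SHAP's model-agnostic KernelExplainer. This is not the authors' implementation: the `anomaly_score` function is a hypothetical stand-in for a GRU autoencoder's reconstruction error, and the paper's influence-weighting extension is omitted. It only illustrates the final step the abstract describes, splitting per-signal SHAP values into those that push the anomaly score up (contributing) and those that pull it down (counteracting).

```python
# Minimal sketch, assuming the anomaly score is a scalar per window.
# In the paper's setting this score would be the reconstruction error
# of a GRU-based autoencoder; here we fake it with a simple distance.
import numpy as np
import shap

rng = np.random.default_rng(0)
n_signals = 5
# Background of "normal" windows (each row: one flattened window).
background = rng.normal(size=(100, n_signals))

def anomaly_score(X):
    # Hypothetical stand-in for ||x - decoder(encoder(x))||^2:
    # squared distance from the mean of the normal background.
    return np.sum((X - background.mean(axis=0)) ** 2, axis=1)

# KernelExplainer attributes the score additively across input signals.
explainer = shap.KernelExplainer(anomaly_score, shap.sample(background, 50))

# A synthetic anomalous window: signals 0 and 2 deviate strongly.
anomalous = background.mean(axis=0) + np.array([4.0, 0.0, -3.0, 0.0, 0.0])
phi = explainer.shap_values(anomalous)  # one SHAP value per signal

contributing = np.argsort(phi)[::-1]    # signals pushing the score up
counteracting = np.argsort(phi)         # signals pulling the score down
print("contribute most:", contributing[:2].tolist())
print("counteract most:", counteracting[:2].tolist())
```

Because SHAP values are additive, they sum (together with the explainer's expected value over the background) to the anomaly score itself, which is what lets the top positive and negative contributors be read off directly as in the last lines above.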