Publication
INFORMS 2022
Talk
A Novel Hybrid Interpretability Method for Sequential Decision Making
Abstract
We consider the problem of interpretability for sequential decision making, which is frequently modeled with Markov Decision Processes (MDPs). We distinguish two types of interpretability: (i) for a machine and (ii) for humans. The key difference between the two is that interpretability for a machine helps simplify the model, while interpretability for humans helps users understand the recommendations. We propose a hybrid approach combining these two types of interpretability to achieve better user satisfaction; for this we utilize (i) Logical Neural Network (LNN) and (ii) classical Decision Tree (DT) techniques.
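To make the "interpretability for humans" idea concrete, here is a minimal sketch (not the authors' implementation) of the decision-tree half of such a hybrid: a policy is first computed for a toy MDP by value iteration, then distilled into a shallow decision tree whose splits read as if-then rules. The gridworld, feature names, and use of scikit-learn are illustrative assumptions.

```python
# Illustrative sketch: distill a tabular MDP policy into a decision tree.
# The 5x5 gridworld, the two-action space, and scikit-learn are assumptions
# for demonstration, not the method from the talk.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

SIZE = 5    # 5x5 gridworld; goal at (4, 4)
GAMMA = 0.9
states = [(x, y) for x in range(SIZE) for y in range(SIZE)]

def step(s, a):
    """Deterministic transition: action 0 moves right, action 1 moves down."""
    x, y = s
    if a == 0:
        x = min(x + 1, SIZE - 1)
    else:
        y = min(y + 1, SIZE - 1)
    reward = 1.0 if (x, y) == (SIZE - 1, SIZE - 1) else 0.0
    return (x, y), reward

# Machine-level model: value iteration yields an optimal tabular policy.
V = {s: 0.0 for s in states}
for _ in range(50):
    V = {s: max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in (0, 1))
         for s in states}
policy = {s: max((0, 1), key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
          for s in states}

# Human-level model: fit a depth-limited tree to mimic the policy,
# producing readable if-then rules over the state features (x, y).
X = np.array(states)
y = np.array([policy[s] for s in states])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["x", "y"]))
print("fidelity:", (tree.predict(X) == y).mean())
```

The printed rules summarize the whole policy in a few splits, and the fidelity score measures how faithfully the interpretable surrogate reproduces the original policy's recommendations.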