Publication
AAAI 2022
Workshop paper
Logical Neural Networks to Serve Decision Making with Meaning
Abstract
Machine learning applications are emerging as a useful tool for decision making. However, the resulting decisions are often difficult to interpret and explain, which is a needed feature for human users. In this paper, we consider the problem of interpretability of decisions made in sequential decision-making problems, which are frequently addressed by Markov Decision Process (MDP) or Reinforcement Learning (RL) approaches. We distinguish between two types of interpretability: (i) dimensionality reduction, which targets technical experts such as optimization experts and data scientists, and (ii) interpretability for business users (e.g., customers). In this work, we utilise a neuro-symbolic framework called Logical Neural Networks (LNN), which offers an integration of data-driven neural learning and symbolic representation. For a multi-echelon supply chain use case, we show how the LNN helps a technical expert decide which state variables should remain in the problem state description, which is then solved by a classical MDP dual linear programming (LP) approach. We then show how a data set, generated by applying the MDP policy (using a gym environment), was used by the LNN to generate rules that are more tractable than a classical MDP policy.
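For context, the "classical MDP dual linear programming (LP) approach" mentioned above can be sketched as follows for a generic discounted MDP. The notation here (states s, actions a, rewards r, transition kernel P, discount factor γ, initial state distribution α, occupancy measure x) is assumed for illustration and is not taken from the paper:

\begin{align*}
\max_{x \ge 0} \quad & \sum_{s,a} x(s,a)\, r(s,a) \\
\text{s.t.} \quad & \sum_{a} x(s',a) \;-\; \gamma \sum_{s,a} P(s' \mid s,a)\, x(s,a) \;=\; \alpha(s') \qquad \forall\, s',
\end{align*}

with an optimal policy recovered as $\pi(a \mid s) = x(s,a) / \sum_{a'} x(s,a')$. In the setting described in the abstract, removing state variables via the LNN-guided dimensionality reduction shrinks the state space over which this LP is formulated and solved.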