Publication
INFORMS 2020
Conference paper
Directly Interpretable AI Models With User Constraints
Abstract
Interpretable AI models make their inner workings visible to end users, providing justification for automated decisions. One class of such models is the Boolean decision rule set, i.e., an if (condition) then (outcome 1) else (outcome 2) statement, where the conditional clause is learnt from the data. This is challenging because there are exponentially many candidate clauses, and training samples may miss context (for edge cases). We present a practical mechanism whereby users provide feedback that is treated as constraints in an optimization problem. We show two applications - one where the underlying rule sets are known and one where they are not - in which such user input leads to more accurate rule sets.
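To make the rule-set structure concrete, the sketch below shows a toy Boolean decision rule set in disjunctive normal form: the model outputs outcome 1 if any learnt clause (a conjunction of conditions) fires, and outcome 2 otherwise. This is a minimal illustration under our own assumptions, not the paper's actual formulation; the feature names, clauses, and helper functions are hypothetical.

```python
# Illustrative sketch of a Boolean decision rule set (DNF):
# predict outcome 1 if ANY clause (an AND of conditions) holds, else outcome 2.
# All names and example clauses are hypothetical, not from the paper.

def make_condition(feature, op, threshold):
    """Return a predicate testing one feature of a sample (a dict)."""
    ops = {
        "<=": lambda a, b: a <= b,
        ">": lambda a, b: a > b,
        "==": lambda a, b: a == b,
    }
    return lambda sample: ops[op](sample[feature], threshold)

# Each clause is a conjunction of conditions; the rule set is their disjunction.
clauses = [
    [make_condition("age", ">", 60), make_condition("bp", ">", 140)],
    [make_condition("smoker", "==", True)],
]

def predict(sample, clauses):
    """if (any clause holds) then (outcome 1) else (outcome 2)."""
    if any(all(cond(sample) for cond in clause) for clause in clauses):
        return 1
    return 0

print(predict({"age": 65, "bp": 150, "smoker": False}, clauses))  # -> 1
print(predict({"age": 40, "bp": 120, "smoker": False}, clauses))  # -> 0
```

In the mechanism the abstract describes, user feedback (for instance, forbidding a clause that covers an edge case incorrectly) would enter the training problem as a constraint on which clauses the optimizer may select; the details of that optimization are given in the paper itself.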