Publication
INFORMS 2020
Conference paper
Directly Interpretable AI Models With User Constraints
Abstract
Interpretable AI models make their inner workings visible to end-users, providing justification for automated decisions. One class of such models is the Boolean decision rule set, i.e. an if (condition) then (outcome 1) else (outcome 2) statement, where the conditional clause is learnt from the data. This is challenging to do because there are exponentially many candidate clauses, and training samples may lack context (e.g. for edge cases). We present a practical mechanism whereby users provide feedback that is treated as constraints in an optimization problem. We show two applications, one where the underlying rule sets are known and one where they are not, in which such user input leads to more accurate rule sets.
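To make the core idea concrete, the following is a minimal sketch (not the paper's actual formulation) of how user feedback could be encoded as hard constraints when selecting clauses for a Boolean rule set. The dataset, feature names, the greedy selection heuristic, and the specific constraint are all illustrative assumptions; the paper itself frames this as an optimization problem.

```python
# Hypothetical sketch: greedy clause selection for a Boolean rule set,
# with user feedback encoded as a hard constraint on allowed clauses.
# All names and data are illustrative, not taken from the paper.

from itertools import combinations

# Toy binary dataset: each sample is (feature dict, label).
samples = [
    ({"fever": 1, "cough": 1, "rash": 0}, 1),
    ({"fever": 1, "cough": 0, "rash": 0}, 0),
    ({"fever": 0, "cough": 1, "rash": 1}, 1),
    ({"fever": 0, "cough": 0, "rash": 1}, 0),
]
features = ["fever", "cough", "rash"]

# Candidate clauses: all conjunctions of up to 2 positive literals.
# In general this space grows exponentially with the number of features.
candidates = [frozenset(c) for k in (1, 2) for c in combinations(features, k)]

# User constraint (hypothetical): the clause "rash" on its own is
# forbidden, e.g. because experts consider it uninformative in isolation.
forbidden = {frozenset({"rash"})}
allowed = [c for c in candidates if c not in forbidden]

def covers(clause, x):
    """A conjunction of positive literals covers x if every literal is 1."""
    return all(x[f] == 1 for f in clause)

def accuracy(rule_set):
    """Predict outcome 1 iff any clause in the rule set covers the sample."""
    correct = sum(
        any(covers(c, x) for c in rule_set) == bool(y) for x, y in samples
    )
    return correct / len(samples)

# Greedy heuristic: repeatedly add the allowed clause that most improves
# training accuracy, stopping when no clause yields a strict improvement.
rule_set = []
improved = True
while improved:
    improved = False
    best = max(allowed, key=lambda c: accuracy(rule_set + [c]))
    if accuracy(rule_set + [best]) > accuracy(rule_set):
        rule_set.append(best)
        improved = True

print("learned rule set:", [sorted(c) for c in rule_set])
print("training accuracy:", accuracy(rule_set))
```

On this toy data the constraint steers the learner away from the spurious single-literal clause while still reaching perfect training accuracy, which mirrors the abstract's claim that user input can compensate for missing context in the training samples.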