Neural Reasoning Networks: Efficient interpretable neural networks with automatic textual explanations
Abstract
In this paper, we present a novel neuro-symbolic AI architecture, Neural Reasoning Networks (NRN), that is scalable and generates interpretable logical reasoning at both the global and sample level. NRNs use connected layers of logical neurons that implement a form of Łukasiewicz logic. A combined gradient descent and bandit-based training procedure jointly optimizes both the structure and the weights of the network, and is implemented as an extension to PyTorch that takes full advantage of GPU scaling and batched training. Evaluation on a diverse set of open-source datasets for tabular learning demonstrates performance that exceeds traditional deep learning (DL) and is on par with state-of-the-art classical machine learning (ML) tree-based approaches, while training faster than other recent methods. Furthermore, NRN is the only method to meet all three challenges for interpretable algorithms introduced by Rudin (2019), namely 1) \textit{logical conditions}, 2) \textit{linear modeling}, and 3) \textit{case-based reasoning}. Our approach thus provides a strong solution for overcoming the interpretability-performance trade-off.
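The abstract does not specify the neuron equations used by NRN. As an illustration only, the following minimal PyTorch sketch shows one common weighted Łukasiewicz-style conjunction neuron of the form $\max(0, \min(1, \beta - \sum_i w_i (1 - x_i)))$; the class name \texttt{LukasiewiczAnd} and the parameters \texttt{weights} and \texttt{beta} are hypothetical and do not come from the paper.

\begin{verbatim}
import torch
import torch.nn as nn

class LukasiewiczAnd(nn.Module):
    """Hypothetical weighted Lukasiewicz conjunction over truth values in [0, 1].

    Computes max(0, min(1, beta - sum_i w_i * (1 - x_i))), a common weighted
    relaxation of the Lukasiewicz t-norm; the exact NRN neuron may differ.
    """

    def __init__(self, n_inputs: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))  # learnable input importance
        self.beta = nn.Parameter(torch.tensor(1.0))         # learnable offset

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_inputs) tensor of truth values in [0, 1]
        slack = (1.0 - x) * self.weights
        return torch.clamp(self.beta - slack.sum(dim=-1), 0.0, 1.0)

if __name__ == "__main__":
    neuron = LukasiewiczAnd(n_inputs=3)
    truth_values = torch.tensor([[1.0, 0.9, 0.8], [1.0, 0.2, 0.9]])
    print(neuron(truth_values))  # output is high only when all inputs are near true
\end{verbatim}

Because the activation is differentiable almost everywhere, such neurons can be trained with standard gradient descent and batched on a GPU, which is consistent with the scaling claims made above.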