Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020
Interpretability of predictive models is becoming increasingly important with their growing adoption in the real world. We present RuleNN, a neural network architecture for learning transparent models for sentence classification. The models take the form of rules expressed in first-order logic, a dialect with well-defined, human-understandable semantics. More precisely, RuleNN learns linguistic expressions (LEs) built on top of predicates extracted using shallow natural language understanding. Our experimental results show that RuleNN outperforms statistical relational learning and other neuro-symbolic methods, and performs comparably with black-box recurrent neural networks. Our user studies confirm that the learned LEs are explainable and capture domain semantics. Moreover, allowing domain experts to modify LEs and instill more domain knowledge leads to human-machine co-creation of models with better performance.
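To make the idea of a learned linguistic expression concrete, the following minimal Python sketch (not the paper's implementation) treats an LE as a conjunction of predicates evaluated over a sentence. The predicate names (dictionary_match, contains_token), the DRUGS dictionary, and the example rule are all hypothetical, chosen only to illustrate the form such rules take.

    # Illustrative sketch only: an LE read as a conjunction of predicates
    # over a shallowly processed sentence. All names below are hypothetical.

    def contains_token(sentence, token):
        """Predicate: the sentence mentions the given token."""
        return token.lower() in sentence.lower().split()

    def dictionary_match(sentence, dictionary):
        """Predicate: the sentence mentions any term from a domain dictionary."""
        words = set(sentence.lower().split())
        return any(term in words for term in dictionary)

    # Hypothetical domain dictionary and LE for a "drug side-effect" label:
    # dictionary_match(sentence, DRUGS) AND contains_token(sentence, "caused")
    DRUGS = {"aspirin", "ibuprofen"}

    def le_rule(sentence):
        return dictionary_match(sentence, DRUGS) and contains_token(sentence, "caused")

    print(le_rule("Aspirin caused severe headaches in two patients"))  # True
    print(le_rule("The patients recovered quickly"))                   # False

Because a rule like this is just a readable conjunction of named predicates, a domain expert can inspect it, swap in a different dictionary, or tighten a condition, which is the kind of human-machine co-creation the abstract describes.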
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Kevin Gu, Eva Tuecke, et al.
ICML 2024
Kshitij P. Fadnis, Nathaniel Mills, et al.
EMNLP 2020