Neural Unification for Logic Reasoning over Natural Language
Abstract
Reasoning over a knowledge base is an important AI task and research area. The problem is especially difficult when the knowledge base is expressed in natural language rather than in a formal logical representation. \cite{clark2020transformers} recently proposed a transformer-based neural architecture that emulates reasoning by answering queries over a knowledge base of facts and rules, where the queries, facts, and rules are all expressed in natural language (English). However, the RuleTaker approach of \cite{clark2020transformers} generalizes well only when the model is trained with deep queries, i.e., questions that require multiple inference steps over the knowledge base. In this paper we propose an architecture that generalizes well to deep queries even when the model is trained only on shallow ones. Our architecture emulates logical unification, using the knowledge base to iteratively reduce the depth of a query. We evaluate the approach on a diverse set of benchmark datasets and achieve state-of-the-art results when training the model only with shallow queries. We also release our source code to the research community for reproducibility.