- Radu Marinescu, Haifeng Qian, et al., NeurIPS 2022
Imprecise Probabilistic Logic
Overview
Neuro-symbolic AI
Neuro-symbolic AI aims to bridge the gap between two of the most studied disciplines in AI: the principled, deductive reasoning of formal logic systems and data-driven neural network architectures. The aim is to deliver a robust AI capable of reasoning, learning and cognitive modelling. Both disciplines come with their own strengths and weaknesses. Formal logic is interpretable, verifiable and, in principle, can generalize to novel tasks. However, it is computationally intensive (if not intractable), requires extensive domain knowledge, and is brittle in the face of even minor inconsistencies. Neural networks, on the other hand, perform well with noisy data, require little human input, and are much more efficient at runtime. However, they require enormous amounts of training data, are vulnerable to adversarial attacks and, in general, are very hard to interpret. Merging the two disciplines may exploit the strengths of each while mitigating their weaknesses.
Our research efforts are concentrated on developing novel neural architectures that facilitate learning and efficient logical reasoning (such as logical neural networks), as well as a novel probabilistic logic framework that can represent and reason with imperfect or incomplete knowledge (such as logical credal networks).
Logical credal networks
Logical credal networks, or LCNs, are a recent probabilistic logic framework specifically designed for effective aggregation of and reasoning over multiple sources of imprecise knowledge. An LCN expresses probability bounds on propositional and first-order logic formulas with few restrictions, together with a Markov condition, similar to that of Bayesian and Markov networks, for capturing certain independence relations. Exact inference in LCNs requires solving a non-linear, non-convex constraint program defined over an exponentially large number of non-negative real-valued variables.
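To give a concrete flavour of what inference over such probability bounds looks like, the following minimal sketch restricts itself to unconditional bounds on propositional formulas, in which case computing tight bounds on a query reduces to a pair of linear programs over the probabilities of the possible worlds. The formulas, bounds and query below are hypothetical; a full LCN with conditional bounds and the Markov condition yields the non-linear, non-convex program described above.

```python
# Illustrative sketch only: bound inference with unconditional probability
# bounds on propositional formulas, solved as two linear programs over the
# probability mass assigned to each possible world.
from itertools import product
from scipy.optimize import linprog

atoms = ["A", "B"]
worlds = list(product([False, True], repeat=len(atoms)))  # all truth assignments

def holds(formula, world):
    """Evaluate a formula (a Python function over a dict of atom values)."""
    return formula(dict(zip(atoms, world)))

# Hypothetical knowledge: 0.6 <= P(A) <= 0.9 and 0.8 <= P(A -> B) <= 1.0
sentences = [
    (lambda e: e["A"], 0.6, 0.9),
    (lambda e: (not e["A"]) or e["B"], 0.8, 1.0),
]

# One LP variable per world: its probability mass.
n = len(worlds)
A_ub, b_ub = [], []
for formula, lo, hi in sentences:
    row = [1.0 if holds(formula, w) else 0.0 for w in worlds]
    A_ub.append([-v for v in row]); b_ub.append(-lo)   # P(formula) >= lo
    A_ub.append(row);               b_ub.append(hi)    # P(formula) <= hi
A_eq, b_eq = [[1.0] * n], [1.0]                         # masses sum to 1

query = lambda e: e["B"]
c = [1.0 if holds(query, w) else 0.0 for w in worlds]

lower = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
upper = -linprog([-v for v in c], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"P(B) in [{lower:.3f}, {upper:.3f}]")
```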
Approximate inference in LCNs can be done using a novel iterative message-passing algorithm called ARIEL. This approach is inspired by the classical belief propagation scheme for graphical models and propagates messages iteratively between the nodes of a factor graph associated with the LCN. The key novelty of our scheme is that the messages contain both lower and upper bounds on the marginal probabilities of the LCN's variables, and these bounds are tightened iteratively. Computing the messages requires solving local non-linear constraint programs that are considerably smaller than the one involved in exact inference.
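As a rough illustration of this style of iterative bound tightening (not the actual ARIEL algorithm, and with hypothetical rules and numbers), the sketch below repeatedly propagates valid probability inequalities across simple implication rules until the intervals stop changing:

```python
# Schematic interval-tightening loop: each rule "P(X -> Y) >= q" tightens the
# [lower, upper] probability intervals of its atoms via the valid inequalities
#   P(Y) >= P(X) + q - 1   and   P(X) <= P(Y) + 1 - q,
# iterating until convergence. ARIEL instead solves small local non-linear
# constraint programs; this is only meant to convey the message-passing flavour.
bounds = {"A": [0.6, 0.9], "B": [0.0, 1.0], "C": [0.0, 1.0]}   # prior bounds
rules = [("A", "B", 0.8), ("B", "C", 0.9)]                      # P(X -> Y) >= q

changed = True
while changed:
    changed = False
    for x, y, q in rules:
        lx, ux = bounds[x]
        ly, uy = bounds[y]
        new_ly = max(ly, lx + q - 1.0)      # forward message tightens lower(Y)
        new_ux = min(ux, uy + 1.0 - q)      # backward message tightens upper(X)
        if new_ly > ly + 1e-12 or new_ux < ux - 1e-12:
            bounds[y][0], bounds[x][1] = new_ly, new_ux
            changed = True

for atom, (lo, hi) in bounds.items():
    print(f"P({atom}) in [{lo:.2f}, {hi:.2f}]")
```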
Our results are quite promising and show that ARIEL produces high-quality solutions compared with the exact inference approach. ARIEL scales to much larger problems than previously considered while maintaining solution quality, allowing us to tackle practical problems, especially first-order logic LCNs with large domains whose groundings can translate into many hundreds of variables.
Potential future directions include extending LCNs to temporal models, further algorithmic innovations for learning LCNs from data, and experiments on a wider array of applications.
Probabilistic logical neural networks
Logical neural networks, or LNNs, are simultaneously capable of both neural network-style learning and classical AI-style reasoning. The LNN is a new neural network architecture with a one-to-one correspondence to a system of logical formulas, in which neurons model a rigorously defined notion of weighted real-valued or classical first-order logic. LNNs can be trained while preserving the classical or real-valued nature of their logical gates by enforcing certain logical constraints on the neural weights during training. They perform bidirectional inference, propagating truth values from each formula's atoms to its root and vice versa, thereby modelling classical inference rules such as modus ponens. By maintaining both lower and upper bounds on the truth values at each of its neurons, an LNN supports the open-world assumption that some logical statements may be true even if their truth values are not known or provable. LNNs can be naturally integrated with classical neural network architectures, thus facilitating the integration of expert domain knowledge.
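The sketch below conveys the flavour of bidirectional bound propagation on a single implication, using unweighted Łukasiewicz connectives for brevity; the actual LNN uses weighted, learnable gates, and the truth values here are hypothetical.

```python
# Minimal sketch of LNN-style bound propagation on A -> B with unweighted
# Lukasiewicz connectives. Truth values are (lower, upper) pairs in [0, 1].

def implies_upward(a, b):
    """Upward pass: bounds on (A -> B) from bounds on A and B."""
    (aL, aU), (bL, bU) = a, b
    # Lukasiewicz implication 1 - a + b is decreasing in a, increasing in b.
    return (min(1.0, 1.0 - aU + bL), min(1.0, 1.0 - aL + bU))

def modus_ponens_downward(impl, a, b):
    """Downward pass: tighten bounds on B from bounds on (A -> B) and A."""
    (iL, _), (aL, _), (bL, bU) = impl, a, b
    return (max(bL, aL + iL - 1.0), bU)   # b >= a + i - 1 in Lukasiewicz logic

A = (0.9, 1.0)             # A is almost certainly true
rule = (0.95, 1.0)         # the rule A -> B holds with high truth value
B = (0.0, 1.0)             # B starts unknown (open-world assumption)

B = modus_ponens_downward(rule, A, B)
print("bounds on B:", B)                              # -> (0.85, 1.0)
print("bounds on A -> B:", implies_upward(A, B))      # upward recomputation
```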
Probabilistic logical neural networks, or PLNNs, are a novel probabilistic extension of LNNs that aims to support probabilistic reasoning tasks such as computing the conditional probabilities of certain neurons given observations of other neurons or parts of the network. In addition, our research efforts are also centered on developing novel lifted inference algorithms for both LNNs and PLNNs, allowing them to scale to large first-order logic knowledge bases.