International Conference on Big Data

Interpretable Explanations for Probabilistic Inference in Markov Logic


Markov Logic Networks (MLNs) represent relational knowledge using a combination of first-order logic and probabilistic models. In this paper, we develop an approach to explain the results of probabilistic inference in MLNs. Unlike approaches such as LIME and SHAP, which explain black-box classifiers, explaining MLN inference is harder since the data is interconnected. We develop an explanation framework that computes importance weights for MLN formulas based on their influence on the marginal likelihood. However, computing these importance weights exactly is a hard problem, and even approximate sampling methods are unreliable when the MLN is large, resulting in non-interpretable explanations. Therefore, we develop an approach that reduces the large MLN into simpler coalitions of formulas that approximately preserve relational dependencies and generates explanations based on these coalitions. We then weigh explanations from different coalitions and combine them into a single explanation. Our experiments illustrate that our approach generates more interpretable explanations on several text processing problems as compared to other state-of-the-art methods.
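The coalition-based idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: `marginal_likelihood` is a stand-in scoring function (a real system would run MLN inference), formula importance is taken as the drop in that score when the formula is removed from a coalition, and coalition-level importances are averaged with assumed per-coalition weights.

```python
from collections import defaultdict

def marginal_likelihood(formulas):
    # Stand-in score; a real system would run probabilistic inference
    # over the MLN induced by these weighted formulas.
    return sum(weight for _, weight in formulas)

def coalition_importance(coalition):
    """Importance of each formula = drop in score when it is removed."""
    base = marginal_likelihood(coalition)
    return {
        name: base - marginal_likelihood([f for f in coalition if f[0] != name])
        for name, _ in coalition
    }

def combine(coalitions, coalition_weights):
    """Weighted average of per-coalition importances -> one explanation."""
    totals, weights = defaultdict(float), defaultdict(float)
    for coalition, cw in zip(coalitions, coalition_weights):
        for name, imp in coalition_importance(coalition).items():
            totals[name] += cw * imp
            weights[name] += cw
    return {name: totals[name] / weights[name] for name in totals}

# Toy usage: two coalitions of (formula, weight) pairs sharing one formula.
c1 = [("Smokes(x) => Cancer(x)", 1.5), ("Friends(x,y) => Smokes(y)", 0.8)]
c2 = [("Smokes(x) => Cancer(x)", 1.5), ("Asthma(x) => !Smokes(x)", 1.1)]
explanation = combine([c1, c2], coalition_weights=[0.6, 0.4])
```

Here each coalition is small enough to score directly, and the final explanation ranks formulas by their combined importance.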


explainable ai, markov logic networks, statistical relational models

InProceedings

UMBC ebiquity