Many important real-world data sets come in the form of graphs or networks, including social networks, knowledge graphs, protein-interaction networks, the World Wide Web and many more. Graph neural networks (GNNs) are connectionist models that capture the dependence structure induced by links via message passing between the nodes of a graph. Like other connectionist models, GNNs lack transparency in their decision-making. As the strong performance of such AI methods leads to their increasing use in everyday life, there is a growing need to understand their decision-making processes. While symbolic methods such as inductive logic learning are inherently explainable, they perform best on relatively small and precise data. Sub-symbolic methods such as graph neural networks can handle large datasets, tolerate noise in real-world data, generally offer high computational performance and scale up more easily.
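To make the message-passing mechanism mentioned above concrete, the following is a minimal sketch of a single GNN layer with mean aggregation over neighbours. It is an illustrative assumption for exposition only; the function name, aggregation scheme and toy graph are not taken from the paper's actual architecture.

```python
import numpy as np

def message_passing_step(adj, features, weight):
    """One illustrative message-passing step: each node averages its
    neighbours' features, then applies a learnable projection and ReLU.

    adj:      (n, n) adjacency matrix
    features: (n, d) node feature matrix
    weight:   (d, d_out) projection matrix
    """
    deg = adj.sum(axis=1, keepdims=True) + 1e-8        # node degrees (avoid div by zero)
    neighbor_mean = (adj @ features) / deg             # aggregate messages from neighbours
    return np.maximum(neighbor_mean @ weight, 0.0)     # project and apply nonlinearity

# Toy graph: three nodes on a path 0-1-2, two-dimensional features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.eye(2)
print(message_passing_step(adj, x, w))
```

Stacking such layers lets information from multi-hop neighbourhoods influence each node's representation, which is what makes the resulting predictions hard to interpret directly.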
We aim to develop a hybrid method that combines GNNs, sub-symbolic explainer methods and inductive logic learning. This enables human-centric and causal explanations by extracting symbolic explanations from the identified decision drivers and enriching them with available background knowledge. With this method, high-accuracy sub-symbolic predictions come with symbolic-level explanations, and the reported preliminary evaluation results indicate an effective solution to the performance vs. explainability trade-off. We evaluate the method on a chemical use case and an industrial cybersecurity use case.
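As a rough illustration of the extraction step in such a hybrid pipeline, the sketch below converts an explainer-identified salient subgraph into Prolog-style ground facts that an inductive logic learner could consume alongside background knowledge. The predicate names, the hand-written subgraph and the molecular example are hypothetical assumptions for exposition, not output of any specific explainer or the paper's actual encoding.

```python
def subgraph_to_facts(graph_id, node_labels, salient_edges):
    """Encode a salient subgraph (decision driver) as ground atoms."""
    facts = [f"node({graph_id}, n{i}, {lab})." for i, lab in node_labels.items()]
    facts += [f"edge({graph_id}, n{u}, n{v})." for u, v in salient_edges]
    return facts

# Toy explanation: suppose the explainer flagged an N-O-O fragment of a
# molecule as the driver of a positive classification.
node_labels = {0: "n", 1: "o", 2: "o"}
salient_edges = [(0, 1), (0, 2)]

facts = subgraph_to_facts("mol_42", node_labels, salient_edges)

# Hypothetical background-knowledge rule naming the chemical substructure.
background = [
    "nitro_group(G) :- node(G, X, n), edge(G, X, Y), node(G, Y, o), "
    "edge(G, X, Z), node(G, Z, o), Y \\= Z."
]

print("\n".join(facts + background))
```

In such a setup, the symbolic learner can then generalise over these facts and the background knowledge to produce human-readable rules that explain the GNN's predictions.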