Event Certifications: logconference.org/LoG/2024/Journal_Track
Abstract: Regulators, researchers, and practitioners recognize the urgency of explainability in artificial intelligence systems, including those based on machine learning for graph-structured data. Despite the large number of proposals, however, a common understanding of what constitutes a good explanation is still lacking: different explainers often arrive at different conclusions on the same problem instance, making it hard for practitioners to choose among them. Furthermore, explainers often produce explanations through opaque logic that is hard to understand and assess, ironically mirroring the black-box nature they aim to elucidate.
Recent proposals in the literature for benchmarking graph-based explainers typically involve embedding specific logic into data, training a black-box model, and then empirically assessing how well the explanation matches the embedded logic, i.e., they test whether the explanation is true to the data. In contrast, we propose a true-to-the-model axiomatic framework for auditing explainers in the task of node classification on graphs.
Our proposal hinges on the fundamental idea that an explainer should discern whether a model relies on a particular feature for classifying a node.
Building on this concept, we develop three types of white-box classifiers with clear internal logic that are relevant in real-world applications. We then formally prove that the set of features capable of inducing a change in the classification coincides exactly with a predefined ground-truth set of important features. This property allows us to use the white-box classifiers to build a testing framework.
We apply this framework to both synthetic and real data and evaluate various state-of-the-art explainers, thus characterizing their behavior. Our findings highlight how explainers often react in a rather counterintuitive fashion to easily overlooked technical details. Our approach offers valuable insights and recommended practices for selecting the right explainer for the task at hand, and for developing new methods to explain graph-learning models.
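To make the auditing idea concrete, here is a minimal Python sketch of the core property described above. It is not taken from the linked repository: the names `IMPORTANT`, `white_box_classify`, and `flip_inducing_features` are hypothetical, features are assumed binary, and graph structure is omitted for brevity. The sketch builds a white-box classifier that depends only on a known feature subset, then checks that exactly those features can flip its prediction.

```python
# A minimal sketch (not the authors' code) of the auditing idea:
# a white-box classifier whose prediction depends only on a known
# subset of features, so the set of features able to flip the
# classification is known a priori. All names are hypothetical.
import numpy as np

IMPORTANT = {0, 2}  # hypothetical ground-truth set of important feature indices

def white_box_classify(x: np.ndarray) -> int:
    """Classify a node from its feature vector using only IMPORTANT features."""
    return int(x[list(IMPORTANT)].sum() > 1.0)

def flip_inducing_features(x: np.ndarray) -> set:
    """Return the feature indices whose perturbation flips the classification."""
    base = white_box_classify(x)
    flips = set()
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] = 1.0 - perturbed[i]  # toggle a binary feature
        if white_box_classify(perturbed) != base:
            flips.add(i)
    return flips

# By construction, only features in IMPORTANT can change the prediction,
# so an explainer that flags features outside this set is demonstrably wrong.
x = np.array([1.0, 1.0, 1.0, 0.0])
assert flip_inducing_features(x) <= IMPORTANT
```

Under these assumptions, any explainer audited against such a classifier has an unambiguous ground truth to match, which is the property the framework exploits.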
Submission Length: Long submission (more than 12 pages of main content)
Code: https://github.com/corradomonti/axiomatic-g-xai
Assigned Action Editor: ~Guillaume_Rabusseau1
Submission Number: 2078