Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs
DOI:
https://doi.org/10.1609/aaaiss.v3i1.31200
Keywords:
XAI, Explainability, Knowledge Graphs, Rule-based, Explanations
Abstract
The use of symbolic knowledge representation and reasoning to resolve the lack of transparency of machine learning classifiers is a research area that has lately gained a lot of traction. In this work, we use knowledge graphs as the underlying framework that provides the terminology for representing explanations of a machine learning classifier's operation. This escapes the constraint of expressing explanations in terms of the features of the raw data, offering a promising solution to the problem of the understandability of explanations. In particular, given a description of the classifier's application domain in the form of a knowledge graph, we introduce a novel theoretical framework for representing explanations of its operation as query-based rules expressed in the terminology of the knowledge graph. This allows opaque black-box classifiers to be explained using terminology and information that is independent of the classifier's features and its domain of application, leading to more understandable explanations, and it also enables the creation of different levels of explanation tailored to the end user.
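The abstract's idea of query-based rules over a knowledge graph can be illustrated with a minimal sketch. The toy graph, the rule, and all identifiers below are invented for illustration; the paper's actual framework uses knowledge-graph queries over a formal terminology, not this simplified pattern matcher:

```python
# Hypothetical toy knowledge graph as subject-predicate-object triples.
# The instances img1/img2 stand for classifier inputs described in
# KG terminology rather than raw features (e.g. pixels).
KG = {
    ("img1", "depicts", "Husky"),
    ("img1", "hasBackground", "Snow"),
    ("img2", "depicts", "Husky"),
    ("img2", "hasBackground", "Grass"),
    ("Husky", "subclassOf", "Canine"),
}

def satisfies(instance, body, kg):
    """Check whether `instance` matches every (predicate, object) pattern
    in the rule body -- a stand-in for evaluating a query over the KG."""
    return all((instance, p, o) in kg for p, o in body)

# Hypothetical rule: "classified as 'wolf' because the image depicts a
# Husky on a snowy background" -- stated in domain terminology, not in
# terms of the classifier's input features.
rule_body = [("depicts", "Husky"), ("hasBackground", "Snow")]

explained = [s for s in ("img1", "img2") if satisfies(s, rule_body, KG)]
# explained == ["img1"]: only img1 matches the rule's body.
```

The key point the sketch conveys is that the rule's vocabulary (`depicts`, `hasBackground`, `Snow`) comes from the knowledge graph, so explanations remain meaningful even when the classifier itself operates on opaque features.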
Published
2024-05-20
How to Cite
Menis Mastromichalakis, O., Dervakos, E., Chortaras, A., & Stamou, G. (2024). Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs. Proceedings of the AAAI Symposium Series, 3(1), 193-202. https://doi.org/10.1609/aaaiss.v3i1.31200
Issue
Section
Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge