Abstract
Principles of analogical reasoning have recently been applied in the context of machine learning, for example to develop new methods for classification and preference learning. In this paper, we argue that, while analogical reasoning is certainly useful for constructing new learning algorithms with high predictive accuracy, it is arguably no less interesting from an interpretability and explainability point of view. More specifically, we take the view that an analogy-based approach is a viable alternative to existing approaches in the realm of explainable AI and interpretable machine learning, and that analogy-based explanations of the predictions produced by a machine learning algorithm can complement similarity-based explanations in a meaningful way. To corroborate these claims, we outline the basic idea of an analogy-based explanation and illustrate its potential usefulness by means of some examples.
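Analogy-based explanations of the kind the abstract refers to are commonly formalized via analogical proportions a : b :: c : d ("a is to b as c is to d"). The following is a minimal sketch of that idea, assuming numeric feature vectors and the arithmetic reading of the proportion (a − b = c − d); the function name and the toy housing data are illustrative assumptions, not taken from the paper:

```python
def in_analogical_proportion(a, b, c, d, tol=1e-9):
    """Check the arithmetic analogical proportion a : b :: c : d,
    i.e. whether a differs from b exactly as c differs from d,
    componentwise on numeric feature vectors."""
    return all(abs((ai - bi) - (ci - di)) <= tol
               for ai, bi, ci, di in zip(a, b, c, d))

# Toy illustration: flats described by (rooms, size in m^2).
a, b = (2, 50), (3, 70)    # a relates to b ...
c, d = (4, 90), (5, 110)   # ... as c relates to d: one room and 20 m^2 more
print(in_analogical_proportion(a, b, c, d))         # True
print(in_analogical_proportion(a, b, c, (6, 120)))  # False
```

Given such a proportion linking a query case to a pair of training examples, one could explain a prediction by pointing to a known pair (c, d) whose relationship mirrors that of the query pair (a, b). This is only the intuition behind the approach; the paper's own formalization may differ.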
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Hüllermeier, E. (2020). Towards Analogy-Based Explanations in Machine Learning. In: Torra, V., Narukawa, Y., Nin, J., Agell, N. (eds.) Modeling Decisions for Artificial Intelligence. MDAI 2020. Lecture Notes in Computer Science, vol. 12256. Springer, Cham. https://doi.org/10.1007/978-3-030-57524-3_17
Print ISBN: 978-3-030-57523-6
Online ISBN: 978-3-030-57524-3