Abstract
With the rapid advancement of artificial intelligence (AI) technology and analytics compute engines, machine learning (ML) models have become increasingly complex. Unfortunately, many of these models are treated as black boxes that offer little interpretability to users. As a result, understanding and trusting the predictions made by such complex ML models have become more challenging. To address this, researchers have developed various frameworks that employ explainable AI methods to enhance the interpretability and explainability of ML models, thereby increasing the trustworthiness of their predictions. In this study, we propose a methodology called Local Interpretable Model Agnostic Shap Explanations (LIMASE). This ML explanation technique leverages Shapley values within the LIME paradigm to achieve several objectives: (a) it explains the prediction of any model using a locally faithful and interpretable decision tree model, with the Tree Explainer employed to calculate the Shapley values and produce visually interpretable explanations; (b) it provides visually interpretable global explanations by plotting local explanations for multiple data points; (c) it offers a solution to the submodular optimization problem; (d) it provides insights into regional interpretation; and (e) it enables faster computation than the kernel explainer. By proposing the LIMASE methodology, this work contributes to the field of ML model interpretability and provides a practical solution to the challenges posed by complex and opaque ML models. The proposed approach empowers users to gain a deeper understanding of model predictions through visually interpretable explanations at both the local and global levels. Overall, this study aims to bridge the gap between the complexity of ML models and the need for interpretability, ultimately enhancing trust and usability in AI-driven applications.
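To make the described pipeline concrete, the sketch below illustrates the general idea in Python: perturb the instance LIME-style, fit a proximity-weighted shallow decision tree as the locally faithful surrogate, and apply SHAP's TreeExplainer to that surrogate to obtain Shapley-value attributions. This is a minimal sketch under stated assumptions, not the authors' implementation; names such as `limase_explain`, `black_box_predict`, and `X_background` are illustrative placeholders.

```python
# Illustrative sketch of the LIMASE idea: LIME-style local sampling,
# a decision-tree surrogate, and Shapley values via shap.TreeExplainer.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import shap  # assumes the shap package is installed


def limase_explain(black_box_predict, x, X_background, n_samples=1000, kernel_width=None):
    """Explain black_box_predict at instance x (1-D array) via a local tree surrogate."""
    rng = np.random.default_rng(0)
    n_features = x.shape[0]
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(n_features)  # LIME's default kernel width

    # LIME-style perturbation: sample around x using the background data's spread.
    scale = X_background.std(axis=0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, n_features))

    # Black-box predictions on the perturbed samples (for a classifier, pass
    # a function returning the probability of the class of interest).
    y_z = black_box_predict(Z)

    # Proximity weights: closer perturbations matter more (exponential kernel).
    dist = np.linalg.norm((Z - x) / (scale + 1e-12), axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))

    # Locally faithful, interpretable surrogate: a shallow decision tree.
    surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
    surrogate.fit(Z, y_z, sample_weight=weights)

    # Fast Shapley values for the tree surrogate via TreeExplainer.
    explainer = shap.TreeExplainer(surrogate)
    return explainer.shap_values(x.reshape(1, -1))  # local feature attributions
```

In this sketch the decision tree plays the role of LIME's interpretable local model, and TreeExplainer replaces the kernel explainer, which is what makes the computation faster while keeping the explanation model-agnostic.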
Author information
Contributions
P. Sai Ram Aditya conceived the idea, performed the data analysis, and wrote the manuscript. Mayukha Pal conceptualized the project and contributed to idea generation, results analysis and discussion, guidance, and review of the manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The data used in our analysis were obtained from a publicly available database intended for academic research purposes.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Parisineni, S.R.A., Pal, M. Enhancing trust and interpretability of complex machine learning models using local interpretable model agnostic shap explanations. Int J Data Sci Anal 18, 457–466 (2024). https://doi.org/10.1007/s41060-023-00458-w