Enhancing trust and interpretability of complex machine learning models using local interpretable model agnostic shap explanations

  • Regular Paper
  • Published:
International Journal of Data Science and Analytics

Abstract

With the rapid advancement of artificial intelligence (AI) technology and analytics compute engines, machine learning (ML) models have become increasingly complex. Many of these models are treated as black boxes that offer users little interpretability, making it harder to understand and trust their predictions. Researchers have therefore developed frameworks that employ explainable AI methods to improve the interpretability and explainability of ML models and, in turn, the trustworthiness of their predictions. In this study, we propose a methodology called Local Interpretable Model Agnostic Shap Explanations (LIMASE). This ML explanation technique leverages Shapley values within the LIME paradigm to achieve several objectives: (a) it explains the prediction of any model by fitting a locally faithful, interpretable decision tree and using the Tree Explainer to compute Shapley values, yielding visually interpretable explanations; (b) it provides visually interpretable global explanations by plotting local explanations for multiple data points; (c) it offers a solution to the submodular optimization problem; (d) it provides insights into regional interpretation; and (e) it computes explanations faster than the kernel explainer. By proposing LIMASE, this work contributes to the field of ML model interpretability and offers a practical way to address the challenges posed by complex, opaque models. The approach enables users to gain a deeper understanding of model predictions through visually interpretable explanations at both the local and global levels. Overall, this study aims to bridge the gap between the complexity of ML models and the need for interpretability, ultimately enhancing trust and usability in AI-driven applications.
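
To make the pipeline described above concrete, the sketch below shows one way the LIMASE idea could be realised in Python with scikit-learn and the shap library: perturb the neighbourhood of a single instance, label the perturbations with the black-box model, fit a locally weighted decision-tree surrogate (as in LIME), and compute Shapley values for that surrogate with TreeExplainer. This is a minimal illustration only; the function name limase_local_explanation, the black_box_model argument, and all parameter choices (noise scale, kernel width, tree depth) are assumptions, not the authors' implementation.

import numpy as np
import shap
from sklearn.tree import DecisionTreeRegressor


def limase_local_explanation(black_box_model, X, instance,
                             n_samples=1000, kernel_width=0.75):
    """Explain one prediction of a black-box classifier with a LIME-style
    local decision-tree surrogate whose Shapley values come from TreeExplainer.
    Illustrative sketch only; not the authors' implementation."""
    rng = np.random.default_rng(0)
    scale = X.std(axis=0) + 1e-12  # feature-wise noise scale from the data

    # 1) Perturb the neighbourhood of the instance with Gaussian noise.
    Z = instance + rng.normal(0.0, scale, size=(n_samples, X.shape[1]))

    # 2) Label the perturbations with the black-box model (class-1 probability).
    y_z = black_box_model.predict_proba(Z)[:, 1]

    # 3) Weight samples by proximity to the instance (exponential kernel) and
    #    fit a locally faithful, interpretable decision-tree surrogate.
    distances = np.linalg.norm((Z - instance) / scale, axis=1)
    width = kernel_width * np.sqrt(X.shape[1])
    weights = np.exp(-(distances ** 2) / (width ** 2))
    surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
    surrogate.fit(Z, y_z, sample_weight=weights)

    # 4) Compute Shapley values for the surrogate with TreeExplainer.
    explainer = shap.TreeExplainer(surrogate)
    return explainer.shap_values(instance.reshape(1, -1))

Explaining the tree surrogate with TreeExplainer, rather than querying the original model through KernelExplainer, is what gives the speed advantage the abstract mentions; collecting such local explanations for many instances and plotting them (e.g. with shap.summary_plot) then produces the kind of global, visually interpretable view described above.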

Author information

Contributions

P. Sai Ram Aditya conceived the idea, performed the data analysis, and wrote the manuscript. Mayukha Pal conceptualized the project and contributed to idea generation, results analysis and discussion, guidance, and review of the manuscript.

Corresponding author

Correspondence to Mayukha Pal.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The data used in our analysis were obtained from a publicly available database released for academic research purposes.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Parisineni, S.R.A., Pal, M. Enhancing trust and interpretability of complex machine learning models using local interpretable model agnostic shap explanations. Int J Data Sci Anal 18, 457–466 (2024). https://doi.org/10.1007/s41060-023-00458-w

