
Feature Importance versus Feature Influence and What It Signifies for Explainable AI

  • Conference paper
Explainable Artificial Intelligence (xAI 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1901)

Abstract

When used in the context of decision theory, feature importance expresses how much changing the value of a feature can change the model outcome (or the utility of the outcome), compared to other features. Feature importance should not be confused with the feature influence used by most state-of-the-art post-hoc Explainable AI methods: contrary to feature importance, feature influence is measured against a reference level or baseline. The Contextual Importance and Utility (CIU) method provides a unified definition of global and local feature importance that is also applicable to post-hoc explanations, where the value utility concept provides an instance-level assessment of how favorable or unfavorable a feature value is for the outcome. The paper shows how CIU can be applied to both global and local explainability, assesses the fidelity and stability of different methods, and shows how explanations that use contextual importance and contextual utility can be more expressive and flexible than explanations based on influence alone.

The work is partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
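
To make the distinction concrete, the following is a minimal, model-agnostic sketch of how contextual importance (CI) and contextual utility (CU) can be estimated by Monte Carlo sampling. It is not the API of the 'ciu' package [10]; the function name `ciu_estimate`, the `predict` callable and the toy logistic model are illustrative assumptions. CI is estimated as the share of the output's absolute range that the selected features can span in this context, and CU as where the instance's actual output falls within that span.

```python
import numpy as np

def ciu_estimate(predict, x, idx, ranges, n_samples=1000,
                 out_min=0.0, out_max=1.0, rng=None):
    """Monte Carlo estimate of Contextual Importance (CI) and
    Contextual Utility (CU) for the feature set `idx` of instance `x`.

    predict  -- callable mapping a 2-D input array to a 1-D output
                (e.g. the probability of one class)
    x        -- 1-D array: the instance (context) being explained
    idx      -- indices of the features that are varied jointly
    ranges   -- (lo, hi) value range for every feature of x
    out_min, out_max -- absolute output range; 0 and 1 suit a class probability
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    # Vary the chosen features over their value ranges; keep the rest
    # fixed at the instance's own values (the "context").
    samples = np.tile(x, (n_samples, 1))
    for i in idx:
        lo, hi = ranges[i]
        samples[:, i] = rng.uniform(lo, hi, size=n_samples)
    y = predict(samples)
    cmin, cmax = float(np.min(y)), float(np.max(y))  # contextual output range
    y0 = float(predict(x[None, :])[0])               # output for the instance itself
    # CI: share of the output's absolute range these features can span here.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: where the actual output sits within the contextual range,
    # i.e. how favorable the current feature values are for the outcome.
    cu = (y0 - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Hypothetical two-feature model: a logistic score on [0, 1] inputs.
predict = lambda X: 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))
x = np.array([0.3, 0.8])
ranges = [(0.0, 1.0), (0.0, 1.0)]
print(ciu_estimate(predict, x, idx=[0], ranges=ranges))  # CI and CU of feature 1
print(ciu_estimate(predict, x, idx=[1], ranges=ranges))  # CI and CU of feature 2
```

The two numbers answer different questions: CI says how much the selected features could change the outcome in this context, while CU says how favorable their current values are, which is what makes CI/CU explanations more expressive than a single influence value measured against a baseline.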


Notes

  1. Intermediate concepts also deal with dependencies between features. However, in this paper we assume that the features are independent, as is also the case for the Shapley value and LIME.

  2. The approach in [10] is applicable to any feature set \(\{i\}\) and \(\{I\}\), including \(1,\dots,N\); see the sketch after these notes.
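
As a usage note on footnote 2, the `ciu_estimate` sketch above takes the perturbed index set as a parameter, so the same code applies unchanged to joint feature sets; the three-feature model below is again a hypothetical example.

```python
# Continues the ciu_estimate sketch above with a hypothetical
# three-feature logistic model.
predict3 = lambda X: 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2])))
x3 = np.array([0.3, 0.8, 0.5])
ranges3 = [(0.0, 1.0)] * 3

ci_pair, cu_pair = ciu_estimate(predict3, x3, idx=[0, 1], ranges=ranges3)   # coalition of features 1 and 2
ci_all, cu_all = ciu_estimate(predict3, x3, idx=[0, 1, 2], ranges=ranges3)  # the full feature set
# When idx covers all features, the contextual output range approaches the
# absolute output range, so CI tends towards 1.
```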

References

  1. Andrews, R., Diederich, J., Tickle, A.B.: Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowl.-Based Syst. 8(6), 373–389 (1995)

  2. Biecek, P., Burzykowski, T.: Explanatory Model Analysis. Chapman and Hall/CRC, New York (2021)

  3. Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001)

  4. Dyer, J.S.: MAUT — multiattribute utility theory. In: Multiple Criteria Decision Analysis: State of the Art Surveys. ISORMS, vol. 78, pp. 265–292. Springer, New York (2005). https://doi.org/10.1007/0-387-23081-5_7

  5. Fishburn, P.C.: Utility Theory and Decision Theory, pp. 303–312. Palgrave Macmillan, London (1990)

  6. Fisher, A., Rudin, C., Dominici, F.: All models are wrong, but many are useful: learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res. 20(177), 1–81 (2019)

  7. Främling, K.: Explaining results of neural networks by contextual importance and utility. In: Andrews, R., Diederich, J. (eds.) Rules and networks: Proceedings of the Rule Extraction from Trained Artificial Neural Networks Workshop, AISB 1996 Conference, Brighton, UK (1996)

  8. Främling, K.: Modélisation et apprentissage des préférences par réseaux de neurones pour l’aide à la décision multicritère [Modeling and learning of preferences with neural networks for multiple criteria decision aid]. PhD thesis, INSA de Lyon (1996)

  9. Främling, K.: Decision theory meets explainable AI. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds.) EXTRAAMAS 2020. LNCS, vol. 12175, pp. 57–74. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51924-7_4

  10. Främling, K.: Contextual importance and utility in R: the ‘ciu’ package. In: Proceedings of the 1st Workshop on Explainable Agency in Artificial Intelligence, at the 35th AAAI Conference on Artificial Intelligence, 2–9 February 2021, pp. 110–114 (2021)

  11. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and Value Trade-Offs. Cambridge University Press, Cambridge (1993)

  12. Kuhn, M.: Building predictive models in R using the caret package. J. Stat. Softw. 28(5), 1–26 (2008)

  13. Kumar, I.E., Venkatasubramanian, S., Scheidegger, C., Friedler, S.: Problems with Shapley-value-based explanations as feature importance measures. In: International Conference on Machine Learning, pp. 5491–5500. PMLR (2020)

  14. Lundberg, S.M., Erion, G.G., Lee, S.I.: Consistent individualized feature attribution for tree ensembles. CoRR abs/1802.03888 (2018)

  15. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30, pp. 4765–4774. Curran Associates, Inc. (2017)

  16. Molnar, C., Casalicchio, G., Bischl, B.: iml: an R package for interpretable machine learning. J. Open Source Softw. 3(26), 786 (2018)

  17. Pedersen, T.L., Benesty, M.: lime: Local Interpretable Model-Agnostic Explanations (2019). R package version 0.5.1

  18. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)

  19. Shapley, L.S.: A value for n-person games. In: Kuhn, H.W., Tucker, A.W. (eds.) Contributions to the Theory of Games II, pp. 307–317. Princeton University Press, Princeton (1953)

  20. Shortliffe, E.H., Davis, R., Axline, S.G., Buchanan, B.G., Green, C., Cohen, S.N.: Computer-based consultations in clinical therapeutics: explanation and rule acquisition capabilities of the MYCIN system. Comput. Biomed. Res. 8(4), 303–320 (1975)

  21. Štrumbelj, E., Kononenko, I.: Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 41(3), 647–665 (2014)


Author information

Corresponding author

Correspondence to Kary Främling.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Främling, K. (2023). Feature Importance versus Feature Influence and What It Signifies for Explainable AI. In: Longo, L. (ed.) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1901. Springer, Cham. https://doi.org/10.1007/978-3-031-44064-9_14

  • DOI: https://doi.org/10.1007/978-3-031-44064-9_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44063-2

  • Online ISBN: 978-3-031-44064-9

  • eBook Packages: Computer Science, Computer Science (R0)
