
DOI: 10.5555/3295222.3295230

A unified approach to interpreting model predictions

Published: 04 December 2017

Abstract

Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
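The "unique solution" the abstract refers to is the Shapley value from cooperative game theory: each feature's importance is its average marginal contribution over all subsets of the other features. The following is a minimal, exponential-time sketch of that computation for illustration only; it is not the paper's Kernel SHAP approximation, and the toy linear model, `value_fn`, and variable names are assumptions for this example.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature subsets.

    value_fn maps a frozenset of "present" feature indices to the
    model output with only those features present. Enumerating all
    2^n subsets is exponential in n_features, so this is
    illustrative only.
    """
    players = list(range(n_features))
    phi = [0.0] * n_features
    n_fact = factorial(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                s = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n_features - size - 1) / n_fact
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Toy "model": f(x) = 2*x0 + 1*x1, where an absent feature is set to 0.
x = [3.0, 5.0]
coef = [2.0, 1.0]

def value_fn(present):
    return sum(coef[j] * x[j] for j in present)

print(shapley_values(value_fn, 2))  # → [6.0, 5.0]
```

For this independent linear model the attributions reduce to `coef[i] * x[i]`, and they sum to the full prediction f(x) = 11, the "local accuracy" (efficiency) property that characterizes the unique solution in the paper's class.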






Published In

cover image Guide Proceedings
NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems
December 2017
7104 pages

Publisher

Curran Associates Inc.

Red Hook, NY, United States


Qualifiers

  • Article


Article Metrics

  • Downloads (last 12 months): 7,242
  • Downloads (last 6 weeks): 1,204
Reflects downloads up to 23 Nov 2024


Cited By

  • (2025) Autonomous intrusion detection for IoT: a decentralized and privacy preserving approach. International Journal of Information Security 24:1. DOI: 10.1007/s10207-024-00926-9. Online publication date: 1-Feb-2025.
  • (2024) Complex-Path: Effective and Efficient Node Ranking with Paths in Billion-Scale Heterogeneous Graphs. Proceedings of the VLDB Endowment 17:12 (3973-3986). DOI: 10.14778/3685800.3685820. Online publication date: 1-Aug-2024.
  • (2024) A Roadmap of Explainable Artificial Intelligence: Explain to Whom, When, What and How? ACM Transactions on Autonomous and Adaptive Systems 19:4 (1-40). DOI: 10.1145/3702004. Online publication date: 5-Nov-2024.
  • (2024) A Survey on Advanced Persistent Threat Detection: A Unified Framework, Challenges, and Countermeasures. ACM Computing Surveys 57:3 (1-36). DOI: 10.1145/3700749. Online publication date: 11-Nov-2024.
  • (2024) Predicting sales in live streaming: An interpretable framework. Proceedings of the 2024 7th International Conference on Information Management and Management Science (58-66). DOI: 10.1145/3695652.3695669. Online publication date: 23-Aug-2024.
  • (2024) A Novel Tree-Based Method for Interpretable Reinforcement Learning. ACM Transactions on Knowledge Discovery from Data 18:9 (1-22). DOI: 10.1145/3695464. Online publication date: 9-Sep-2024.
  • (2024) Explainable AI in Practice: Practitioner Perspectives on AI for Social Good and User Engagement in the Global South. Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (1-16). DOI: 10.1145/3689904.3694707. Online publication date: 29-Oct-2024.
  • (2024) Generalizable Error Modeling for Human Data Annotation: Evidence From an Industry-Scale Search Data Annotation Program. Journal of Data and Information Quality 16:3 (1-15). DOI: 10.1145/3688394. Online publication date: 26-Sep-2024.
  • (2024) RAI Guidelines: Method for Generating Responsible AI Guidelines Grounded in Regulations and Usable by (Non-)Technical Roles. Proceedings of the ACM on Human-Computer Interaction 8:CSCW2 (1-28). DOI: 10.1145/3686927. Online publication date: 8-Nov-2024.
  • (2024) Enhancing Digital Agriculture with XAI: Case Studies on Tabular Data and Future Directions. Companion Proceedings of the 26th International Conference on Multimodal Interaction (211-217). DOI: 10.1145/3686215.3689201. Online publication date: 4-Nov-2024.
