
DOI: 10.1145/3490099.3511139

Contextualization and Exploration of Local Feature Importance Explanations to Improve Understanding and Satisfaction of Non-Expert Users

Published: 22 March 2022

Abstract

The increasing use of complex machine learning models for decision-making has raised interest in explainable artificial intelligence (XAI). In this work, we focus on the effects of providing accessible and useful explanations to non-expert users. More specifically, we propose generic XAI design principles for contextualizing and allowing the exploration of explanations based on local feature importance. To evaluate the effectiveness of these principles in improving users’ objective understanding and satisfaction, we conduct a controlled user study with 80 participants using 4 different versions of our XAI system, in the context of an insurance scenario. Our results show that the contextualization principles we propose significantly improve users’ satisfaction and come close to having a significant effect on users’ objective understanding. They also show that the exploration principles we propose improve users’ satisfaction. On the other hand, the interaction of these principles does not appear to improve either dimension of users’ understanding.
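
For readers unfamiliar with the explanation format the study builds on, the sketch below shows one common way to produce a local feature importance explanation for a single tabular prediction, using SHAP values (Lundberg and Lee, 2017) on a hypothetical insurance-style dataset. This is an illustrative assumption, not the authors' system: the feature names, data, and model are placeholders, and the paper's contribution concerns how such per-feature contributions are contextualized and explored in the interface, not how they are computed.

```python
# Minimal sketch (hypothetical data and model, not the authors' system):
# a local feature importance explanation for one insurance-style prediction.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tabular data standing in for an insurance dataset.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "driver_age": rng.integers(18, 80, 500),
    "vehicle_age": rng.integers(0, 20, 500),
    "annual_mileage": rng.integers(2_000, 40_000, 500),
    "past_claims": rng.integers(0, 5, 500),
})
# Synthetic target: expected claim cost.
y = 120 * X["past_claims"] + 0.02 * X["annual_mileage"] + rng.normal(0, 50, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Local explanation: per-feature contributions to one individual prediction,
# i.e. the kind of explanation a non-expert user would see for their own case.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])[0]
for name, value in sorted(zip(X.columns, contributions),
                          key=lambda p: abs(p[1]), reverse=True):
    print(f"{name:>15}: {value:+.1f}")
```

The design principles evaluated in the paper operate on top of such per-feature contributions, adding context (e.g., how a value compares to the rest of the population) and interactive exploration rather than changing the underlying attribution method.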



      Published In

      IUI '22: Proceedings of the 27th International Conference on Intelligent User Interfaces
      March 2022
      888 pages
ISBN: 9781450391443
DOI: 10.1145/3490099


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. explainable AI
      2. human-centered AI methods
      3. interface design
      4. user studies

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      IUI '22

      Acceptance Rates

      Overall Acceptance Rate 746 of 2,811 submissions, 27%



