DOI: 10.1145/3511047.3537678

Creating a User Model to Support User-specific Explanations of AI Systems

Published: 04 July 2022

Abstract

In this paper, we present a framework that supports providing user-specific explanations of AI systems. This is achieved by proposing a particular approach to modeling a user, one that enables a decision procedure to reason about how much detail to provide in an explanation. As one novel aspect of our design, we also clarify the circumstances under which it is best not to provide an explanation at all. While transparency of black-box AI systems is an important aim for ethical AI, efforts to date are often one-size-fits-all. Our position is that more attention should be paid to offering explanations that are context-specific, and our model takes an important step toward achieving that aim.
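
To make the idea concrete, the following sketch (not taken from the paper) shows one way a user model could drive a decision procedure that chooses how much explanation detail to provide, including withholding the explanation entirely. The UserModel attributes, the DetailLevel categories, the decision_stakes input, and the thresholds are all illustrative assumptions rather than the authors' actual formulation.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DetailLevel(Enum):
    """Possible outcomes of the decision procedure (illustrative categories)."""
    NONE = auto()      # withhold the explanation entirely
    BRIEF = auto()     # short, non-technical summary
    DETAILED = auto()  # full, feature-level account


@dataclass
class UserModel:
    """A toy user model; these attributes are hypothetical, not the paper's schema."""
    expertise: float        # 0.0 (novice) .. 1.0 (expert)
    trust_in_system: float  # 0.0 (distrustful) .. 1.0 (fully trusting)
    time_pressure: float    # 0.0 (relaxed) .. 1.0 (urgent)


def choose_detail(user: UserModel, decision_stakes: float) -> DetailLevel:
    """Decide how much explanation to give for a single AI decision.

    decision_stakes: 0.0 (trivial) .. 1.0 (high-stakes); an assumed input.
    """
    # If the decision is low-stakes, the user already trusts the system,
    # and they are under time pressure, an explanation may only add noise.
    if decision_stakes < 0.2 and user.trust_in_system > 0.8 and user.time_pressure > 0.7:
        return DetailLevel.NONE
    # Experts facing consequential decisions can absorb a full account.
    if user.expertise > 0.6 and decision_stakes > 0.5:
        return DetailLevel.DETAILED
    # Default: a brief, non-technical justification.
    return DetailLevel.BRIEF


if __name__ == "__main__":
    commuter = UserModel(expertise=0.2, trust_in_system=0.9, time_pressure=0.9)
    print(choose_detail(commuter, decision_stakes=0.1))  # DetailLevel.NONE
```

The point of the sketch is only that explanation detail becomes a function of user attributes and context, which is the kind of reasoning the paper's decision procedure is meant to support.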



Information

Published In

UMAP '22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization
July 2022
409 pages
ISBN: 9781450392327
DOI: 10.1145/3511047
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Ethical AI
  2. Explanation
  3. User Modeling

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Funding Sources

  • NSERC CREATE

Conference

UMAP '22

Acceptance Rates

Overall Acceptance Rate 162 of 633 submissions, 26%

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 48
  • Downloads (last 6 weeks): 1
Reflects downloads up to 23 Nov 2024


Cited By

  • (2024) Devising Scrutable User Models for Time Management Assistants. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 250–255. https://doi.org/10.1145/3631700.3665182. Online publication date: 27 June 2024.
  • (2024) Promoting Green Fashion Consumption in Recommender Systems. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 50–54. https://doi.org/10.1145/3631700.3664922. Online publication date: 27 June 2024.
  • (2024) Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review. IEEE Access, 12, 106385–106414. https://doi.org/10.1109/ACCESS.2024.3437190. Online publication date: 2024.
  • (2023) Service-based Presentation of Multimodal Information for the Justification of Recommender Systems Results. Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, 46–53. https://doi.org/10.1145/3565472.3592962. Online publication date: 18 June 2023.
