DOI: 10.1145/3450614.3464479

On-demand Personalized Explanation for Transparent Recommendation

Published: 22 June 2021

Abstract

The literature on explainable recommendations is already rich. In this paper, we aim to shed light on an aspect that remains under-explored in this area of research, namely providing personalized explanations. To address this gap, we developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations with varying levels of detail to meet the needs of different types of end-users. The results of a preliminary qualitative user study demonstrated potential benefits in terms of user satisfaction with the explainable recommender system. Our work contributes to the literature on explainable recommendation by exploring the potential of on-demand personalized explanations, and to practice by offering suggestions for the design and appropriate use of personalized explanation interfaces in recommender systems.
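
To illustrate the idea of explanations that are generated only when the user asks for them and at a user-selected level of detail, the following minimal Python sketch builds an interest-based explanation at three levels. It is not the paper's implementation; all names (DetailLevel, InterestModel, explain) and the scoring scheme are hypothetical.

    # Minimal sketch (hypothetical, not the RIMA implementation): an on-demand
    # explanation function whose output grows more detailed with the requested level.
    from dataclasses import dataclass
    from enum import Enum


    class DetailLevel(Enum):
        BASIC = 1         # one-line "why you see this" hint
        INTERMEDIATE = 2  # adds the matching user interests
        ADVANCED = 3      # adds the scores behind the match

    @dataclass
    class InterestModel:
        """A transparent user model: interest keywords with weights."""
        interests: dict[str, float]

    def explain(item_keywords: dict[str, float],
                user: InterestModel,
                level: DetailLevel) -> str:
        """Build an explanation only when the user requests it (on demand)."""
        # Overlap between the item's keywords and the user's interest model.
        shared = {k: item_keywords[k] * w
                  for k, w in user.interests.items() if k in item_keywords}
        top = sorted(shared, key=shared.get, reverse=True)[:3]

        if level is DetailLevel.BASIC:
            return "Recommended because it matches your interests."
        if level is DetailLevel.INTERMEDIATE:
            return f"Recommended because it matches your interests in {', '.join(top)}."
        # ADVANCED: expose the underlying scores for full transparency.
        details = ", ".join(f"{k} ({shared[k]:.2f})" for k in top)
        return f"Recommended via interest overlap: {details}."

    if __name__ == "__main__":
        user = InterestModel({"recommender systems": 0.9, "explainability": 0.8})
        item = {"explainability": 0.7, "user modeling": 0.5}
        print(explain(item, user, DetailLevel.ADVANCED))

The design point illustrated here is that the explanation text is produced only on request, and the amount of user-model detail exposed grows with the level the end-user chooses.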




    Published In

    UMAP '21: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
    June 2021
    431 pages
    ISBN: 9781450383677
    DOI: 10.1145/3450614
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. Personalized Explanations
    2. Recommendation Explanations
    3. Transparency
    4. User Modeling

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    UMAP '21

    Acceptance Rates

    Overall Acceptance Rate 162 of 633 submissions, 26%


    Article Metrics

    • Downloads (last 12 months): 124
    • Downloads (last 6 weeks): 15
    Reflects downloads up to 25 Nov 2024.

    Cited By

    • (2024) User-Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review. Human Behavior and Emerging Technologies 2024:1. DOI: 10.1155/2024/4628855. Online publication date: 15-Jul-2024.
    • (2024) Balanced Explanations in Recommender Systems. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 25-29. DOI: 10.1145/3631700.3664915. Online publication date: 27-Jun-2024.
    • (2024) Designing Effective Warnings for Manipulative Designs in Mobile Applications. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 12-17. DOI: 10.1145/3627043.3659550. Online publication date: 22-Jun-2024.
    • (2023) Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System. Information 14:7, 401. DOI: 10.3390/info14070401. Online publication date: 14-Jul-2023.
    • (2023) Beyond Self-diagnosis: How a Chatbot-based Symptom Checker Should Respond. ACM Transactions on Computer-Human Interaction 30:4, 1-44. DOI: 10.1145/3589959. Online publication date: 11-Sep-2023.
    • (2023) Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System. International Journal of Human–Computer Interaction 40:22, 7248-7269. DOI: 10.1080/10447318.2023.2262797. Online publication date: 15-Oct-2023.
    • (2022) Interactive Visualizations of Transparent User Models for Self-Actualization: A Human-Centered Design Approach. Multimodal Technologies and Interaction 6:6, 42. DOI: 10.3390/mti6060042. Online publication date: 30-May-2022.
    • (2022) Exploring the Effects of Interactive Dialogue in Improving User Control for Explainable Online Symptom Checkers. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3491101.3519668. Online publication date: 27-Apr-2022.
    • (2022) Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations: Understanding the Factors Influencing Laypeople's Fairness Perceptions of Algorithmic Decisions. International Journal of Human–Computer Interaction 39:7, 1455-1482. DOI: 10.1080/10447318.2022.2095705. Online publication date: 19-Jul-2022.
