Interacting with Explanations through Critiquing
Diego Antognini, Claudiu Musat, Boi Faltings
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 515-521.
https://doi.org/10.24963/ijcai.2021/72
Using personalized explanations to support recommendations has been shown to increase trust and perceived quality. However, to actually obtain better recommendations, there needs to be a means for users to modify the recommendation criteria by interacting with the explanation. We present a novel technique using aspect markers that learns to generate personalized explanations of recommendations from review texts, and we show that human users significantly prefer these explanations over those produced by state-of-the-art techniques.
Our work's most important innovation is that it allows users to react to a recommendation by critiquing its textual explanation: removing aspects they dislike or that are no longer relevant, or symmetrically adding aspects that are of interest. The system updates its user model and the resulting recommendations according to the critique. This interaction is enabled by a novel unsupervised method for single- and multi-step critiquing with textual explanations. Empirical results show that our system adapts well to the preferences expressed through multi-step critiquing and generates consistent explanations.
Keywords:
AI Ethics, Trust, Fairness: Explainability
Data Mining: Recommender Systems
Humans and AI: Personalization and User Modeling