
Visualization for Recommendation Explainability: A Survey and New Perspectives

Published: 02 August 2024

Abstract

Providing system-generated explanations for recommendations is an important step toward transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the past two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper provides a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems along four dimensions: explanation aim, explanation scope, explanation method, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is, using visualizations as a display style for explanations. As a result, we derive a set of guidelines for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.

References

[1]
Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–18.
[2]
Mohammed Alshammari, Olfa Nasraoui, and Scott Sanders. 2019. Mining semantic knowledge graphs to add explainability to black box recommender systems. IEEE Access 7 (2019), 110563–110579.
[3]
Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine 35, 4 (2014), 105–120.
[4]
Ivana Andjelkovic, Denis Parra, and John O’Donovan. 2019. Moodplay: Interactive music recommendation based on Artists’ mood similarity. International Journal of Human-Computer Studies 121 (2019), 142–159. DOI:
[5]
Alejandro B. Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. 2020. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 (2020), 82–115.
[6]
Hernan Badenes, Mateo N. Bengualid, Jilin Chen, Liang Gou, Eben Haber, Jalal Mahmud, Jeffrey W. Nichols, Aditya Pal, Jerald Schoudt, Barton A. Smith, Ying Xuan, Huahai Yang, and Michelle X. Zhou. 2014. System U: Automatically deriving personality traits from social media for people recommendation. In Proceedings of the 8th ACM Conference on Recommender Systems (RecSys ’14). ACM, New York, NY, 373–374. DOI:
[7]
Fedor Bakalov, Marie-Jean Meurs, Birgitta König-Ries, Bahar Sateli, René Witte, Greg Butler, and Adrian Tsang. 2013. An approach to controlling user models and personalization effects in recommender systems. In Proceedings of the International Conference on Intelligent User Interfaces (IUI ’13). ACM, New York, NY, 49–56. DOI:
[8]
Krisztian Balog, Filip Radlinski, and Shushan Arakelyan. 2019. Transparent, scrutable and explainable user models for personalized recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 265–274.
[9]
Rohan Bansal, Jordan Olmstead, Uri Bram, Robert Cottrell, Gabriel Reder, and Jaan Altosaar. 2020. Recommending interesting writing using a controllable, explanation-aware visual interface. In IntRS@ RecSys. 77–80.
[10]
Jordan Barria-Pineda and Peter Brusilovsky. 2019. Making educational recommendations transparent through a fine-grained open learner model. In Proceedings of the 1st International Workshop on Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies at the 24th ACM Conference on Intelligent User Interfaces, (IUI 2019), ACM, New York, NY, 1–5.
[11]
Astrid Bertrand, James R. Eagan, and Winston Maxwell. 2023. Questioning the ability of feature-based explanations to empower non-experts in robo-advised financial decision-making. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’23). ACM, New York, NY, 943–958. DOI:
[12]
Alsallakh Bilal, Amin Jourabloo, Mao Ye, Xiaoming Liu, and Liu Ren. 2017. Do convolutional neural networks learn class hierarchy? IEEE Transactions on Visualization and Computer Graphics 24, 1 (2017), 152–162.
[13]
Mustafa Bilgic and Raymond J. Mooney. 2005. Explaining recommendations: Satisfaction vs. promotion. In Proceedings of the Beyond Personalization Workshop, IUI, Vol. 5. 153.
[14]
Svetlin Bostandjiev, John O’Donovan, and Tobias Höllerer. 2012. TasteWeights: A visual interactive hybrid recommender system. In Proceedings of the 6th ACM Conference on Recommender Systems (RecSys ’12). ACM, New York, NY, 35–42. DOI:
[15]
Svetlin Bostandjiev, John O’Donovan, and Tobias Höllerer. 2013. LinkedVis: Exploring social and semantic career recommendations. In Proceedings of the International Conference on Intelligent User Interfaces (IUI ’13). ACM, New York, NY, 107–116. DOI:
[16]
Erin Burnett, Jessica Holt, Abigail Borron, and Bartosz Wojdynski. 2019. Interactive infographics’ effect on elaboration in agricultural communication. Journal of Applied Communications 103, 3 (2019), 4.
[17]
Stuart K. Card, Jock Mackinlay, and Ben Shneiderman. 1999. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann.
[18]
Angelos Chatzimparmpas, Rafael M. Martins, Ilir Jusufi, Kostiantyn Kucher, Fabrice Rossi, and Andreas Kerren. 2020. The state of the art in enhancing trust in machine learning models with the use of visualizations. In Computer Graphics Forum, Vol. 39. Wiley Online Library, 713–756.
[19]
Xu Chen, Hanxiong Chen, Hongteng Xu, Yongfeng Zhang, Yixin Cao, Zheng Qin, and Hongyuan Zha. 2019. Personalized fashion recommendation with visual explanations based on multimodal attention network: Towards visually explainable recommendation. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 765–774.
[20]
Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O’Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. 2019. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
[21]
Jaegul Choo and Shixia Liu. 2018. Visual analytics for explainable deep learning. IEEE Computer Graphics and Applications 38, 4 (2018), 84–92.
[22]
Wenqiang Cui. 2019. Visual analytics: A comprehensive overview. IEEE Access 7 (2019), 81555–81573.
[23]
Marco de Gemmis, Pasquale Lops, Cataldo Musto, Fedelucio Narducci, and Giovanni Semeraro. 2015. Semantics-Aware Content-Based Recommender Systems. Springer US, Boston, MA, 119–159. DOI:
[24]
Vicente Dominguez, Pablo Messina, Ivania Donoso-Guzmán, and Denis Parra. 2019. The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 408–416.
[25]
Fan Du, Catherine Plaisant, Neil Spring, and Ben Shneiderman. 2018. Visual interfaces for recommendation systems: Finding similar and dissimilar peers. ACM Transactions on Intelligent Systems and Technology (TIST) 10, 1 (2018), 9.
[26]
John J. Dudley and Per O. Kristensson. 2018. A review of user interface design for interactive machine learning. ACM Transactions on Interactive Intelligent Systems (TiiS) 8, 2 (2018), 1–37.
[27]
Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice. In Proceedings of the 23rd International Conference on Intelligent User Interfaces. 211–223.
[28]
Alex Endert, William Ribarsky, Cagatay Turkay, B. L. William Wong, Ian Nabney, I. Díaz Blanco, and Fabrice Rossi. 2017. The state of the art in integrating machine learning into visual analytics. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 458–486.
[29]
Jonathan St B. T. Evans and Keith E. Stanovich. 2013. Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science 8, 3 (2013), 223–241.
[30]
Gerhard Friedrich and Markus Zanker. 2011. A taxonomy for generating explanations in recommender systems. AI Magazine 32, 3 (2011), 90–98.
[31]
Fatih Gedikli, Dietmar Jannach, and Mouzhi Ge. 2014. How should I explain? A comparison of different explanation types for recommender systems. International Journal of Human-Computer Studies 72, 4 (2014), 367–382.
[32]
Oscar Gomez, Steffen Holter, Jun Yuan, and Enrico Bertini. 2020. Vice: Visual counterfactual explanations for machine learning models. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 531–535.
[33]
David Graus, Maya Sappelli, and Dung Manh Chu. 2018. “Let me tell you who you are”—Explaining recommender systems by opening black box user profiles. In Proceedings of the 2nd FATREC Workshop on Responsible Recommendation, Vancouver, BC, Canada.
[34]
Brynjar Gretarsson, John O’Donovan, Svetlin Bostandjiev, Christopher Hall, and Tobias Höllerer. 2010. SmallWorlds: Visualizing social recommendations. Computer Graphics Forum 29, 3 (2010), 833–842. DOI:
[35]
Mouadh Guesmi, Mohamed A. Chatti, Shoeb Joarder, Qurat U. Ain, Rawaa Alatrash, Clara Siepmann, and Tannaz Vahidi. 2023. Interactive explanation with varying level of details in an explainable scientific literature recommender system. International Journal of Human–Computer Interaction (2023), 1–22.
[36]
Mouadh Guesmi, Mohamed A. Chatti, Yiqi Sun, Shadi Zumor, Fangzheng Ji, Arham Muslim, Laura Vorgerd, and Shoeb Ahmed Joarder. 2021. Open, scrutable and explainable interest models for transparent recommendation. In Joint Proceedings of the ACM IUI 2021 Workshops. ACM, New York, NY.
[37]
Mouadh Guesmi, Mohamed A. Chatti, Laura Vorgerd, Shoeb Ahmed Joarder, Shadi Zumor, Yiqi Sun, Fangzheng Ji, and Arham Muslim. 2021. On-demand personalized explanation for transparent recommendation. In UMAP (Adjunct Publication). Judith Masthoff, Eelco Herder, Nava Tintarev, and Marko Tkalcic (Eds.), ACM, 246–252. Retrieved from http://dblp.uni-trier.de/db/conf/um/umap2021a.htmlGuesmiCVJZSJM21
[38]
Jaron Harambam, Dimitrios Bountouridis, Mykola Makhortykh, and Joris van Hoboken. 2019. Designing for the better by taking users into account: A qualitative evaluation of user control mechanisms in (news) recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems (RecSys ’19). ACM, New York, NY, 69–77. DOI:
[39]
Adam W. Harley. 2015. An interactive node-link visualization of convolutional neural networks. In Proceedings of the 11th International Symposium on Advances in Visual Computing (ISVC ’15). Springer, 867–877.
[40]
Chen He, Denis Parra, and Katrien Verbert. 2016. Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications 56 (2016), 9–27. DOI:
[41]
Marti Hearst. 2009. Search User Interfaces. Cambridge University Press, Cambridge; New York. Retrieved from http://searchuserinterfaces.com/
[42]
Jeffrey Heer, Michael Bostock, and Vadim Ogievetsky. 2010. A tour through the visualization zoo. Communications of the ACM 53, 6 (2010), 59–67. DOI:
[43]
Jonathan L. Herlocker, Joseph A. Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work. ACM, 241–250.
[44]
Diana C. Hernandez-Bocanegra and Jürgen Ziegler. 2021. Effects of interactivity and presentation on review-based explanations for recommendations. In Proceedings of the 18th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT ’21). Springer, 597–618.
[45]
Yoshinori Hijikata, Yuki Kai, and Shogo Nishida. 2012. The relation between user intervention and user satisfaction for information recommendation. In Proceedings of the 27th Annual ACM Symposium on Applied Computing. 2002–2007.
[46]
Denis J. Hilton. 1990. Conversational processes and causal explanation. Psychological Bulletin 107, 1 (1990), 65.
[47]
Fred Hohman, Minsuk Kahng, Robert Pienta, and Duen Horng Chau. 2018. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics 25, 8 (2018), 2674–2693.
[48]
Dietmar Jannach, Michael Jugovac, and Ingrid Nunes. 2019. Explanations and user control in recommender systems. In Proceedings of the 23rd International Workshop on Personalization and Recommendation on the Web and Beyond. 31–31.
[49]
Dietmar Jannach, Ahtsham Manzoor, Wanling Cai, and Li Chen. 2021. A survey on conversational recommender systems. ACM Computing Surveys (CSUR) 54, 5 (2021), 1–36.
[50]
Dietmar Jannach, Sidra Naveed, and Michael Jugovac. 2016. User control in recommender systems: Overview and interaction challenges. In Proceedings of the 17th International Conference on Electronic Commerce and Web Technologies (EC-Web ’16). Springer, 21–33.
[51]
Liu Jiang, Shixia Liu, and Changjian Chen. 2019. Recent research advances on interactive machine learning. Journal of Visualization 22 (2019), 401–417.
[52]
Yucheng Jin, Karsten Seipp, Erik Duval, and Katrien Verbert. 2016. Go with the flow: Effects of transparency and user control on targeted advertising using flow charts. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI ’16). ACM, New York, NY, 68–75. DOI:
[53]
Yucheng Jin, Nava Tintarev, and Katrien Verbert. 2018. Effects of individual traits on diversity-aware music recommender user interfaces. In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization (UMAP ’18). ACM, New York, NY, 291–299. DOI:
[54]
Yucheng Jin, Nava Tintarev, and Katrien Verbert. 2018. Effects of personal characteristics on music recommender systems with different levels of controllability. In Proceedings of the 12th ACM Conference on Recommender Systems. 13–21.
[55]
Michael Jugovac and Dietmar Jannach. 2017. Interacting with recommenders—Overview and research directions. ACM Transactions on Interactive Intelligent Systems (TiiS) 7, 3 (2017), 1–46.
[56]
Daniel Kahneman. 2011. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York.
[57]
Daniel Kahneman and Shane Frederick. 2002. Representativeness revisited: Attribute substitution in intuitive judgment. Heuristics and Biases: The Psychology of Intuitive Judgment 49, 49–81 (2002), 74.
[58]
Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda B. Viégas, and Martin Wattenberg. 2018. Gan lab: Understanding complex deep generative models using interactive visual experimentation. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 310–320.
[59]
Antti Kangasrääsiö, Dorota Glowacka, and Samuel Kaski. 2015. Improving controllability and predictability of interactive recommendation interfaces for exploratory search. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). ACM, New York, NY, 247–251. DOI:
[60]
Daniel A. Keim, Florian Mansmann, Jörn Schneidewind, Jim Thomas, and Hartmut Ziegler. 2008. Visual Analytics: Scope and Challenges. Springer.
[61]
Barbara Kitchenham. 2004. Procedures for performing systematic reviews. Keele, UK, Keele University 33, 2004 (2004), 1–26.
[62]
Barbara Kitchenham and Stuart Charters. 2007. Guidelines for Performing Systematic Literature Reviews in Software Engineering. Technical Report. School of Computer Science and Mathematics, Keele University.
[63]
René F. Kizilcec. 2016. How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 2390–2395.
[64]
Bart P. Knijnenburg, Martijn C. Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. 2012. Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction 22, 4–5 (Oct. 2012), 441–504. DOI:
[65]
Joseph A. Konstan and John Riedl. 2012. Recommender systems: From algorithms to user experience. User Modeling and User-Adapted Interaction 22, 1 (Apr. 2012), 101–123. DOI:
[66]
Yehuda Koren and Robert Bell. 2015. Advances in Collaborative Filtering. Springer US, Boston, MA, 77–118. DOI:
[67]
Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 8 (2009), 30–37.
[68]
Pigi Kouki, James Schaffer, Jay Pujara, John O’Donovan, and Lise Getoor. 2019. Personalized explanations for hybrid recommender systems. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, 379–390. DOI:
[69]
Josua Krause, Adam Perer, and Kenney Ng. 2016. Interacting with predictions: Visual inspection of black-box machine learning models. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 5686–5697.
[70]
Luciana Monteiro Krebs, Oscar Luis Alvarado Rodriguez, Pierre Dewitte, Jef Ausloos, David Geerts, Laurens Naudts, and Katrien Verbert. 2019. Tell me what you know: GDPR implications on designing transparency and accountability for news recommender systems. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–6.
[71]
Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. 2015. Principles of explanatory debugging to personalize interactive machine learning. In Proceedings of the 20th International Conference on Intelligent User Interfaces. 126–137.
[72]
Bum C. Kwon, Min-Je Choi, Joanne T. Kim, Edward Choi, Young B. Kim, Soonwook Kwon, Jimeng Sun, and Jaegul Choo. 2018. Retainvis: Visual analytics with interpretable and interactive recurrent neural networks on electronic medical records. IEEE Transactions on Visualization and Computer Graphics 25, 1 (2018), 299–309.
[73]
Allison Lazard and Lucy Atkinson. 2015. Putting environmental infographics center stage: The role of visuals at the elaboration likelihood model’s critical point of persuasion. Science Communication 37, 1 (2015), 6–33.
[74]
Eun-Ju Lee and Ye Weon Kim. 2016. Effects of infographics on news elaboration, acquisition, and evaluation: Prior knowledge and issue involvement as moderators. New Media & Society 18, 8 (2016), 1579–1598.
[75]
Q. Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[76]
Brian Y. Lim and Anind K. Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing. 195–204.
[77]
Brian Y. Lim, Anind K. Dey, and Daniel Avrahami. 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2119–2128.
[78]
Brian Y. Lim, Qian Yang, Ashraf M. Abdul, and Danding Wang. 2019. Why these explanations? Selecting intelligibility types for explanation goals. In Joint Proceedings of the ACM IUI 2019 Workshops. ACM, New York, NY.
[79]
Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, and Maarten de Rijke. 2020. Explainable outfit recommendation with joint outfit matching and comment generation. IEEE Transactions on Knowledge and Data Engineering 32, 8 (2020), 1502–1516. DOI:
[80]
Shixia Liu, Xiting Wang, Mengchen Liu, and Jun Zhu. 2017. Towards better analysis of machine learning models: A visual analytics perspective. Visual Informatics 1, 1 (2017), 48–56.
[81]
Benedikt Loepp, Tim Donkers, Timm Kleemann, and Jürgen Ziegler. 2019. Interactive recommending with tag-enhanced matrix factorization (TagMF). International Journal of Human-Computer Studies 121 (2019), 21–41. DOI:
[82]
Yafeng Lu, Rolando Garcia, Brett Hansen, Michael Gleicher, and Ross Maciejewski. 2017. The state-of-the-art in predictive visual analytics. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 539–562.
[83]
Boxuan Ma, Min Lu, Yuta Taniguchi, and Shin’ichi Konomi. 2021. CourseQ: The impact of visual and interactive course recommendation in university environments. Research and Practice in Technology Enhanced Learning 16, 1 (2021), 1–24.
[84]
Douglas B Markant, Milad Rogha, Alireza Karduni, Ryan Wesslen, and Wenwen Dou. 2022. Can data visualizations change minds? identifying mechanisms of elaborative thinking and persuasion. In Proceedings of the IEEE Workshop on Visualization for Social Good (VIS4Good ’22). IEEE, 1–5.
[85]
Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems (RecSys ’13). ACM, New York, NY, 165–172. DOI:
[86]
Miriah Meyer and Danyel Fisher. 2018. Making Data Visual. O’Reilly Media.
[87]
Martijn Millecamp, Nyi N. Htun, Cristina Conati, and Katrien Verbert. 2019. To explain or not to explain: The effects of personal characteristics when explaining music recommendations. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, 397–407. DOI:
[88]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (2019), 1–38.
[89]
Tim Miller. 2022. Are we measuring trust correctly in explainability, interpretability, and transparency research? arXiv preprint arXiv:2209.00651. Retrieved from https://doi.org/10.48550/arXiv.2209.00651
[90]
Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. 2021. A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 11, 3–4 (2021), 1–45.
[91]
J. Müller, M. Stöhr, A. Oeser, J. Gaebel, A. Dietz, and S. Oeltze-Jafra. 2019. A Visual Approach to Explainable Clinical Decision Support. Eurographics Workshop on Visual Computing for Biology and Medicine (2019).
[92]
Tamara Munzner. 2014. Visualization Analysis and Design. CRC Press.
[93]
Mohammad Naiseh, Nan Jiang, Jianbing Ma, and Raian Ali. 2020. Personalising explainable recommendations: Literature and conceptualisation. In Trends and Innovations in Information Systems and Technologies. Á. Rocha, H. Adeli, L. Reis, S. Costanzo, I. Orovic, and E. Moreira (Eds.), Vol. 28, Springer, 518–533.
[94]
Tejaswini Narayanan and Deborah L McGuinness. 2008. Towards leveraging inference web to support intuitive explanations in recommender systems for automated career counseling. In Proceedings of the 1st International Conference on Advances in Computer-Human Interaction. IEEE, 164–169.
[95]
Thao Ngo, Johannes Kunkel, and Jürgen Ziegler. 2020. Exploring mental models for transparent and controllable recommender systems: A qualitative study. In Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization. 183–191.
[96]
Chris North. 2006. Toward measuring visualization insight. IEEE Computer Graphics and Applications 26, 3 (2006), 6–9.
[97]
Ingrid Nunes and Dietmar Jannach. 2017. A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction 27, 3 (Dec. 2017), 393–444. DOI:
[98]
John O’Donovan, Barry Smyth, Brynjar Gretarsson, Svetlin Bostandjiev, and Tobias Höllerer. 2008. PeerChooser: Visual interactive recommendation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). ACM, New York, NY, 1085–1088. DOI:
[99]
Jeroen Ooge, Gregor Stiglic, and Katrien Verbert. 2022. Explaining artificial intelligence with visual analytics in healthcare. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 12, 1 (2022), e1427.
[100]
Daniel J. O’Keefe. 2013. The elaboration likelihood model. The SAGE Handbook of Persuasion: Developments in Theory and Practice (2013), 137–149.
[101]
Lace M. Padilla, Sarah H. Creem-Regehr, Mary Hegarty, and Jeanine K. Stefanucci. 2018. Decision making with visualizations: a cognitive framework across disciplines. Cognitive Research: Principles and Implications 3 (2018), 1–25.
[102]
Anshul V. Pandey, Anjali Manivannan, Oded Nov, Margaret Satterthwaite, and Enrico Bertini. 2014. The persuasive power of data visualization. IEEE Transactions on Visualization and Computer Graphics 20, 12 (2014), 2211–2220.
[103]
Alexis Papadimitriou, Panagiotis Symeonidis, and Yannis Manolopoulos. 2012. A generalized taxonomy of explanations styles for traditional and social recommender systems. Data Mining and Knowledge Discovery 24, 3 (May 2012), 555–583. DOI:
[104]
Haekyu Park, Hyunsik Jeon, Junghwan Kim, Beunguk Ahn, and U Kang. 2017. Uniwalk: Explainable and accurate recommendation for rating and network data. arXiv:1710.07134 [cs.IR]. DOI:
[105]
Denis Parra, Peter Brusilovsky, and Christoph Trattner. 2014. See what you want to see: Visual user-driven approach for hybrid recommendation. In Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI ’14). ACM, New York, NY, 235–240. DOI:
[106]
Robert E. Patterson, Leslie M. Blaha, Georges G. Grinstein, Kristen K. Liggett, David E. Kaveney, Kathleen C. Sheldon, Paul R. Havig, and Jason A. Moore. 2014. A human cognition framework for information visualization. Computers & Graphics 42 (2014), 42–58.
[107]
Richard E. Petty and Pablo Briñol. 2011. The elaboration likelihood model. Handbook of Theories of Social Psychology 1 (2011), 224–245.
[108]
Richard E. Petty, John T. Cacioppo, Richard E. Petty, and John T. Cacioppo. 1986. The Elaboration Likelihood Model of Persuasion. Springer.
[109]
Nicolas Pfeuffer, Lorenz Baum, Wolfgang Stammer, Benjamin M. Abdel-Karim, Patrick Schramowski, Andreas M. Bucher, Christian Hügel, Gernot Rohde, Kristian Kersting, and Oliver Hinz. 2023. Explanatory Interactive Machine Learning. Business & Information Systems Engineering (2023), 1–25.
[110]
Pearl Pu, Li Chen, and Rong Hu. 2011. A user-centric evaluation framework for recommender systems. In Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys ’11). ACM, New York, NY, 157–164. DOI:
[111]
Pearl Pu, Li Chen, and Rong Hu. 2012. Evaluating recommender systems from the user’s perspective: Survey of the state of the art. User Modeling and User-Adapted Interaction 22, 4 (Oct 2012), 317–355. DOI:
[112]
Lara Quijano-Sanchez, Christian Sauer, Juan A. Recio-Garcia, and Belen Diaz-Agudo. 2017. Make it personal: a social explanation system applied to group recommendations. Expert Systems with Applications 76 (2017), 36–48.
[113]
Paulo E. Rauber, Samuel G. Fadel, Alexandre X. Falcao, and Alexandru C. Telea. 2016. Visualizing the hidden activity of artificial neural networks. IEEE Transactions on Visualization and Computer Graphics 23, 1 (2016), 101–110.
[114]
James Schaffer, Tobias Hollerer, and John O’Donovan. 2015. Hypothetical recommendation: A study of interactive profile manipulation behavior for recommender systems. In Proceedings of the 28th International Flairs Conference.
[115]
Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, and Kristian Kersting. 2020. Making deep neural networks right for the right scientific reasons by interacting with their explanations. Nature Machine Intelligence 2, 8 (2020), 476–486.
[116]
Cecilia Di Sciascio, Vedran Sabol, and Eduardo Veas. 2017. Supporting exploratory search with a visual user-driven approach. ACM Transactions on Interactive Intelligent Systems (TiiS) 7, 4 (2017), 1–35.
[117]
Amit Sharma and Dan Cosley. 2013. Do social explanations work? Studying and modeling the effects of social explanations in recommender systems. In Proceedings of the 22nd International Conference on World Wide Web. 1133–1144.
[118]
Ben Shneiderman. 2020. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 10, 4 (2020), 1–31.
[119]
Ben Shneiderman. 2022. Human-Centered AI. Oxford University Press.
[120]
Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, and Martin Wattenberg. 2017. Direct-manipulation visualization of deep networks. arXiv:1708.03788. Retrieved from https://doi.org/10.48550/arXiv.1708.03788
[121]
Kacper Sokol and Peter Flach. 2020. One explanation does not fit all: The promise of interactive explanations for machine learning transparency. KI-Künstliche Intelligenz 34, 2 (2020), 235–250.
[122]
Thilo Spinner, Udo Schlegel, Hanna Schäfer, and Mennatallah El-Assady. 2019. explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE Transactions on Visualization and Computer Graphics 26, 1 (2019), 1064–1074.
[123]
Hendrik Strobelt, Sebastian Gehrmann, Hanspeter Pfister, and Alexander M. Rush. 2017. Lstmvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Transactions on Visualization and Computer Graphics 24, 1 (2017), 667–676.
[124]
Yuan Sun and S. Shyam Sundar. 2022. Exploring the effects of interactive dialogue in improving user control for explainable online symptom checkers. In Proceedings of the CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–7.
[125]
Panagiotis Symeonidis, Alexandros Nanopoulos, and Yannis Manolopoulos. 2009. MoviExplain: A recommender system with explanations. In Proceedings of the 3rd ACM Conference on Recommender Systems. ACM, 317–320.
[126]
Maxwell Szymanski, Martijn Millecamp, and Katrien Verbert. 2021. Visual, textual or hybrid: The effect of user expertise on different explanations. In Proceedings of the 26th International Conference on Intelligent User Interfaces (IUI ’21). ACM, New York, NY, 109–119. DOI:
[127]
Stefano Teso and Kristian Kersting. 2019. Explanatory interactive machine learning. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 239–245.
[128]
James J. Thomas and Kristin A. Cook. 2006. A visual analytics agenda. IEEE Computer Graphics and Applications 26, 1 (2006), 10–13.
[129]
Nava Tintarev and Judith Masthoff. 2007. A survey of explanations in recommender systems. In Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering Workshop. IEEE, 801–810.
[130]
Nava Tintarev and Judith Masthoff. 2012. Evaluating the effectiveness of explanations for recommender systems. User Modeling and User-Adapted Interaction 22, 4–5 (2012), 399–439.
[131]
Nava Tintarev and Judith Masthoff. 2015. Explaining Recommendations: Design and Evaluation. Springer US, Boston, MA, 353–382.
[132]
Chun-Hua Tsai and Peter Brusilovsky. 2017. Providing control and transparency in a social recommender system for academic conferences. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP ’17). ACM, New York, NY, 313–317. DOI:
[133]
Chun-Hua Tsai and Peter Brusilovsky. 2018. Explaining social recommendations to casual users: Design principles and opportunities. In Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. 1–2.
[134]
Chun-Hua Tsai and Peter Brusilovsky. 2019. Explaining recommendations in an interactive hybrid social recommender. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, 391–396. DOI:
[135]
Chun-Hua Tsai and Peter Brusilovsky. 2021. The effects of controllability and explainability in a social recommender system. User Modeling and User-Adapted Interaction 31 (2021), 591–627.
[136]
Konstantinos Tsiakas, Emilia Barakova, Javed V. Khan, and Panos Markopoulos. 2020. BrainHood: Towards an explainable recommendation system for self-regulated cognitive training in children. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments. 1–6.
[137]
Katrien Verbert, Denis Parra, Peter Brusilovsky, and Erik Duval. 2013. Visualizing recommendations to support exploration, transparency and controllability. In Proceedings of the International Conference on Intelligent User Interfaces (IUI ’13). ACM, New York, NY, 351–362. DOI:
[138]
Jesse Vig, Shilad Sen, and John Riedl. 2009. Tagsplanations: Explaining recommendations using tags. In Proceedings of the 14th International Conference on Intelligent User Interfaces (IUI ’09). ACM, New York, NY, 47–56. DOI:
[139]
Michail Vlachos and Daniel Svonava. 2012. Graph embeddings for movie visualization and recommendation. In Proceedings of the 1st International Workshop on Recommendation Technologies for Lifestyle Change (LIFESTYLE ’12), Vol. 56.
[140]
Beidou Wang, Martin Ester, Jiajun Bu, and Deng Cai. 2014. Who also likes it? Generating the most persuasive social explanations in recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 28, 1 (Jun. 2014). DOI:
[141]
Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
[142]
Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, and Duen H. P. Chau. 2020. CNN explainer: Learning convolutional neural networks with interactive visualization. IEEE Transactions on Visualization and Computer Graphics 27, 2 (2020), 1396–1406.
[143]
Matthew Ward, Georges G. Grinstein, and Daniel Keim. 2010. Interactive Data Visualization: Foundations, Techniques, and Applications. A K Peters, Natick, MA.
[144]
Colin Ware. 2012. Information Visualization: Perception for Design (3rd. ed.). Morgan Kaufmann.
[145]
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mané, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, and Martin Wattenberg. 2017. Visualizing dataflow graphs of deep learning models in TensorFlow. IEEE Transactions on Visualization and Computer Graphics 24, 1 (2017), 1–12.
[146]
Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L. Arendt. 2020. How do visual explanations foster end users’ appropriate trust in machine learning? In Proceedings of the 25th International Conference on Intelligent User Interfaces. 189–201.
[147]
Yongfeng Zhang and Xu Chen. 2020. Explainable recommendation: A survey and new perspectives. Foundations and Trends® in Information Retrieval 14, 1 (2020), 1–101.
[148]
Yongfeng Zhang, Guokun Lai, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma. 2014. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR ’14). ACM, New York, NY, 83–92. DOI:
[149]
Ruijing Zhao, Izak Benbasat, and Hasan Cavusoglu. 2019. Do users always want to know more? Investigating the relationship between system transparency and users’ trust in advice-giving systems. In Proceedings of the 27th European Conference on Information Systems (ECIS ’19), Stockholm and Uppsala, Sweden. Research-in-Progress Papers. Retrieved from https://aisel.aisnet.org/ecis2019_rip/42
[150]
Shiwan Zhao, Michelle X. Zhou, Xiatian Zhang, Quan Yuan, Wentao Zheng, and Rongyao Fu. 2011. Who is doing what and when: Social map-based recommendation for content-centric social web sites. ACM Transactions on Intelligent Systems and Technology (TIST) 3, 1, Article 5 (Oct. 2011), 23 pages. DOI:

Cited By

  • (2024) Knowledge Graph-Based Integration of Conversational Advisors and Faceted Filtering. Interacting with Computers. DOI: 10.1093/iwc/iwae044. Online publication date: 18-Sep-2024.
  • (2024) A Survey on Explainable Course Recommendation Systems. In Distributed, Ambient and Pervasive Interactions, 273–287. DOI: 10.1007/978-3-031-60012-8_17. Online publication date: 29-Jun-2024.


Published In

ACM Transactions on Interactive Intelligent Systems, Volume 14, Issue 3 (September 2024), 384 pages. EISSN: 2160-6463. DOI: 10.1145/3613608.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 02 August 2024
Online AM: 11 June 2024
Accepted: 27 May 2024
Revised: 28 April 2024
Received: 22 August 2022
Published in TIIS Volume 14, Issue 3

Author Tags

  1. Recommender system
  2. explainable recommendation
  3. visualization

Qualifiers

  • Research-article

