DOI: 10.1145/3491101.3519668

Exploring the Effects of Interactive Dialogue in Improving User Control for Explainable Online Symptom Checkers

Published: 28 April 2022

Abstract

There has been a major push to improve the transparency of online symptom checkers (OSCs) by providing users with more explanations of how these systems work and how they reach their conclusions. However, not all users want explanations of every aspect of these systems, so a more user-centered approach is needed to personalize the experience of receiving explanations. With this in mind, we designed and tested an interactive dialogue interface that affords users control over which explanations they receive. We conducted a user study (N = 152) with a text-based chatbot that assessed anxiety levels and presented explanations to participants in one of three forms: an interactive dialogue offering a choice of which explanation components to view, a static disclosure of all explanations, and a control condition with no explanations whatsoever. We found that participants varied in the kinds of information they wanted to learn. The interactive delivery of explanations led to higher levels of perceived transparency and affective trust in the system. Furthermore, both subjective and objective understanding of the mechanism used for assessing anxiety were higher for participants in the interactive dialogue condition. We discuss theoretical and practical implications of using interactivity to enhance the effectiveness of explainable systems.
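
To make the interactive condition concrete, here is a minimal sketch in Python of an on-demand explanation dialogue. It is not the authors' implementation: the menu items, their wording, and the GAD-7-style scoring rule (seven items scored 0 to 3, with conventional cutoffs at 5, 10, and 15) are illustrative assumptions. The point is the control flow: the chatbot reports an assessment and then reveals each explanation component only when the user asks for it.

```python
# Minimal sketch (not the authors' implementation) of an interactive
# explanation dialogue for a chatbot-based symptom checker. The user
# chooses which explanation components to view, instead of receiving
# a static disclosure of all of them. Components and scoring rule below
# are illustrative assumptions.

EXPLANATIONS = {
    "1": ("How is my score computed?",
          "Each of the seven answers is scored 0-3 and summed, so totals "
          "range from 0 to 21; higher totals indicate more anxiety symptoms."),
    "2": ("What do the levels mean?",
          "Totals of 5, 10, and 15 are conventional GAD-7 cutoffs for "
          "mild, moderate, and severe anxiety."),
    "3": ("What happens to my answers?",
          "In this sketch, responses are kept in memory only for the "
          "current assessment."),
}


def assess(responses: list[int]) -> str:
    """Map seven 0-3 item scores to a coarse anxiety level."""
    total = sum(responses)
    if total >= 15:
        level = "severe"
    elif total >= 10:
        level = "moderate"
    elif total >= 5:
        level = "mild"
    else:
        level = "minimal"
    return f"{level} (score {total})"


def explanation_dialogue() -> None:
    """Reveal explanation components only when the user asks for them."""
    while True:
        print("\nWould you like to know more about any of these?")
        for key, (question, _) in EXPLANATIONS.items():
            print(f"  {key}. {question}")
        print("  q. No more questions")
        choice = input("> ").strip().lower()
        if choice == "q":
            break
        if choice in EXPLANATIONS:
            print(EXPLANATIONS[choice][1])


if __name__ == "__main__":
    demo_responses = [2, 1, 3, 2, 1, 2, 3]  # seven illustrative item scores
    print(f"Assessed anxiety level: {assess(demo_responses)}")
    explanation_dialogue()
```

A static-disclosure variant of this sketch would simply iterate over EXPLANATIONS and print every component after the assessment; the study compared exactly this kind of user choice against all-at-once disclosure and no explanations at all.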

Supplementary Material

MP4 File (3491101.3519668-video-figure.mp4)
Video Figure
MP4 File (3491101.3519668-talk-video.mp4)
Talk Video





Published In

CHI EA '22: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
3066 pages
ISBN: 9781450391566
DOI: 10.1145/3491101
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. explanations
  2. interactive dialogue
  3. online symptom checker
  4. trust
  5. user control

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

CHI '22
CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall Acceptance Rate 6,164 of 23,696 submissions, 26%


Article Metrics

  • Downloads (last 12 months): 254
  • Downloads (last 6 weeks): 25
Reflects downloads up to 25 Nov 2024


Cited By

  • (2024) When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 1-17. DOI: 10.1145/3686038.3686066. Online publication date: 16-Sep-2024.
  • (2024) Generative AI in the Wild: Prospects, Challenges, and Strategies. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3613904.3642160. Online publication date: 11-May-2024.
  • (2024) Preventing users from going down rabbit holes of extreme video content. International Journal of Human-Computer Studies, 190(C). DOI: 10.1016/j.ijhcs.2024.103303. Online publication date: 1-Oct-2024.
  • (2023) Beyond Self-diagnosis: How a Chatbot-based Symptom Checker Should Respond. ACM Transactions on Computer-Human Interaction, 30(4), 1-44. DOI: 10.1145/3589959. Online publication date: 11-Sep-2023.
  • (2023) AutoML in The Wild: Obstacles, Workarounds, and Expectations. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3544548.3581082. Online publication date: 19-Apr-2023.
  • (2023) Is this AI trained on Credible Data? The Effects of Labeling Quality and Performance Bias on User Trust. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-11. DOI: 10.1145/3544548.3580805. Online publication date: 19-Apr-2023.
  • (2023) Chatbots or Humans? Effects of Agent Identity and Information Sensitivity on Users’ Privacy Management and Behavioral Intentions: A Comparative Experimental Study between China and the United States. International Journal of Human–Computer Interaction, 40(19), 5632-5647. DOI: 10.1080/10447318.2023.2238974. Online publication date: 14-Aug-2023.
  • (2023) Giving DIAnA More TIME – Guidance for the Design of XAI-Based Medical Decision Support Systems. Design Science Research for a New Society: Society 5.0, 107-122. DOI: 10.1007/978-3-031-32808-4_7. Online publication date: 31-May-2023.
