DOI: 10.1145/3687272.3688307
Research article · Open access

Broken Trust: Does the Agent Matter?

Published: 24 November 2024

Abstract

Trust is a key part of any social interaction, whether between humans or between humans and artificial agents. This paper investigates how an agent’s repeated incongruence failures might impact users’ trust. We build on a previously published human-robot interaction study (Nesset et al., 2023), replacing the robot condition with a human actor. We explore how users’ trust is affected by repeated failure depending on the agent involved, and how best to repair trust once the failures take place. Our study found a significant decrease in users’ trust when a human made an incongruence failure, but not when this failure was repeated, regardless of the repair strategy implemented. When comparing this to the previous robot condition, we found a significant difference between the trust measured in the human condition and in the robot condition. Additionally, the repair strategy used had a significant effect on users’ trust when the robot repeated its failure, but not when the human actor did. Our findings contribute to research on broken trust under repeated failures and highlight the importance of including a human comparison to better understand research findings in human-robot interaction.

References

[1]
Gene M. Alarcon, August Capiola, Izz Aldin Hamdan, Michael A. Lee, and Sarah A. Jessup. 2023. Differential biases in human-human versus human-robot interactions. Applied Ergonomics 106 (2023), 103858. https://doi.org/10.1016/j.apergo.2022.103858
[2]
Casey Bennett, S. Sabanovic, Marlena Fraune, and Kate Shaw. 2014. Context congruency and robotic facial expressions: Do effects on human perceptions vary across culture? Proceedings - IEEE International Workshop on Robot and Human Interactive Communication (2014). https://doi.org/10.1109/ROMAN.2014.6926296
[3]
Janet Mills Bentz. 1973. Do actions speak louder than words?: an inquiry into incongruent communications. (1973).
[4]
Adella Bhaskara, Michael Skinner, and Shayne Loft. 2020. Agent Transparency: A Review of Current Theory and Evidence. IEEE Transactions on Human-Machine Systems 50, 3 (2020), 215–224. https://doi.org/10.1109/THMS.2020.2965529
[5]
Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101. https://doi.org/10.1191/1478088706qp063oa
[6]
C. Breazeal and B. Scassellati. 1999. How to build robots that make friends and influence people. Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289) 2 (Oct 1999), 858–863 vol.2. https://doi.org/10.1109/IROS.1999.812787
[7]
Daniel J. Brooks. 2017. A human-centric approach to autonomous robot failures. ProQuest Dissertations and Theses (2017), 229.
[8]
Meia Chita-Tegmark, Theresa Law, Nicholas Rabb, and Matthias Scheutz. 2021. Can you trust your trust measure? Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (2021), 92–100.
[9]
Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, and Helen Hastie. 2018. Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. Proceedings of the 11th International Conference on Natural Language Generation (Nov. 2018), 99–108. https://doi.org/10.18653/v1/W18-6511
[10]
Benjamin F Crabtree. 1999. Doing qualitative research. Sage.
[11]
Ewart de Visser, Samuel Monfort, Ryan Mckendrick, Melissa Smith, Patrick Mcknight, Frank Krueger, and Raja Parasuraman. 2016. Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents. Journal of Experimental Psychology: Applied 22 (08 2016). https://doi.org/10.1037/xap0000092
[12]
Munjal Desai, Poornima Kaniarasu, Mikhail Medvedev, Aaron Steinfeld, and Holly Yanco. 2013. Impact of robot failures and feedback on real-time trust. 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (2013), 251–258.
[13]
Connor Esterwood and Lionel P Robert. 2021. Do you still trust me? human-robot trust repair strategies. 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (2021), 183–188.
[14]
M Lance Frazier, Paul D Johnson, and Stav Fainshmidt. 2013. Development and validation of a propensity to trust scale. Journal of Trust Research 3, 2 (2013), 76–97.
[15]
Manuel Giuliani, Nicole Mirnig, Gerald Stollnberger, Susanne Stadler, Roland Buchner, and Manfred Tscheligi. 2015. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations. Frontiers in Psychology 6 (2015). https://doi.org/10.3389/fpsyg.2015.00931
[16]
Edward Glaeser, David Laibson, Jose Scheinkman, and Christine Soutter. 2000. Measuring Trust. The Quarterly Journal of Economics 115 (02 2000), 811–846. https://doi.org/10.1162/003355300554926
[17]
Rita Gorawara-Bhat, Linda Hafskjold, Pål Gulbrandsen, and Hilde Eide. 2017. Exploring physicians’ verbal and nonverbal responses to cues/concerns: Learning from incongruent communication. Patient education and counseling 100, 11 (2017), 1979–1989.
[18]
Tsfira Grebelsky-Lichtman. 2017. Verbal versus nonverbal primacy: Children’s response to parental incongruent communication. Journal of social and personal Relationships 34, 5 (2017), 636–661.
[19]
Peter A Hancock, Deborah R Billings, Kristin E Schaefer, Jessie YC Chen, Ewart J De Visser, and Raja Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors 53, 5 (2011), 517–527.
[20]
P. A. Hancock, Theresa T. Kessler, Alexandra D. Kaplan, John C. Brill, and James L. Szalma. 2021. Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses. Human Factors 63, 7 (2021), 1196–1229. https://doi.org/10.1177/0018720820922080
[21]
Shanee Honig and Tal Oron-Gilad. 2018. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development. Frontiers in Psychology 9 (2018). https://doi.org/10.3389/fpsyg.2018.00861
[22]
Sarah Jessup, Tamera Schneider, Gene Alarcon, Tyler Ryan, and August Capiola. 2019. The Measurement of the Propensity to Trust Automation. Virtual, Augmented and Mixed Reality. Applications and Case Studies (06 2019), 476–489. https://doi.org/10.1007/978-3-030-21565-1_32
[23]
Jiun-Yin Jian, Ann M Bisantz, and Colin G Drury. 2000. Foundations for an Empirically Determined Scale of Trust in Automated Systems. International Journal of Cognitive Ergonomics 4, 1 (2000), 53–71.
[24]
Theresa Kessler, Cintya Dutta, Tiffani Marlowe, Valarie Yerdon, and Peter Hancock. 2016. A Comparison of Trust Measures in Human–Robot Interaction Scenarios. 499 (07 2016), 353–364. https://doi.org/10.1007/978-3-319-41959-6_29
[25]
Peter H Kim, Donald L Ferrin, Cecily D Cooper, and Kurt T Dirks. 2004. Removing the shadow of suspicion: the effects of apology versus denial for repairing competence- versus integrity-based trust violations. Journal of Applied Psychology 89, 1 (2004), 104.
[26]
Dimosthenis Kontogiorgos, Sanne van Waveren, Olle Wallberg, Andre Pereira, Iolanda Leite, and Joakim Gustafson. 2020. Embodiment Effects in Interactions with Failing Robots. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020), 1–14. https://doi.org/10.1145/3313831.3376372
[27]
John Lee and Katrina See. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors 46 (02 2004), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
[28]
Roy J. Lewicki and Chad Brinsfield. 2017. Trust Repair. Annual Review of Organizational Psychology and Organizational Behavior 4, 1 (2017), 287–313. https://doi.org/10.1146/annurev-orgpsych-032516-113147
[29]
Michael Lewis, Katia Sycara, and Phillip Walker. 2018. The Role of Trust in Human-Robot Interaction. Foundations of Trusted Autonomy (2018), 135–159. https://doi.org/10.1007/978-3-319-64816-3_8
[30]
Joseph B. Lyons. 2013. Being Transparent about Transparency. (2013). https://api.semanticscholar.org/CorpusID:18712525
[31]
Karoline Malchus, Petra Jaecks, Oliver Damm, Prisca Stenneken, Carolin Carles, and Britta Wrede. 2013. The role of emotional congruence in human-robot interaction. ACM/IEEE International Conference on Human-Robot Interaction (03 2013), 191–192. https://doi.org/10.1109/HRI.2013.6483566
[32]
Birthe Nesset, Gnanathusharan Rajendran, José David Aguas Lopes, and Helen Hastie. 2022. Sensitivity of Trust Scales in the Face of Errors. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (2022), 950–954. https://doi.org/10.1109/HRI53351.2022.9889427
[33]
Birthe Nesset, David A. Robb, José Lopes, and Helen Hastie. 2021. Transparency in HRI: Trust and Decision Making in the Face of Robot Errors. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (2021), 313–317. https://doi.org/10.1145/3434074.3447183
[34]
Birthe Nesset, Marta Romeo, Gnanathusharan Rajendran, and Helen Hastie. 2023. Robot Broken Promise? Repair strategies for mitigating loss of trust for repeated failures. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (2023), 1389–1395.
[35]
Tatsuya Nomura, Tomohiro Suzuki, Takayuki Kanda, and Kensuke Kato. 2006. Measurement of negative attitudes toward robots. Interaction Studies 7 (01 2006), 437–454. https://doi.org/10.1075/is.7.3.14nom
[36]
Marieke Otterdijk, Emilia Barakova, Jim Torresen, and Margot Neggers. 2021. Preferences of Seniors for Robots Delivering a Message With Congruent Approaching Behavior. 2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO) (07 2021). https://doi.org/10.1109/ARSO51874.2021.9542833
[37]
Raja Parasuraman and Victor Riley. 1997. Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors: The Journal of Human Factors and Ergonomics Society 39, 2 (1997), 230–253.
[38]
Marta Romeo, Peter E. McKenna, David A. Robb, Gnanathusharan Rajendran, Birthe Nesset, Angelo Cangelosi, and Helen Hastie. 2022. Exploring Theory of Mind for Human-Robot Collaboration. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (2022), 461–468. https://doi.org/10.1109/RO-MAN53752.2022.9900550
[39]
Julian B Rotter. 1967. A new scale for the measurement of interpersonal trust. Journal of Personality (1967).
[40]
Denise M. Rousseau, Sim B. Sitkin, Ronald S. Burt, and Colin Camerer. 1998. Introduction to Special Topic Forum: Not so Different after All: A Cross-Discipline View of Trust. The Academy of Management Review 23, 3 (1998), 393–404. http://www.jstor.org/stable/259285
[41]
Maha Salem, Friederike Eyssel, Katharina Rohlfing, Stefan Kopp, and Frank Joublin. 2013. To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability. International Journal of Social Robotics 5 (08 2013). https://doi.org/10.1007/s12369-013-0196-9
[42]
Maha Salem, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 2015. Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. (2015). https://doi.org/10.1145/2696454.2696497
[43]
Kristin Schaefer. 2016. Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”. 191–218. https://doi.org/10.1007/978-1-4899-7668-0_10
[44]
Nicolas Scharowski, Sebastian AC Perrig, Lena Fanya Aeschbach, Nick von Felten, Klaus Opwis, Philipp Wintersberger, and Florian Bruhlmann. 2024. To Trust or Distrust Trust Measures: Validating Questionnaires for Trust in AI. arXiv preprint arXiv:2403.00582 (2024).
[45]
E. Schniter, T.W. Shields, and D. Sznycer. 2020. Trust in humans and robots: Economically similar but emotionally different. Journal of Economic Psychology 78 (2020), 102253. https://doi.org/10.1016/j.joep.2020.102253
[46]
Rebecca Stower, Arvid Kappas, and Kristyn Sommer. 2024. When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task. Computers in Human Behavior 157 (2024), 108229. https://doi.org/10.1016/j.chb.2024.108229
[47]
Christiana Tsiourti, Astrid Weiss, Katarzyna Wac, and Markus Vincze. 2019. Multimodal integration of emotional signals from voice, body, and context: Effects of (in) congruence on emotion recognition and attitudes towards robots. International Journal of Social Robotics 11, 4 (2019), 555–573.
[48]
Daniel Ullman and Bertram Malle. 2018. What Does it Mean to Trust a Robot?: Steps Toward a Multidimensional Measure of Trust. Companion of the 2018 ACM/IEEE International Conference (03 2018), 263–264. https://doi.org/10.1145/3173386.3176991
[49]
Daniel Ullman and Bertram F. Malle. 2019. Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (2019), 618–619. https://doi.org/10.1109/HRI.2019.8673154
[50]
Amy Van Buren. 2002. The relationship of verbal-nonverbal incongruence to communication mismatches in married couples. (2002).
[51]
Oleksandra Vereschak, Gilles Bailly, and Baptiste Caramiaux. 2021. How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 327 (oct 2021), 39 pages. https://doi.org/10.1145/3476068
[52]
Robert H Wortham and Andreas Theodorou. 2017. Robot transparency, trust and utility. Connection Science 29, 3 (2017), 242–248.


    Published In

    HAI '24: Proceedings of the 12th International Conference on Human-Agent Interaction
    November 2024
    502 pages
    ISBN:9798400711787
    DOI:10.1145/3687272
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Failure
    2. Human-Human Interaction
    3. Human-Robot Interaction
    4. Repair Strategies
    5. Trust

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Heriot-Watt University James Watt Scholarship

    Conference

    HAI '24: International Conference on Human-Agent Interaction
    November 24 - 27, 2024
    Swansea, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 121 of 404 submissions, 30%
