research-article
Open access

The Need for Verbal Robot Explanations and How People Would Like a Robot to Explain Itself

Published: 14 September 2021

Abstract

Although non-verbal cues such as arm movement and eye gaze can convey robot intention, they alone may not provide enough information for a human to fully understand a robot’s behavior. To better understand how to convey robot intention, we conducted an experiment (N = 366) investigating the need for robots to explain, as well as the content and properties of a desired explanation, such as timing, engagement importance, similarity to human explanations, and summarization. Participants watched a video in which the robot was commanded to hand over an almost-reachable cup and displayed one of six reactions intended to show the cup’s unreachability: doing nothing (No Cue), turning its head to the cup (Look), or turning its head to the cup while repeatedly moving its arm toward it (Look & Point), each with or without a Headshake. The results indicated that participants agreed robot behavior should be explained across all conditions, that explanations should be given in situ and in a manner similar to how humans explain, and that the robot should provide concise summaries and respond to only a few follow-up questions. Additionally, we replicated the study with another N = 366 participants after a 15-month span, and all major conclusions still held.


Cited By

  • (2024) A survey of communicating robot learning during human-robot interaction. The International Journal of Robotics Research. https://doi.org/10.1177/02783649241281369. Online publication date: 7 October 2024.
  • (2024) Software Architecture to Generate Assistive Behaviors for Social Robots. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1119–1123. https://doi.org/10.1145/3610978.3640715. Online publication date: 11 March 2024.
  • (2024) A Generalizable Architecture for Explaining Robot Failures Using Behavior Trees and Large Language Models. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1038–1042. https://doi.org/10.1145/3610978.3640551. Online publication date: 11 March 2024.
  • (2024) Explainability for Human-Robot Collaboration. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 1364–1366. https://doi.org/10.1145/3610978.3638154. Online publication date: 11 March 2024.
  • (2024) When Do People Want an Explanation from a Robot? Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 752–761. https://doi.org/10.1145/3610977.3634990. Online publication date: 11 March 2024.
  • (2024) Reactive or Proactive? How Robots Should Explain Failures. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 413–422. https://doi.org/10.1145/3610977.3634963. Online publication date: 11 March 2024.
  • (2024) (Gestures Vaguely): The Effects of Robots’ Use of Abstract Pointing Gestures in Large-Scale Environments. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 293–302. https://doi.org/10.1145/3610977.3634924. Online publication date: 11 March 2024.
  • (2024) Designing Indicators to Show a Robot’s Physical Vision Capability. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 987–988. https://doi.org/10.1109/VRW62533.2024.00290. Online publication date: 16 March 2024.
  • (2024) Exploratory user study on verbalization of explanations. 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), 1–7. https://doi.org/10.1109/ICHMS59971.2024.10555834. Online publication date: 15 May 2024.
  • (2024) Exploratory design of non-verbal politeness of a robotic arm. 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), 1–7. https://doi.org/10.1109/ICHMS59971.2024.10555647. Online publication date: 15 May 2024.


    Published In

ACM Transactions on Human-Robot Interaction, Volume 10, Issue 4
December 2021, 282 pages
EISSN: 2573-9522
DOI: 10.1145/3476005

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 14 September 2021
    Accepted: 01 April 2021
    Revised: 01 January 2021
    Received: 01 July 2020
    Published in THRI Volume 10, Issue 4


    Author Tags

    1. robot explanation
    2. behavior explanation
    3. system transparency

    Qualifiers

    • Research-article
    • Research
    • Refereed


Article Metrics

    • Downloads (last 12 months): 456
    • Downloads (last 6 weeks): 63

    Reflects downloads up to 13 November 2024.

