Abstract
Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to one another through gestures, gaze, and social cues, but robots often have difficulty communicating their motion intent to humans via these channels. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed-reality head-mounted display (HMD) visualization that overlays the proposed robot motion on the wearer's real-world view of the robot and its environment. To evaluate this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. We found a 16% increase in accuracy and a 62% decrease in task-completion time compared to the next best system. This demonstrates that a mixed-reality HMD allows a human to more quickly and accurately tell where the robot is going to move than the compared baselines.
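As a rough illustration of how metrics of this kind can be derived from a collision-labeling study, the following Python sketch computes labeling accuracy and mean task-completion time per visualization condition from hypothetical participant responses, then reports the relative difference between two conditions. The trial data, condition names, and variable names are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch (not the authors' analysis code): per-condition labeling
# accuracy and mean completion time, plus the relative difference between
# two visualization conditions, from hypothetical trials.

from statistics import mean

# Each trial: (condition, participant_label, ground_truth, seconds_to_answer).
# Labels are True for "colliding", False for "non-colliding".
# All values below are made-up placeholders.
trials = [
    ("mr_hmd",     True,  True,  4.1),
    ("mr_hmd",     False, False, 3.8),
    ("2d_display", True,  False, 9.5),
    ("2d_display", False, False, 8.7),
]

def summarize(condition):
    """Return (accuracy, mean_time_s) for one visualization condition."""
    rows = [t for t in trials if t[0] == condition]
    accuracy = mean(1.0 if label == truth else 0.0 for _, label, truth, _ in rows)
    mean_time = mean(t[3] for t in rows)
    return accuracy, mean_time

acc_mr, time_mr = summarize("mr_hmd")
acc_2d, time_2d = summarize("2d_display")

# Relative differences, analogous in spirit to the "16% more accurate,
# 62% faster than the next best system" style of comparison in the abstract.
print(f"accuracy gain: {(acc_mr - acc_2d) * 100:.0f} percentage points")
print(f"time reduction: {(1 - time_mr / time_2d) * 100:.0f}%")
```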
Eric Rosen and David Whitney contributed equally.
Notes
1. Due to imperfections in the HoloLens' SLAM, the authors noticed that a drift of several centimeters could occur over a long period of use.
Acknowledgements
We thank David Laidlaw for fruitful discussion on VR literature. This work was supported by DARPA under grant number D15AP00102 and by the AFRL under grant number FA9550-17-1-0124. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or AFRL.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Rosen, E. et al. (2020). Communicating Robot Arm Motion Intent Through Mixed Reality Head-Mounted Displays. In: Amato, N., Hager, G., Thomas, S., Torres-Torriti, M. (eds) Robotics Research. Springer Proceedings in Advanced Robotics, vol 10. Springer, Cham. https://doi.org/10.1007/978-3-030-28619-4_26
DOI: https://doi.org/10.1007/978-3-030-28619-4_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-28618-7
Online ISBN: 978-3-030-28619-4
eBook Packages: Intelligent Technologies and Robotics (R0)