
Communicating Robot Arm Motion Intent Through Mixed Reality Head-Mounted Displays

  • Conference paper
  • In: Robotics Research

Abstract

Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues, but robots often cannot communicate their motion intent to humans through these channels. Many existing methods for robot motion intent communication rely on 2D displays, which require the human to continually pause their work and check a visualization. We propose a mixed-reality head-mounted display (HMD) visualization that overlays the proposed robot motion on the wearer's real-world view of the robot and its environment. To evaluate this system against a 2D display visualization and against no visualization, we asked 32 participants to label different robot arm motions as either colliding or non-colliding with blocks on a table. The mixed-reality system yielded a 16% increase in accuracy and a 62% decrease in task-completion time compared to the next best system, demonstrating that a mixed-reality HMD lets a human determine more quickly and accurately where the robot will move than the compared baselines.

Eric Rosen and David Whitney contributed equally.
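The labeling task described in the abstract is a yes/no detection problem (colliding vs. non-colliding), so per-participant accuracy and signal-detection sensitivity are natural scores for it. The Python sketch below shows one way such responses might be scored; it is illustrative only, not the authors' analysis code, and all counts are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): scoring a binary
# collision-labeling task. All counts below are hypothetical.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') for a yes/no labeling task.

    A log-linear correction (+0.5 to counts, +1 to totals) avoids
    infinite z-scores when a rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def accuracy(hits, misses, false_alarms, correct_rejections):
    """Fraction of motions labeled correctly."""
    correct = hits + correct_rejections
    total = hits + misses + false_alarms + correct_rejections
    return correct / total

# Hypothetical counts for one participant: 20 colliding and 20
# non-colliding motions, labeled under one display condition.
print(accuracy(18, 2, 3, 17))   # 0.875
print(d_prime(18, 2, 3, 17))    # higher d' = better discrimination
```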


Notes

  1. Due to imperfections in the HoloLens’ SLAM, the authors noticed that a drift of several centimeters could occur over long periods of use.
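The paper does not describe a correction for this drift; the footnote only reports it. As an aside, one common mitigation is to periodically re-measure a fixed reference (for example, a marker at the robot base) and shift hologram poses by the observed offset. A minimal Python sketch of that idea follows; all names and values are hypothetical.

```python
# Illustrative sketch only: the paper does not describe a drift-correction
# method. One common mitigation is to periodically re-measure a known
# reference point and re-anchor hologram poses by the observed offset.
# All names and values here are hypothetical.

def estimate_drift(expected_anchor, observed_anchor):
    """Translation drift (meters) between where a fixed anchor should be
    and where the HMD's SLAM currently places it."""
    return tuple(o - e for e, o in zip(expected_anchor, observed_anchor))

def correct_pose(pose, drift):
    """Shift a hologram pose back by the estimated drift."""
    return tuple(p - d for p, d in zip(pose, drift))

# Example: SLAM has drifted ~3 cm along x over a long session.
drift = estimate_drift((0.0, 0.0, 0.0), (0.03, 0.00, 0.01))
print(correct_pose((0.53, 0.20, 0.41), drift))  # (0.50, 0.20, 0.40)
```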


Acknowledgements

We thank David Laidlaw for fruitful discussion on VR literature. This work was supported by DARPA under grant number D15AP00102 and by the AFRL under grant number FA9550-17-1-0124. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or AFRL.

Author information

Correspondence to Eric Rosen.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Rosen, E. et al. (2020). Communicating Robot Arm Motion Intent Through Mixed Reality Head-Mounted Displays. In: Amato, N., Hager, G., Thomas, S., Torres-Torriti, M. (eds) Robotics Research. Springer Proceedings in Advanced Robotics, vol 10. Springer, Cham. https://doi.org/10.1007/978-3-030-28619-4_26
