
Assisting Manipulation and Grasping in Robot Teleoperation with Augmented Reality Visual Cues

Published: 07 May 2021 · DOI: 10.1145/3411764.3445398

Abstract

Teleoperating industrial manipulators in co-located spaces can be challenging. Facilitating robot teleoperation with augmented reality (AR), by providing additional visual information about the environment and the robot's affordances, can improve task performance in manipulation and grasping. In this paper, we present two designs of augmented visual cues that aim to enhance the operator's visual space with hints about the position of the robot gripper in the workspace and relative to the target, thereby improving distance perception and, in turn, task performance. We evaluate both designs against a baseline in an experiment in which participants teleoperate a robotic arm to perform pick-and-place tasks. Our results show performance improvements at different levels, reflected in both objective and subjective measures, with trade-offs in terms of time, accuracy, and participants' views of teleoperation. These findings show the potential of AR not only for teleoperation itself, but also for understanding the human-robot workspace.
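The cue designs themselves are described in the full text, but the underlying idea of deriving an on-screen hint from the gripper-to-target distance can be sketched briefly. The Python snippet below is a hypothetical illustration rather than the authors' implementation: it maps the Euclidean distance between gripper and target to a colour that an AR overlay could render, with the `near` and `far` thresholds chosen arbitrarily for the sketch.

```python
import math

def distance_cue(gripper_pos, target_pos, near=0.05, far=0.30):
    """Map gripper-to-target distance (metres) to an RGB cue colour.

    Hypothetical sketch only; the paper's actual cue designs and
    parameters are given in the full text.
    """
    d = math.dist(gripper_pos, target_pos)  # Euclidean distance
    # Normalise into [0, 1]: 0 = at the target, 1 = at or beyond `far`.
    t = min(max((d - near) / (far - near), 0.0), 1.0)
    # Interpolate green (close) -> red (far) for an AR overlay to render.
    return (t, 1.0 - t, 0.0)

# Example: a gripper 12 cm from the target yields a yellowish-green cue.
print(distance_cue((0.10, 0.00, 0.25), (0.10, 0.12, 0.25)))
```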

Supplementary Material

MP4 File (3411764.3445398_videofigure.mp4)
Supplemental video
MP4 File (3411764.3445398_videopreview.mp4)
Preview video




    Published In

CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 10,862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764


    Publisher

Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 07 May 2021


    Author Tags

    1. augmented reality
    2. human-robot interaction
    3. robot teleoperation
    4. visual cues

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • German Federal Ministry of Education and Research

    Conference

    CHI '21

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


    Article Metrics

• Downloads (last 12 months): 364
• Downloads (last 6 weeks): 27
Reflects downloads up to 19 Nov 2024


    Cited By

• (2024) A Robotic Teleoperation System with Integrated Augmented Reality and Digital Twin Technologies for Disassembling End-of-Life Batteries. Batteries 10(11), 382. https://doi.org/10.3390/batteries10110382
• (2024) Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features. Proceedings of the ACM on Human-Computer Interaction 8(ISS), 377–406. https://doi.org/10.1145/3698142
• (2024) Towards Unlimited Task Coverage and Direct (far-off) Manipulation in eXtended Reality Remote Collaboration. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 745–749. https://doi.org/10.1145/3610978.3640737
• (2024) Exploring of Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 828–832. https://doi.org/10.1145/3610978.3640626
• (2024) RoboVisAR: Immersive Authoring of Condition-based AR Robot Visualisations. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 462–471. https://doi.org/10.1145/3610977.3634972
• (2024) Virtual reality-based dynamic scene recreation and robot teleoperation for hazardous environments. Computer-Aided Civil and Infrastructure Engineering. https://doi.org/10.1111/mice.13337
• (2024) Asynchronously Assigning, Monitoring, and Managing Assembly Goals in Virtual Reality for High-Level Robot Teleoperation. 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 450–460. https://doi.org/10.1109/VR58804.2024.00066
• (2024) AR-enhanced digital twin for human–robot interaction in manufacturing systems. Energy, Ecology and Environment 9(5), 530–548. https://doi.org/10.1007/s40974-024-00327-7
• (2024) Intuitive teleoperation with hand-tracking in VR: a study on master–slave system virtualization and 3D workspace visualization. The International Journal of Advanced Manufacturing Technology 134(5–6), 2353–2372. https://doi.org/10.1007/s00170-024-14213-3
• (2023) Usability Evaluation of an Augmented Reality System for Collaborative Fabrication between Multiple Humans and Industrial Robots. Proceedings of the 2023 ACM Symposium on Spatial User Interaction, 1–10. https://doi.org/10.1145/3607822.3614528

