
Leveraging depth data in remote robot teleoperation interfaces for general object manipulation

Published: 01 January 2020

Abstract

Robust remote teleoperation of high-degree-of-freedom manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily use a free positioning pose specification approach, in which the user independently controls each degree of freedom in free space. In this work, we present two novel interfaces: constrained positioning and point-and-click. Both approaches incorporate scene information from depth data into the grasp pose specification process, reducing the number of 3D transformations the user must input. The novel interactions are designed for 2D image streams rather than traditional 3D virtual scenes, further reducing mental transformations by replacing the controllable camera viewpoint with fixed physical camera viewpoints. We present interface implementations of our novel approaches, as well as free positioning, in both 2D and 3D visualization modes. In addition, we present the results of a 90-participant user study comparing the effectiveness of each approach on a set of general object manipulation tasks, as well as the effects of implementing each approach in 2D image views versus 3D depth views. Our results show that point-and-click outperforms both free positioning and constrained positioning, significantly increasing the number of tasks completed and significantly reducing task failures and grasping errors, while also significantly reducing the number of user interactions required to specify poses. In addition, we found that regardless of the interaction approach, the 2D visualization mode yielded significantly better performance than the 3D visualization mode, with statistically significant reductions in task failures, grasping errors, task completion time, number of interactions, and user workload, all while reducing the bandwidth requirements imposed by streaming depth data.
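
To illustrate the kind of computation a point-and-click interface of this type performs, the sketch below shows one way a single 2D click on an image, combined with registered depth data, could yield a 3D grasp position and approach direction: the click is back-projected through the camera intrinsics into the camera frame, and a plane fit to the neighboring depth pixels supplies the surface normal. This is a minimal Python/NumPy sketch under assumed pinhole-camera conventions; the function names and parameters are illustrative, not the authors' implementation.

    import numpy as np

    def deproject_pixel(u, v, z, fx, fy, cx, cy):
        # Back-project pixel (u, v) at depth z (meters) into the camera
        # frame using a standard pinhole camera model.
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def estimate_surface_normal(depth, u, v, fx, fy, cx, cy, k=5):
        # Fit a plane to the 3D points in a (2k+1) x (2k+1) depth
        # neighborhood around the click; the right singular vector with
        # the smallest singular value is the plane normal.
        h, w = depth.shape
        pts = [deproject_pixel(uu, vv, depth[vv, uu], fx, fy, cx, cy)
               for vv in range(max(0, v - k), min(h, v + k + 1))
               for uu in range(max(0, u - k), min(w, u + k + 1))
               if depth[vv, uu] > 0]
        pts = np.asarray(pts)
        centroid = pts.mean(axis=0)
        normal = np.linalg.svd(pts - centroid)[2][-1]
        # Orient the normal toward the camera (frame origin) so that the
        # negated normal points into the surface.
        return normal if np.dot(normal, -centroid) > 0 else -normal

    def click_to_grasp(depth, u, v, fx, fy, cx, cy):
        # One 2D click -> 3D grasp position plus approach direction,
        # so the user never enters a 3D transformation by hand.
        position = deproject_pixel(u, v, depth[v, u], fx, fy, cx, cy)
        approach = -estimate_surface_normal(depth, u, v, fx, fy, cx, cy)
        return position, approach

A complete interface would extend the approach direction into a full end-effector orientation (e.g., by selecting a wrist roll about the approach axis) before handing the resulting pose to a motion planner.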



Published In

International Journal of Robotics Research, Volume 39, Issue 1
Jan 2020
155 pages

Publisher

Sage Publications, Inc.

United States

Publication History

Published: 01 January 2020

Author Tags

  1. Robot teleoperation
  2. object manipulation
  3. user interface design
  4. usability study
  5. RGBD interfaces

Qualifiers

  • Research-article


Cited By

  • (2023) Comparing a Graphical User Interface, Hand Gestures and Controller in Virtual Reality for Robot Teleoperation. Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, pp. 644–648. DOI: 10.1145/3568294.3580165. Online publication date: 13-Mar-2023.
  • (2023) Systematic Literature Review on the User Evaluation of Teleoperation Interfaces for Professional Service Robots. HCI in Business, Government and Organizations, pp. 66–85. DOI: 10.1007/978-3-031-36049-7_6. Online publication date: 23-Jul-2023.
  • (2023) Research on Mixed Reality Visual Augmentation Method for Teleoperation Interactive System. Virtual, Augmented and Mixed Reality, pp. 490–502. DOI: 10.1007/978-3-031-35634-6_35. Online publication date: 23-Jul-2023.
  • (2022) A Haptic Multimodal Interface with Abstract Controls for Semi-Autonomous Manipulation. Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, pp. 1206–1207. DOI: 10.5555/3523760.3523977. Online publication date: 7-Mar-2022.
  • (2022) Intuitive, Efficient and Ergonomic Tele-Nursing Robot Interfaces: Design Evaluation and Evolution. ACM Transactions on Human-Robot Interaction 11(3): 1–41. DOI: 10.1145/3526108. Online publication date: 13-Jul-2022.
  • (2021) Situated Live Programming for Human-Robot Collaboration. The 34th Annual ACM Symposium on User Interface Software and Technology, pp. 613–625. DOI: 10.1145/3472749.3474773. Online publication date: 10-Oct-2021.
  • (2020) WireOn: Supporting Remote Collaboration for Embedded System Development. Companion Publication of the 2020 Conference on Computer Supported Cooperative Work and Social Computing, pp. 7–11. DOI: 10.1145/3406865.3418564. Online publication date: 17-Oct-2020.
  • (2020) Design of a High-level Teleoperation Interface Resilient to the Effects of Unreliable Robot Autonomy. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 11519–11524. DOI: 10.1109/IROS45743.2020.9341322. Online publication date: 24-Oct-2020.
