DOI: 10.1145/3519391.3522753
Research Article · Public Access

Synchronous and Asynchronous Manipulation Switching of Multiple Robotic Embodiment Using EMG and Eye Gaze

Published: 18 April 2022

Abstract

Through the use of multiple avatars and robots, researchers have explored constructing an alter ego whose senses and movements are synchronized with those of the operator. When manipulating such bodies, there are two possible operation states: synchronous operation, in which multiple bodies are synchronized simultaneously, and asynchronous operation, in which a specific body is selectively operated. We propose a system that allows intuitive switching of both the active robot and the operation state using eye gaze and electromyography (EMG). A two-stage pick-and-place task was performed with the proposed system, and task performance and subjective evaluations of embodiment were analyzed. To examine how the tracker's coordinates should be handled during asynchronous operation of the robot arm, we conducted a similar study under two coordinate conditions: an absolute coordinate condition, in which the tracker's coordinates are fixed in space, and a relative coordinate condition, in which only the tracker's motion is applied.

Supplementary Material

MP4 File (AHs2022_Multimediafiles_69.mp4)
Supplemental video


Published In

AHs '22: Proceedings of the Augmented Humans International Conference 2022
March 2022
350 pages
ISBN:9781450396325
DOI:10.1145/3519391
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. EMG Sensor
  2. Eye Gaze
  3. Robot Arm
  4. augmented human
  5. embodiment
  6. multiple bodies

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

AHs 2022: Augmented Humans 2022
March 13–15, 2022
Kashiwa, Chiba, Japan

Article Metrics

  • Downloads (Last 12 months): 252
  • Downloads (Last 6 weeks): 28
Reflects downloads up to 27 Nov 2024

Cited By

  • (2024) Evaluations of Parallel Views for Sequential VR Search Tasks. Proceedings of the Augmented Humans International Conference 2024, 148–156. https://doi.org/10.1145/3652920.3652928. Online publication date: 4-Apr-2024.
  • (2024) Cognitive Grasp: A Robotic Arm Responding to Human Muscle Intent. 2024 International Conference on Social and Sustainable Innovations in Technology and Engineering (SASI-ITE), 119–124. https://doi.org/10.1109/SASI-ITE58663.2024.00028. Online publication date: 23-Feb-2024.
  • (2024) Toward AI-Mediated Avatar-Based Telecommunication: Investigating Visual Impression of Switching Between User- and AI-Controlled Avatars in Video Chat. IEEE Access, 12, 113372–113383. https://doi.org/10.1109/ACCESS.2024.3441233. Online publication date: 2024.
  • (2023) ShadowClones: an Interface to Maintain a Multiple Sense of Body-space Coordination in Multiple Visual Perspectives. Proceedings of the Augmented Humans International Conference 2023, 171–178. https://doi.org/10.1145/3582700.3582706. Online publication date: 12-Mar-2023.
  • (2023) Exploring Enhancements towards Gaze Oriented Parallel Views in Immersive Tasks. 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 620–630. https://doi.org/10.1109/VR55154.2023.00077. Online publication date: Mar-2023.
