
DOI: 10.1145/3359996.3364267
Research article | Open access

Exploring the Use of a Robust Depth-sensor-based Avatar Control System and its Effects on Communication Behaviors

Published: 12 November 2019

Abstract

To interact as fully tracked avatars with rich hand gestures in Virtual Reality (VR), we often need to wear a tracking suit or attach extra sensors to our bodies. Cumbersome devices and low-fidelity behavioral representations may impair user experience and performance, especially in social scenarios where good communication is required. In this paper, we use multiple depth sensors and focus on increasing the behavioral fidelity of a participant's virtual body representation. To investigate the impact of this depth-sensor-based avatar system (full-body tracking with hand gestures), we compared it against a controller-based avatar system (partial-body tracking with limited hand gestures). We designed a single-user VR interview simulation to measure the effects on presence, virtual body ownership, workload, usability, and perceived self-performance. The interview was recorded in VR, together with all verbal and non-verbal cues, and participants then reviewed their previous performance from a third-person view. Our results show that the depth-sensor-based avatar control system increased virtual body ownership and also improved the user experience. In addition, users rated their non-verbal behavior performance higher with the full-body, depth-sensor-based avatar system.
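
As a purely illustrative note (not taken from the paper): a system of this kind has to merge the coarse full-body skeleton delivered by the depth sensors with finer hand-tracking data before the combined pose drives the avatar. The minimal Python sketch below shows one way such per-frame fusion could look; the class names, joint names, and confidence-based blending rule are hypothetical assumptions, not the authors' implementation.

# Illustrative sketch only: merging a depth-sensor body stream with a
# hand-tracking stream into one avatar pose per frame. All names and the
# blending rule are hypothetical, not taken from the paper or any SDK.
from dataclasses import dataclass
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class JointSample:
    position: Vec3     # joint position in a shared tracking space (metres)
    confidence: float  # 0.0 (lost) .. 1.0 (fully tracked)

def lerp(a: Vec3, b: Vec3, t: float) -> Vec3:
    """Linearly interpolate between two positions."""
    return (a[0] + (b[0] - a[0]) * t,
            a[1] + (b[1] - a[1]) * t,
            a[2] + (b[2] - a[2]) * t)

def fuse_pose(body: Dict[str, JointSample],
              hands: Dict[str, JointSample]) -> Dict[str, Vec3]:
    """Combine coarse body joints with finer hand joints.

    Body joints (head, spine, elbows, ...) come from the depth sensors;
    finger joints come from the hand tracker. Where both report the same
    joint (e.g. a wrist), blend by confidence so the hands stay attached
    to the arms even when one stream degrades.
    """
    fused: Dict[str, Vec3] = {name: s.position for name, s in body.items()}
    for name, sample in hands.items():
        if name in fused:
            total = sample.confidence + body[name].confidence
            w = sample.confidence / max(total, 1e-6)
            fused[name] = lerp(body[name].position, sample.position, w)
        elif sample.confidence > 0.3:
            # Finger joints only the hand tracker can see.
            fused[name] = sample.position
    return fused

if __name__ == "__main__":
    body_frame = {
        "head":        JointSample((0.00, 1.70, 0.00), 0.9),
        "wrist_right": JointSample((0.40, 1.10, 0.20), 0.5),
    }
    hand_frame = {
        "wrist_right":     JointSample((0.42, 1.12, 0.21), 0.9),
        "index_tip_right": JointSample((0.50, 1.15, 0.25), 0.8),
    }
    for joint, pos in fuse_pose(body_frame, hand_frame).items():
        print(joint, pos)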

Supplementary Material

Video: a11-wu-supplement (vrst19-30.mp4)



Information

Published In

VRST '19: Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology
November 2019
498 pages
ISBN: 9781450370011
DOI: 10.1145/3359996
This work is licensed under a Creative Commons Attribution 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 November 2019


Author Tags

  1. avatar control
  2. communication behavior
  3. depth sensor
  4. motion capture
  5. tracking
  6. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

VRST '19: 25th ACM Symposium on Virtual Reality Software and Technology
November 12-15, 2019
Parramatta, NSW, Australia

Acceptance Rates

Overall Acceptance Rate: 66 of 254 submissions, 26%


Article Metrics

  • Downloads (last 12 months): 209
  • Downloads (last 6 weeks): 39
Reflects downloads up to 13 Feb 2025

Cited By

  • A Systematic Process to Engineer Dependable Integration of Frame-based Input Devices in a Multimodal Input Chain: Application to Rehabilitation in Healthcare. Proceedings of the ACM on Human-Computer Interaction 8, EICS (2024), 1-31. https://doi.org/10.1145/3664633. Online: 17-Jun-2024.
  • Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems. Computer Graphics Forum 43, 2 (2024). https://doi.org/10.1111/cgf.15066. Online: 30-Apr-2024.
  • A Systematic Review and Meta-analysis of the Effectiveness of Body Ownership Illusions in Virtual Reality. ACM Transactions on Computer-Human Interaction 30, 5 (2023), 1-42. https://doi.org/10.1145/3590767. Online: 23-Sep-2023.
  • Impact of Spatial Environment Design on Cognitive Load. 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (2023), 849-850. https://doi.org/10.1109/VRW58643.2023.00267. Online: Mar-2023.
  • Hand Tracking and Gesture Recognition by Multiple Contactless Sensors: A Survey. IEEE Transactions on Human-Machine Systems 53, 1 (2023), 35-43. https://doi.org/10.1109/THMS.2022.3188840. Online: Feb-2023.
  • Realistic Motion Avatars are the Future for Social Interaction in Virtual Reality. Frontiers in Virtual Reality 2 (2022). https://doi.org/10.3389/frvir.2021.750729. Online: 3-Jan-2022.
  • Nonverbal Communication in Immersive Virtual Reality through the Lens of Presence: A Critical Review. PRESENCE: Virtual and Augmented Reality 31 (2022), 147-187. https://doi.org/10.1162/pres_a_00387. Online: 1-Dec-2022.
  • Effects of Avatar Face Level of Detail Control on Social Presence in Augmented Reality Remote Collaboration. 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (2022), 763-767. https://doi.org/10.1109/ISMAR-Adjunct57072.2022.00161. Online: Oct-2022.
  • Using a Fully Expressive Avatar to Collaborate in Virtual Reality: Evaluation of Task Performance, Presence, and Attraction. Frontiers in Virtual Reality 2 (2021). https://doi.org/10.3389/frvir.2021.641296. Online: 7-Apr-2021.
  • A novel upper-limb tracking system in a virtual environment for stroke rehabilitation. Journal of NeuroEngineering and Rehabilitation 18, 1 (2021). https://doi.org/10.1186/s12984-021-00957-6. Online: 27-Nov-2021.
