DOI: 10.1007/978-3-031-61356-2_9
Article

Big Movements or Small Motions: Controlling Digital Avatars with Single-Camera Motion Capture

Published: 29 June 2024

Abstract

Digital human avatars enable more natural interaction in virtual reality, games, and related scenarios, and enhance user immersion. This study investigates how a digital human avatar can be controlled and how different movement amplitudes affect the user experience, using MediaPipe, a monocular (single-camera) motion-capture technology, to capture whole-body and hand joint information and drive the avatar. We tested interaction at three movement magnitudes (small, medium, and large) across two game tasks: simple walking and complex gesture manipulation. The results show that small-amplitude actions are easy to control and improve ease of use in simple tasks, whereas medium-amplitude actions provide higher immersion and engagement in complex tasks. These findings offer new insights into designing user-friendly digital avatar interactions that can improve user satisfaction and the quality of virtual experiences, and we expect them to guide the development of digital avatar technology.
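The three movement-magnitude conditions compared in the study can be illustrated with a minimal sketch. The code below is not from the paper: it assumes joint positions in normalized [0, 1] image coordinates, as single-camera trackers such as MediaPipe report them, and the function names and the `SMALL_MAX`/`MEDIUM_MAX` thresholds are hypothetical values chosen purely for illustration.

```python
import math

# Hypothetical thresholds (not from the paper) separating the three
# movement-amplitude conditions, in normalized image coordinates.
SMALL_MAX = 0.05
MEDIUM_MAX = 0.15


def mean_displacement(prev, curr):
    """Mean Euclidean displacement between two sets of 2D joint positions.

    `prev` and `curr` are equal-length lists of (x, y) tuples in
    normalized [0, 1] coordinates.
    """
    if len(prev) != len(curr):
        raise ValueError("joint lists must have the same length")
    total = sum(math.dist(p, c) for p, c in zip(prev, curr))
    return total / len(prev)


def classify_amplitude(prev, curr):
    """Bucket a frame-to-frame movement into small, medium, or large."""
    d = mean_displacement(prev, curr)
    if d <= SMALL_MAX:
        return "small"
    if d <= MEDIUM_MAX:
        return "medium"
    return "large"
```

In practice such a classifier would run over landmark streams from the tracker, smoothing over several frames rather than a single frame pair; this sketch only shows the bucketing idea.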



Published In

Design, User Experience, and Usability: 13th International Conference, DUXU 2024, Held as Part of the 26th HCI International Conference, HCII 2024, Washington, DC, USA, June 29 – July 4, 2024, Proceedings, Part III
Jun 2024
347 pages
ISBN: 978-3-031-61355-5
DOI: 10.1007/978-3-031-61356-2
Editors: Aaron Marcus, Elizabeth Rosenzweig, Marcelo M. Soares

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Digital Human
  2. Avatar
  3. Motion Capture
  4. Video Game
  5. User Study

Qualifiers

  • Article
