Gesture Evaluation in Virtual Reality

Published: 04 November 2024

Abstract

Gestures play a crucial role in human communication, enriching interpersonal interaction through non-verbal expression. Advances in technology now allow virtual avatars to leverage AI-generated communicative gestures, enhancing their life-likeness and the quality of their communication. Traditionally, evaluations of AI-generated gestures have been confined to 2D settings; Virtual Reality (VR), however, offers an immersive alternative that may change how virtual gestures are perceived.
This paper introduces a novel evaluation approach for computer-generated gestures, investigating the impact of a fully immersive environment compared to a traditional 2D setting. The goal is to identify the differences, benefits, and drawbacks of the two alternatives. The study also evaluates three gesture-generation algorithms submitted to the 2023 GENEA Challenge, comparing their performance across the two virtual settings.
Experiments showed that the VR setting affects how generated gestures are rated: on average, participants rated gestures slightly higher in VR than in 2D. The generation models also kept a consistent ranking across settings, so the setting had only a limited impact on their relative performance; its largest effect was on the perception of 'true movement', which received higher ratings in VR than in 2D.
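The headline comparison (ratings collected in VR vs. 2D) is the kind of ordinal rating contrast typically analysed with a non-parametric rank test. As a hedged illustration only, not the authors' actual analysis pipeline, the sketch below computes a Mann-Whitney U statistic in pure Python over made-up rating data; all numbers are hypothetical:

```python
# Hedged sketch: Mann-Whitney U statistic for comparing ordinal gesture
# ratings between two viewing settings. This is NOT the paper's code;
# the ratings below are invented for illustration.

def mann_whitney_u(a, b):
    """Return the U statistic for sample `a` vs. sample `b`,
    assigning average ranks to tied values."""
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find the run of tied values starting at position i.
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg_rank
        i = j
    r_a = sum(ranks[:len(a)])        # rank sum of sample `a`
    n_a = len(a)
    return r_a - n_a * (n_a + 1) / 2  # U = R_a - n_a(n_a+1)/2

# Hypothetical 1-5 ratings of the same gesture clips in each setting.
ratings_2d = [3, 4, 2, 5, 3]
ratings_vr = [4, 4, 3, 5, 4]

u_vr = mann_whitney_u(ratings_vr, ratings_2d)
print(u_vr)  # → 17.0 out of a maximum of 25.0; U > 12.5 means VR tends higher
```

A U statistic above half its maximum (n_a·n_b / 2) indicates the first sample tends to be ranked higher, matching the direction of the reported VR-over-2D effect; in practice one would also compute a p-value and correct for multiple comparisons.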




Published In

ICMI Companion '24: Companion Proceedings of the 26th International Conference on Multimodal Interaction
November 2024
252 pages
ISBN:9798400704635
DOI:10.1145/3686215
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. dyadic interaction
  2. embodied conversational agents
  3. evaluation paradigms
  4. gesture generation
  5. virtual reality

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Digital Futures

Conference

ICMI '24: International Conference on Multimodal Interaction
November 4-8, 2024
San Jose, Costa Rica

Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions (42%)


Article Metrics

  • Total citations: 0
  • Total downloads: 158
  • Downloads (last 12 months): 158
  • Downloads (last 6 weeks): 71

Reflects downloads up to 16 Feb 2025
