CalmResponses: Displaying Collective Audience Reactions in Remote Communication

Research article. Published: 22 June 2022. DOI: 10.1145/3505284.3529959

Abstract

We propose a system that displays audience eye gaze and nod reactions to enhance synchronous remote communication. Opportunities to speak to others remotely have increased in recent years. In contrast to offline situations, however, speakers in remote settings often have difficulty observing audience reactions all at once, which makes them more anxious and less confident in their speech. Recent studies have proposed methods of presenting various audience reactions to speakers, but because these methods require additional devices to measure the reactions, they are impractical in everyday situations; moreover, they do not present the audience's overall reaction. In contrast, we design and develop CalmResponses, a browser-based system that measures audience eye gaze and nod reactions using only a built-in webcam and presents them collectively to speakers. The results of our two user studies indicated that the number of fillers in a speaker's speech decreases when the audience's eye gaze is presented, and that the speaker's self-rating score increases when the audience's nodding is presented. Moreover, comments from audience members suggested that CalmResponses also benefits them in terms of co-presence and privacy concerns.
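The full paper details CalmResponses' actual sensing pipeline; as a rough illustration of the idea of measuring nod reactions with nothing but a webcam, the sketch below counts down-up head oscillations from per-frame nose-tip positions. The function `count_nods` and its `threshold` parameter are hypothetical, and it assumes a face-tracking library (e.g., a browser face-mesh model) already supplies one nose-tip y coordinate per video frame — this is not the authors' implementation.

```python
def count_nods(nose_y, threshold=5.0):
    """Count nod-like gestures in a per-frame sequence of nose-tip
    y coordinates (image y grows downward, so a nod is a rise in y
    followed by a return toward the resting position).

    Illustrative sketch only; the paper's actual method may differ.
    """
    if not nose_y:
        return 0
    nods = 0
    ref = nose_y[0]      # resting head position (smallest recent y)
    down = False         # currently in the downward phase of a nod
    for y in nose_y[1:]:
        if not down:
            if y - ref > threshold:
                down = True          # head dropped past the threshold
            elif y < ref:
                ref = y              # track a higher resting position
        else:
            if y - ref < threshold / 2:
                nods += 1            # head returned up: one full nod
                down = False
                ref = min(ref, y)
    return nods
```

A speaker-side view could then aggregate such per-audience-member counts into a single collective signal, which matches the paper's emphasis on presenting overall rather than individual reactions.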


Cited By

  • (2024) Investigating the Role of Real-Time Chat Summaries in Supporting Live Streamers. Proceedings of the 50th Graphics Interface Conference, 1–12. https://doi.org/10.1145/3670947.3670980 (published 3 June 2024)
  • (2023) Investigation of the Effect of Students’ Nodding on Their Arousal Levels in On-Demand Lectures. Sensors 23(8), 3858. https://doi.org/10.3390/s23083858 (published 10 April 2023)


Published In

IMX '22: Proceedings of the 2022 ACM International Conference on Interactive Media Experiences
June 2022, 390 pages
ISBN: 9781450392129
DOI: 10.1145/3505284
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. audience sensing
  2. eye gaze
  3. feedback design
  4. nodding
  5. remote communication

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

IMX '22

Acceptance Rates

Overall acceptance rate: 69 of 245 submissions (28%)


