DOI: 10.1145/3491102.3517482
Research Article · CHI Conference Proceedings

Glancee: An Adaptable System for Instructors to Grasp Student Learning Status in Synchronous Online Classes

Published: 29 April 2022

Abstract

Synchronous online learning has become a trend in recent years. However, instructors often face the challenge of inferring audiences’ reactions and learning status without seeing their faces in video feeds, which prevents instructors from establishing connections with students. To solve this problem, based on a need-finding survey with 67 college instructors, we propose Glancee, a real-time interactive system with adaptable configurations, sidebar-based visual displays, and comprehensive learning status detection algorithms. Then, we conduct a within-subject user study in which 18 college instructors deliver lectures online with Glancee and two baselines, EngageClass and ZoomOnly. Results show that Glancee can effectively support online teaching and is perceived to be significantly more helpful than the baselines. We further investigate how instructors’ emotions, behaviors, attention, cognitive load, and trust are affected during the class. Finally, we offer design recommendations for future online teaching assistant systems.

Supplementary Material

MP4 File (3491102.3517482-video-preview.mp4)
Video Preview



      Published In

      CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
      April 2022
      10459 pages
      ISBN: 9781450391573
      DOI: 10.1145/3491102

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Affective Computing
      2. E-Learning
      3. Human-centered Design
      4. Online Class
      5. Videoconferencing

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • Hong Kong General Research Fund (GRF)

      Conference

      CHI '22: CHI Conference on Human Factors in Computing Systems
      April 29 - May 5, 2022
      New Orleans, LA, USA

      Acceptance Rates

      Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


      Article Metrics

      • Downloads (last 12 months): 270
      • Downloads (last 6 weeks): 29

      Reflects downloads up to 13 Nov 2024

      Cited By

      • (2024) Research on the Mechanism of Dynamic Monitoring of Undergraduate Students’ Learning Situation and Adaptive Adjustment of Teaching Mode--Based on the Perspective of Educational Evaluation Reform. Applied Mathematics and Nonlinear Sciences 9(1). DOI: 10.2478/amns-2024-2746. Published 4 Oct 2024.
      • (2024) EduLive: Re-Creating Cues for Instructor-Learners Interaction in Educational Live Streams with Learners’ Transcript-Based Annotations. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1–33. DOI: 10.1145/3686960. Published 8 Nov 2024.
      • (2024) ClassID: Enabling Student Behavior Attribution from Ambient Classroom Sensing Systems. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1–28. DOI: 10.1145/3659586. Published 15 May 2024.
      • (2024) VizGroup: An AI-assisted Event-driven System for Collaborative Programming Learning Analytics. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–22. DOI: 10.1145/3654777.3676347. Published 13 Oct 2024.
      • (2024) Towards Feature Engineering with Human and AI’s Knowledge: Understanding Data Science Practitioners’ Perceptions in Human&AI-Assisted Feature Engineering Design. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1789–1804. DOI: 10.1145/3643834.3661517. Published 1 Jul 2024.
      • (2024) Investigating the Effects of Real-time Student Monitoring Interface on Instructors’ Monitoring Practices in Online Teaching. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–11. DOI: 10.1145/3613904.3642845. Published 11 May 2024.
      • (2024) Charting the Future of AI in Project-Based Learning: A Co-Design Exploration with Students. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. DOI: 10.1145/3613904.3642807. Published 11 May 2024.
      • (2024) “Are You Really Sure?” Understanding the Effects of Human Self-Confidence Calibration in AI-Assisted Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–20. DOI: 10.1145/3613904.3642671. Published 11 May 2024.
      • (2024) Metamorpheus: Interactive, Affective, and Creative Dream Narration Through Metaphorical Visual Storytelling. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–16. DOI: 10.1145/3613904.3642410. Published 11 May 2024.
      • (2024) Application Research of Emotion Analysis, Eye Tracking, and Head Movement Monitoring Based on Facial Recognition Algorithms in ESL Student Engagement Assessment. 2024 5th International Conference on Information Science, Parallel and Distributed Systems (ISPDS), 409–412. DOI: 10.1109/ISPDS62779.2024.10667641. Published 31 May 2024.
