
Predicting Meeting Success With Nuanced Emotions

Published: 01 April 2022

Abstract

While current meeting tools are able to capture key analytics (e.g., transcript and summarization), they often do not capture nuanced emotions (e.g., disappointment and feeling impressed). Given the high number of meetings held online during the COVID-19 pandemic, we had an unprecedented opportunity to record extensive meeting data with a newly developed meeting companion application. We analyzed 72 hours of conversations from 85 real-world virtual meetings and 256 self-reported meeting success scores. We did so by developing a deep-learning framework that extracts 32 nuanced emotions from meeting transcripts, and by then testing a variety of models that predict meeting success from the extracted emotions. We found that rare emotions (e.g., disappointment and excitement) were generally more predictive of success than more common emotions. This demonstrates the importance of quantifying nuanced emotions to further improve productivity analytics and, in the long term, employee well-being.
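The abstract outlines a two-stage pipeline: classify each transcript utterance into one of 32 nuanced emotions, then turn the per-meeting emotion distribution into features for a success-score model. Below is a minimal Python sketch of that idea. It assumes an utterance-level classifier covering the 32 emotion labels of the EmpatheticDialogues benchmark (Rashkin et al., 2019), which the paper cites and whose labels include "disappointed" and "impressed"; the checkpoint name, the abbreviated label list, and the choice of a random-forest regressor are illustrative assumptions, not the authors' implementation.

    # A minimal sketch, not the authors' code. Assumptions are marked below.
    from collections import Counter

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from transformers import pipeline

    # Assumption: a checkpoint fine-tuned on the 32 EmpatheticDialogues
    # emotion labels; "some-org/..." is a placeholder, not a real model name.
    classifier = pipeline("text-classification",
                          model="some-org/empathetic-dialogues-emotions")

    # Stand-in for the full 32-label set used in the paper.
    EMOTIONS = ["disappointed", "impressed", "excited", "annoyed"]

    def meeting_features(utterances):
        """Classify each utterance; return normalized emotion frequencies."""
        preds = classifier(utterances)            # one label per utterance
        counts = Counter(p["label"] for p in preds)
        total = sum(counts.values()) or 1
        return np.array([counts[e] / total for e in EMOTIONS])

    def fit_success_model(meetings, scores):
        """meetings: list of transcripts (each a list of utterance strings);
        scores: aligned self-reported meeting success scores."""
        X = np.vstack([meeting_features(m) for m in meetings])
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        return model.fit(X, scores)

Any linear or tree-based model could stand in for the regressor here; the design point the abstract emphasizes is that per-meeting frequencies of rare labels such as "disappointed" and "excited" carry most of the predictive signal.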


Cited By

  • (2024) "The CoExplorer Technology Probe: A Generative AI-Powered Adaptive Interface to Support Intentionality in Planning and Running Video Meetings," Proceedings of the 2024 ACM Designing Interactive Systems Conference, pp. 1638–1657. https://doi.org/10.1145/3643834.3661507. Online publication date: 1-Jul-2024.
  • (2024) "Meeting Effectiveness and Inclusiveness: Large-scale Measurement, Identification of Key Features, and Prediction in Real-world Remote Meetings," Proceedings of the ACM on Human-Computer Interaction, vol. 8, no. CSCW1, pp. 1–39. https://doi.org/10.1145/3637370. Online publication date: 26-Apr-2024.
  • (2023) "Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing," Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–17. https://doi.org/10.1145/3544548.3581376. Online publication date: 19-Apr-2023.
  • (2022) "The Future of Hybrid Meetings," Proceedings of the 1st Annual Meeting of the Symposium on Human-Computer Interaction for Work, pp. 1–6. https://doi.org/10.1145/3533406.3533415. Online publication date: 8-Jun-2022.

Published In

IEEE Pervasive Computing, Volume 21, Issue 2
April-June 2022
98 pages

Publisher

IEEE Educational Activities Department, United States
