

Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings

Published: 18 October 2021

Abstract

Public speaking skills are essential to professional success, yet public speaking anxiety (PSA) is considered one of the most common social phobias. Understanding PSA can help communication experts identify effective ways to treat this communication-based disorder. Existing work on PSA relies on self-reports and aggregate multimodal measures, which do not capture the temporal variation in PSA. This paper examines temporal trajectories of acoustic and physiological measures throughout public speaking encounters with real and virtual audiences, and models them in both knowledge- and data-driven ways. Knowledge-driven models leverage theoretically grounded patterns by fitting interpretable parametric functions to the corresponding signals. Data-driven models consider the functional nature of multimodal signals via functional principal component analysis. Results indicate that the parameters of the proposed models can successfully estimate individuals’ trait anxiety in both real-life and virtual reality settings, and suggest that models trained on data obtained with virtual public speaking stimuli can estimate levels of PSA in real life.
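The data-driven models in the abstract rest on functional principal component analysis (FPCA), which treats each speaker's signal over the presentation as a single curve and summarizes it by scores on a small set of shared eigenfunctions; the references list the PACE toolbox [21] for this. The following is a minimal NumPy sketch on simulated trajectories, not the paper's data or implementation — the signal shape, dimensions, and noise level are all illustrative assumptions:

```python
import numpy as np

# Hypothetical example: FPCA on simulated per-speaker signal trajectories.
# Each row is one speaker's physiological-like signal sampled at T time points.
rng = np.random.default_rng(0)
T = 50
t = np.linspace(0.0, 1.0, T)

# Simulate 30 trajectories: a shared rise-then-fall shape scaled per speaker,
# plus measurement noise.
base = np.sin(np.pi * t)
X = np.array([a * base + rng.normal(scale=0.1, size=T)
              for a in rng.uniform(0.5, 1.5, size=30)])

# Center around the mean trajectory, then take the SVD: the right singular
# vectors are the functional principal components (eigenfunctions), and the
# projections of each centered curve onto them are the FPC scores, which can
# serve as low-dimensional features for downstream anxiety estimation.
mean_curve = X.mean(axis=0)
Xc = X - mean_curve
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfunctions = Vt          # shape (n_components, T)
scores = Xc @ Vt.T           # shape (n_curves, n_components)

# Fraction of variance explained by each component; with one dominant mode
# of variation, the first component should capture most of it.
explained = s**2 / np.sum(s**2)
print(explained[:3])
```

In this toy setup the first FPC score essentially recovers each curve's scale factor, illustrating how FPCA compresses a whole trajectory into a few interpretable numbers.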

Supplementary Material

MP4 File (ICMI21-sp1198.mp4)
Presentation video of "Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings"

References

[1]
Mike Allen, John E Hunter, and William A Donohue. 1989. Meta-analysis of self-report data on the effectiveness of public speaking anxiety treatment techniques. Communication Education 38, 1 (1989), 54–76.
[2]
Page L Anderson, Matthew Price, Shannan M Edwards, Mayowa A Obasaju, Stefan K Schmertz, Elana Zimand, and Martha R Calamaras. 2013. Virtual reality exposure therapy for social anxiety disorder: A randomized controlled trial. Journal of Consulting and Clinical Psychology 81, 5 (2013), 751.
[3]
Juan Pablo Arias, Carlos Busso, and Nestor Becerra Yoma. 2013. Energy and F0 contour modeling with functional data analysis for emotional speech detection. In Interspeech. 2871–2875.
[4]
Ligia Batrinca, Giota Stratou, Ari Shapiro, Louis-Philippe Morency, and Stefan Scherer. 2013. Cicero-towards a multimodal virtual audience platform for public speaking training. In International workshop on intelligent virtual agents. Springer, 116–128.
[5]
Michael J Beatty. 1988. Situational and predispositional correlates of public speaking anxiety. Communication Education 37, 1 (1988), 28–39.
[6]
Graham D Bodie. 2010. A racing heart, rattling knees, and ruminative thoughts: Defining, explaining, and treating public speaking anxiety. Communication Education 59, 1 (2010), 70–105.
[7]
Steven Booth-Butterfield and Malloy Gould. 1986. The communication anxiety inventory: Validation of state-and context-communication apprehension. Communication Quarterly 34, 2 (1986), 194–205.
[8]
Lei Chen, Gary Feng, Jilliam Joe, Chee Wee Leong, Christopher Kitchen, and Chong Min Lee. 2014. Towards automated assessment of public speaking skills using multimodal cues. In Proceedings of the 16th International Conference on Multimodal Interaction. 200–203.
[9]
Mathieu Chollet, Torsten Wörtwein, Louis-Philippe Morency, Ari Shapiro, and Stefan Scherer. 2015. Exploring feedback strategies to improve public speaking: An interactive virtual audience framework. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 1143–1154.
[10]
Empatica E4. 2021. https://www.empatica.com/.
[11]
Florian Eyben, Felix Weninger, Florian Gross, and Björn Schuller. 2013. Recent developments in opensmile, the munich open-source multimedia feature extractor. In Proceedings of the 21st ACM international conference on Multimedia. ACM, 835–838.
[12]
Kexin Feng, Megha Yadav, Md Nazmus Sakib, Amir Behzadan, and Theodora Chaspari. 2019. Estimating Public Speaking Anxiety from Speech Signals Using Unsupervised Transfer Learning. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 1–5.
[13]
National Collaborating Centre for Mental Health. 2013. Social anxiety disorder: The NICE guideline on recognition, assessment and treatment. Royal College of Psychiatrists.
[14]
Everlyne Kimani, Timothy Bickmore, Ha Trinh, and Paola Pedrelli. 2019. You’ll be Great: Virtual Agent-based Cognitive Restructuring to Reduce Public Speaking Anxiety. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 641–647.
[15]
Gaang Lee, Byungjoo Choi, Changbum Ryan Ahn, and SangHyun Lee. 2020. Wearable Biosensor and Hotspot Analysis–Based Framework to Detect Stress Hotspots for Advancing Elderly’s Mobility. Journal of Management in Engineering 36, 3 (2020), 04020010.
[16]
Xi Li, Jidong Tao, Michael T Johnson, Joseph Soltis, Anne Savage, Kirsten M Leong, and John D Newman. 2007. Stress and emotion classification using jitter and shimmer features. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Vol. 4. IEEE, IV–1081.
[17]
James C McCroskey. 1970. Measures of communication-bound anxiety. (1970).
[18]
Angeliki Metallinou, Ruth B Grossman, and Shrikanth Narayanan. 2013. Quantifying atypicality in affective facial expressions of children with autism spectrum disorders. In 2013 IEEE international conference on multimedia and expo (ICME). IEEE, 1–6.
[19]
Ehsanul Haque Nirjhar, Amir Behzadan, and Theodora Chaspari. 2020. Exploring Bio-Behavioral Signal Trajectories of State Anxiety During Public Speaking. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1294–1298.
[20]
Virtual Orator. 2021. https://virtualorator.com/.
[21]
PACE. 2021. http://www.stat.ucdavis.edu/PACE/.
[22]
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
[23]
David P Pertaub, Mel Slater, and Chris Barker. 2001. An experiment on fear of public speaking in virtual reality. Studies in health technology and informatics (2001), 372–378.
[24]
Oculus Rift. 2021. https://www.oculus.com/.
[25]
Jan Schneider, Dirk Börner, Peter Van Rosmalen, and Marcus Specht. 2015. Presentation trainer, your public speaking multimodal coach. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. ACM, 539–546.
[26]
M Iftekhar Tanveer, Samiha Samrose, Raiyan Abdul Baten, and M Ehsan Hoque. 2018. Awe the audience: How the narrative trajectories affect audience perception in public speaking. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–12.
[27]
Azadeh Tavoli, Mahdiyeh Melyani, Maryam Bakhtiari, Gholam Hossein Ghaedi, and Ali Montazeri. 2009. The Brief Fear of Negative Evaluation Scale (BFNE): translation and validation study of the Iranian version. BMC psychiatry 9, 1 (2009), 42.
[28]
Helene S Wallach, Marilyn P Safir, and Margalit Bar-Zvi. 2009. Virtual reality cognitive behavior therapy for public speaking anxiety: a randomized clinical trial. Behavior modification 33, 3 (2009), 314–338.
[29]
Megha Yadav, Md Nazmus Sakib, Kexin Feng, Theodora Chaspari, and Amir Behzadan. 2019. Virtual reality interfaces and population-specific models to mitigate public speaking anxiety. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 1–7.
[30]
Megha Yadav, Md Nazmus Sakib, Ehsanul Haque Nirjhar, Kexin Feng, Amir Behzadan, and Theodora Chaspari. 2020. Exploring individual differences of public speaking anxiety in real-life and virtual presentations. IEEE Transactions on Affective Computing (2020), 1–1. https://doi.org/10.1109/TAFFC.2020.3048299
[31]
Fang Yao, Hans-Georg Müller, and Jane-Ling Wang. 2005. Functional data analysis for sparse longitudinal data. J. Amer. Statist. Assoc. 100, 470 (2005), 577–590.

Cited By

  • (2023) Expression and Perception of Stress Through the Lens of Multimodal Signals: A Case Study in Interpersonal Communication Settings. 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). 10.1109/ACIIW59127.2023.10388186, 1–5. Online publication date: 10-Sep-2023.
  • (2022) Real-time Public Speaking Anxiety Prediction Model for Oral Presentations. Companion Publication of the 2022 International Conference on Multimodal Interaction. 10.1145/3536220.3563686, 30–35. Online publication date: 7-Nov-2022.



Published In

ICMI '21: Proceedings of the 2021 International Conference on Multimodal Interaction
October 2021
876 pages
ISBN:9781450384810
DOI:10.1145/3462244


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Public speaking anxiety
  2. functional principal component analysis
  3. physiological signals
  4. speech
  5. time trajectory

Qualifiers

  • Short-paper
  • Research
  • Refereed limited


Conference

ICMI '21: International Conference on Multimodal Interaction
October 18–22, 2021
Montréal, QC, Canada

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

