MuSe 2023 Challenge: Multimodal Prediction of Mimicked Emotions, Cross-Cultural Humour, and Personalised Recognition of Affects

Published: 27 October 2023 · DOI: 10.1145/3581783.3610943

Abstract

The 4th Multimodal Sentiment Analysis Challenge (MuSe) focuses on Multimodal Prediction of Mimicked Emotions, Cross-Cultural Humour, and Personalised Recognition of Affects. The workshop takes place in conjunction with ACM Multimedia '23. We provide three datasets as part of the challenge: (i) the Hume-Vidmimic dataset, which offers more than 30 hours of expressive behaviour data from 557 participants who mimic and rate three emotions: Approval, Disappointment, and Uncertainty. This multimodal resource is valuable for studying human emotional expression. (ii) The 2023 edition of the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, which comprises recordings of German football press conferences in the training set and of English football press conferences in the unseen test set. This configuration offers a cross-cultural evaluation environment for humour recognition. (iii) The Ulm-Trier Social Stress Test (Ulm-TSST) dataset, which contains recordings of subjects under stress, annotated with continuous arousal and valence signals; some test labels are provided to aid personalisation. Based on these datasets, we formulate three multimodal affective computing tasks: (1) the Mimicked Emotions Sub-Challenge (MuSe-Mimic) for categorical emotion prediction, (2) the Cross-Cultural Humour Detection Sub-Challenge (MuSe-Humour) for humour detection across cultures, and (3) the Personalisation Sub-Challenge (MuSe-Personalisation) for personalised dimensional emotion recognition. In this summary, we outline the challenge's motivation, participation guidelines, conditions, and results.
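For readers unfamiliar with how such dimensional predictions are evaluated: the MuSe series has typically scored continuous arousal and valence predictions with the Concordance Correlation Coefficient (CCC), which, unlike plain Pearson correlation, also penalises offsets in mean and scale. The Python sketch below is a minimal illustration of the metric only; the function and the toy signals are our own and are not taken from the official challenge baseline code.

    import numpy as np

    def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        # Concordance Correlation Coefficient (Lin, 1989): rewards both
        # correlation with the gold signal and agreement in mean and variance.
        mean_t, mean_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()
        cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
        return float(2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2))

    # Hypothetical usage: a noisy copy of a gold arousal trace scores near 1.
    rng = np.random.default_rng(0)
    gold = np.sin(np.linspace(0.0, 6.0, 500))        # stand-in gold signal
    pred = gold + rng.normal(0.0, 0.1, gold.shape)   # hypothetical prediction
    print(f"CCC = {ccc(gold, pred):.3f}")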




Published In

MM '23: Proceedings of the 31st ACM International Conference on Multimedia
October 2023, 9913 pages
ISBN: 9798400701085
DOI: 10.1145/3581783

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 27 October 2023

Author Tags

1. affective computing
2. challenge
3. emotion mimics
4. cross-cultural humour detection
5. emotion recognition
6. multimodal fusion
7. multimodal sentiment analysis
8. summary paper

Qualifiers

• Abstract

Funding Sources

• Deutsche Forschungsgemeinschaft (DFG)

Conference

MM '23: The 31st ACM International Conference on Multimedia
October 29 - November 3, 2023
Ottawa, ON, Canada

Acceptance Rates

Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


Cited By

• (2024) The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition. Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, 1-9. https://doi.org/10.1145/3689062.3689088. Online publication date: 28-Oct-2024.
• (2024) A novel hybrid deep learning IChOA-CNN-LSTM model for modality-enriched and multilingual emotion recognition in social media. Scientific Reports, 14:1. https://doi.org/10.1038/s41598-024-73452-2. Online publication date: 27-Sep-2024.
• (2024) A multimodal shared network with a cross-modal distribution constraint for continuous emotion recognition. Engineering Applications of Artificial Intelligence, 133:PD. https://doi.org/10.1016/j.engappai.2024.108413. Online publication date: 24-Jul-2024.
• (2023) The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 1-10. https://doi.org/10.1145/3606039.3613114. Online publication date: 1-Nov-2023.
• (2023) Multimodal Cross-Lingual Features and Weight Fusion for Cross-Cultural Humor Detection. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 51-57. https://doi.org/10.1145/3606039.3613110. Online publication date: 1-Nov-2023.
• (2023) Exploring the Power of Cross-Contextual Large Language Model in Mimic Emotion Prediction. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 19-26. https://doi.org/10.1145/3606039.3613109. Online publication date: 1-Nov-2023.
• (2023) Exclusive Modeling for MuSe-Personalisation Challenge. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 73-80. https://doi.org/10.1145/3606039.3613108. Online publication date: 1-Nov-2023.
• (2023) Humor Detection System for MuSE 2023: Contextual Modeling, Pesudo Labelling, and Post-smoothing. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 35-41. https://doi.org/10.1145/3606039.3613107. Online publication date: 1-Nov-2023.
• (2023) MMT-GD: Multi-Modal Transformer with Graph Distillation for Cross-Cultural Humor Detection. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 43-49. https://doi.org/10.1145/3606039.3613106. Online publication date: 1-Nov-2023.
• (2023) ECG-Coupled Multimodal Approach for Stress Detection. Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, Humour and Personalisation, 67-72. https://doi.org/10.1145/3606039.3613103. Online publication date: 1-Nov-2023.
