Self-adaptive Context and Modal-interaction Modeling For Multimodal Emotion Recognition

Haozhe Yang, Xianqiang Gao, Jianlong Wu, Tian Gan, Ning Ding, Feijun Jiang, Liqiang Nie


Abstract
The multimodal emotion recognition in conversation task aims to predict the emotion label for a given utterance from its context and multiple modalities. Existing approaches achieve good results but suffer from two limitations: 1) they lack modeling of diverse dependency ranges, i.e., long-range, short-range, and context-independent representations, and do not account for the varying recognition difficulty of each utterance; 2) they treat the contributions of the various modalities uniformly. To address these challenges, we propose the Self-adaptive Context and Modal-interaction Modeling (SCMM) framework. We first design a context representation module, which consists of three submodules that model multiple contextual representations. We then propose a modal-interaction module, including three interaction submodules, to make full use of each modality. Finally, we introduce a self-adaptive path selection module that selects an appropriate path in each module and integrates the features to obtain the final representation. Extensive experiments under four settings on three multimodal datasets, including IEMOCAP, MELD, and MOSEI, demonstrate that our proposed method outperforms state-of-the-art approaches.
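The self-adaptive path selection described in the abstract can be pictured as a lightweight gating network over candidate feature paths. Below is a minimal, hypothetical PyTorch sketch, not the authors' released code: the module name, dimensions, and soft gating via softmax are assumptions made purely for illustration, and the paper's actual selection mechanism may differ.

import torch
import torch.nn as nn

class SelfAdaptivePathSelection(nn.Module):
    # Hypothetical gate: scores each candidate path per utterance and
    # returns a weighted sum of the path outputs.
    def __init__(self, dim, num_paths=3):
        super().__init__()
        self.gate = nn.Linear(num_paths * dim, num_paths)

    def forward(self, paths):
        # paths: list of [batch, dim] tensors, one per candidate path
        stacked = torch.stack(paths, dim=1)              # [batch, num_paths, dim]
        scores = self.gate(stacked.flatten(1))           # [batch, num_paths]
        weights = torch.softmax(scores, dim=-1)          # per-utterance path weights
        return (weights.unsqueeze(-1) * stacked).sum(1)  # [batch, dim]

# Usage with three hypothetical context paths (long, short, independent):
selector = SelfAdaptivePathSelection(dim=256)
long_ctx, short_ctx, indep_ctx = (torch.randn(4, 256) for _ in range(3))
fused = selector([long_ctx, short_ctx, indep_ctx])       # [4, 256]

Soft gating is used here instead of a hard choice so the selector stays differentiable end to end; whether the paper uses soft or hard selection is not stated in this abstract.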
Anthology ID:
2023.findings-acl.390
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6267–6281
URL:
https://aclanthology.org/2023.findings-acl.390
DOI:
10.18653/v1/2023.findings-acl.390
Cite (ACL):
Haozhe Yang, Xianqiang Gao, Jianlong Wu, Tian Gan, Ning Ding, Feijun Jiang, and Liqiang Nie. 2023. Self-adaptive Context and Modal-interaction Modeling For Multimodal Emotion Recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6267–6281, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Self-adaptive Context and Modal-interaction Modeling For Multimodal Emotion Recognition (Yang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.390.pdf