
Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition

Wenliang Dai, Zihan Liu, Tiezheng Yu, Pascale Fung


Abstract
Despite the recent achievements made in the multimodal emotion recognition task, two problems still exist and have not been well investigated: 1) the relationships between different emotion categories are not utilized, which leads to sub-optimal performance; and 2) current models fail to cope well with low-resource emotions, especially unseen emotions. In this paper, we propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues. We use pre-trained word embeddings to represent emotion categories for textual data. Then, two mapping functions are learned to transfer these embeddings into visual and acoustic spaces. For each modality, the model calculates the representation distance between the input sequence and target emotions and makes predictions based on the distances. By doing so, our model can directly adapt to unseen emotions in any modality, since we have their pre-trained embeddings and modality mapping functions. Experiments show that our model achieves state-of-the-art performance on most of the emotion categories. In addition, our model outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
Anthology ID:
2020.aacl-main.30
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
Kam-Fai Wong, Kevin Knight, Hua Wu
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
269–280
URL:
https://aclanthology.org/2020.aacl-main.30
Cite (ACL):
Wenliang Dai, Zihan Liu, Tiezheng Yu, and Pascale Fung. 2020. Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 269–280, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition (Dai et al., AACL 2020)
PDF:
https://aclanthology.org/2020.aacl-main.30.pdf
Code
 wenliangdai/Modality-Transferable-MER
Data
CMU-MOSEI
IEMOCAP