
LRMM: Learning to Recommend with Missing Modalities

Cheng Wang, Mathias Niepert, Hui Li


Abstract
Multimodal learning has shown promising performance in content-based recommendation due to the auxiliary user and item information of multiple modalities such as text and images. However, the problem of incomplete and missing modalities is rarely explored, and most existing methods fail to learn a recommendation model when modalities are missing or corrupted. In this paper, we propose LRMM, a novel framework that mitigates not only the problem of missing modalities but also, more generally, the cold-start problem of recommender systems. We propose modality dropout (m-drop) and a multimodal sequential autoencoder (m-auto) to learn multimodal representations for complementing and imputing missing modalities. Extensive experiments on real-world Amazon data show that LRMM achieves state-of-the-art performance on rating prediction tasks. More importantly, LRMM is more robust than previous methods in alleviating data sparsity and the cold-start problem.
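The abstract's modality dropout (m-drop) idea can be illustrated with a minimal sketch: during training, entire modalities of a sample are randomly zeroed out so the model learns to predict with missing inputs. The function and parameter names below, the drop probability, and the "keep at least one modality" rule are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def modality_dropout(modalities, drop_prob=0.3, keep_at_least_one=True, rng=None):
    """Randomly zero out whole modalities (e.g. text or image features).

    `modalities` maps a modality name to its feature vector. Each modality is
    dropped independently with probability `drop_prob`; optionally at least
    one modality is kept so the sample is never fully empty.
    """
    rng = rng or np.random.default_rng()
    names = list(modalities.keys())
    dropped = {n: rng.random() < drop_prob for n in names}
    if keep_at_least_one and all(dropped.values()):
        # Re-enable one randomly chosen modality.
        dropped[names[rng.integers(len(names))]] = False
    return {
        n: np.zeros_like(v) if dropped[n] else v
        for n, v in modalities.items()
    }

# Toy training sample with text and image embeddings.
sample = {
    "text": np.random.randn(128),
    "image": np.random.randn(256),
}
masked = modality_dropout(sample)

In the paper, the autoencoder component (m-auto) would then be trained to reconstruct or impute the dropped modalities from the remaining ones; the sketch above only covers the masking step.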
Anthology ID:
D18-1373
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3360–3370
URL:
https://aclanthology.org/D18-1373
DOI:
10.18653/v1/D18-1373
Cite (ACL):
Cheng Wang, Mathias Niepert, and Hui Li. 2018. LRMM: Learning to Recommend with Missing Modalities. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3360–3370, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
LRMM: Learning to Recommend with Missing Modalities (Wang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1373.pdf
Attachment:
D18-1373.Attachment.pdf