
MCapsNet: Capsule Network for Text with Multi-Task Learning

Liqiang Xiao, Honglun Zhang, Wenqing Chen, Yongkun Wang, Yaohui Jin


Abstract
Multi-task learning has the ability to share knowledge among related tasks and implicitly increase the training data. However, it has long been frustrated by interference among tasks. This paper investigates the performance of capsule networks for text and proposes a capsule-based multi-task learning architecture that is unified, simple, and effective. Leveraging the advantages of capsules for feature clustering, the proposed task routing algorithm clusters the features for each task within the network, which helps reduce interference among tasks. Experiments on six text classification datasets demonstrate the effectiveness of our models and their feature-clustering characteristics.
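For readers unfamiliar with capsule routing, the sketch below shows dynamic routing-by-agreement (Sabour et al., 2017), the mechanism the paper's task routing builds on. It is a minimal illustration only: MCapsNet's exact task-specific routing is not reproduced here, and all function and variable names in the snippet are our own illustrative assumptions.

    # Minimal sketch of dynamic routing-by-agreement (Sabour et al., 2017).
    # MCapsNet's task routing extends this idea to cluster features per task;
    # that extension is NOT shown here -- this is the generic base mechanism.
    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        """Non-linearity that keeps a vector's direction and maps its norm into [0, 1)."""
        sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
        return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

    def dynamic_routing(u_hat, num_iters=3):
        """u_hat: (num_in, num_out, dim) prediction vectors from lower-level capsules.
        Returns (num_out, dim) higher-level capsule outputs."""
        num_in, num_out, _ = u_hat.shape
        b = np.zeros((num_in, num_out))                            # routing logits
        for _ in range(num_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
            s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per output capsule
            v = squash(s)                                          # output capsules
            b += np.einsum('iod,od->io', u_hat, v)                 # update logits by agreement
        return v

    # Toy usage: route 6 input capsules to 2 output capsules of dimension 4.
    rng = np.random.default_rng(0)
    v = dynamic_routing(rng.normal(size=(6, 2, 4)))
    print(v.shape)  # (2, 4)

Per the abstract, MCapsNet adapts this routing step so that the coupling coefficients cluster lower-level features toward task-specific capsules, which is what reduces cross-task interference.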
Anthology ID:
D18-1486
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4565–4574
URL:
https://aclanthology.org/D18-1486
DOI:
10.18653/v1/D18-1486
Cite (ACL):
Liqiang Xiao, Honglun Zhang, Wenqing Chen, Yongkun Wang, and Yaohui Jin. 2018. MCapsNet: Capsule Network for Text with Multi-Task Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4565–4574, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
MCapsNet: Capsule Network for Text with Multi-Task Learning (Xiao et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1486.pdf
Data
SST, SST-2