
Specializing Multi-domain NMT via Penalizing Low Mutual Information

Jiyoung Lee, Hantae Kim, Hyunchang Cho, Edward Choi, Cheonbok Park


Abstract
Multi-domain Neural Machine Translation (NMT) trains a single model on multiple domains. It is appealing because of its efficacy in handling multiple domains within one model. An ideal multi-domain NMT model learns distinctive domain characteristics simultaneously; however, grasping each domain's peculiarities is a non-trivial task. In this paper, we investigate domain-specific information through the lens of mutual information (MI) and propose a new objective that penalizes low MI, pushing it to become higher. Our method achieves state-of-the-art performance among current competitive multi-domain NMT models. We also show that our objective promotes low MI to become higher, resulting in domain-specialized multi-domain NMT.
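As a rough illustration of the idea named in the abstract, one plausible way to penalize low mutual information is sketched below in LaTeX; the log-ratio estimator, the hinge form of the penalty, and the symbols \(\lambda\) and \(\tau\) are illustrative assumptions, not necessarily the paper's exact formulation.

% Conditional MI of a translation y with its domain d, given source x, approximated as the
% log-ratio between a domain-aware and a domain-agnostic prediction of the same model:
\[
\mathrm{MI}(y; d \mid x) \;\approx\; \log p_\theta(y \mid x, d) \;-\; \log p_\theta(y \mid x)
\]
% One way to "penalize low MI to become higher": add a hinge-style term that is active only
% when the estimated MI falls below an assumed threshold \tau, weighted by an assumed \lambda:
\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{NMT}} \;+\; \lambda \, \max\bigl(0,\; \tau - \mathrm{MI}(y; d \mid x)\bigr)
\]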
Anthology ID:
2022.emnlp-main.680
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10015–10026
URL:
https://aclanthology.org/2022.emnlp-main.680
DOI:
10.18653/v1/2022.emnlp-main.680
Cite (ACL):
Jiyoung Lee, Hantae Kim, Hyunchang Cho, Edward Choi, and Cheonbok Park. 2022. Specializing Multi-domain NMT via Penalizing Low Mutual Information. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10015–10026, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Specializing Multi-domain NMT via Penalizing Low Mutual Information (Lee et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.680.pdf