Authors
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li
Publication date
2016/1/19
Journal
arXiv preprint arXiv:1601.04811
Description
The attention mechanism has advanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. However, it tends to ignore past alignment information, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT in this paper. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which encourages the NMT system to pay more attention to untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.
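The sketch below illustrates the general idea described in the abstract: a coverage vector accumulates the attention weights assigned to each source word and is fed back into the attention scores at the next step. It is a minimal, hypothetical illustration only; the function name, matrix shapes, and the simple additive-score formulation are assumptions for exposition and do not reproduce the paper's exact coverage models.

```python
import numpy as np

def coverage_attention_step(dec_state, enc_states, coverage, W_s, W_h, W_c, v):
    """One attention step augmented with a coverage vector (hypothetical names/shapes).

    dec_state : (d,)   current decoder hidden state
    enc_states: (T, d) encoder annotations for T source words
    coverage  : (T,)   accumulated attention mass per source word so far
    W_s, W_h  : (d, d) projections of decoder / encoder states
    W_c       : (d,)   projection of the scalar coverage value
    v         : (d,)   scoring vector
    """
    # Coverage enters the additive attention score, so source words that have
    # already received a lot of attention are discouraged, and untranslated
    # (low-coverage) words are favoured.
    scores = np.tanh(enc_states @ W_h + dec_state @ W_s
                     + coverage[:, None] * W_c) @ v        # (T,)

    # Softmax over source positions gives the attention weights.
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()

    context = alpha @ enc_states        # context vector passed to the decoder
    coverage = coverage + alpha         # update the attention history
    return context, alpha, coverage
```

In this reading, the coverage vector starts at zero and grows toward one (or more) for each source word as translation proceeds, giving the attention model an explicit record of what has already been translated.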
Total citations
2015: 3, 2016: 38, 2017: 90, 2018: 143, 2019: 163, 2020: 154, 2021: 152, 2022: 87, 2023: 77, 2024: 45
Scholar articles
Z Tu, Z Lu, Y Liu, X Liu, H Li - arXiv preprint arXiv:1601.04811, 2016