
Lexical Translation Inconsistency-Aware Document-Level Translation Repair

Zhen Zhang, Junhui Li, Shimin Tao, Hao Yang


Abstract
Following the idea of “one translation per discourse”, in this paper we aim to improve translation consistency via document-level translation repair (DocRepair), i.e., automatic post-editing of document translations. To this end, we propose a lexical translation inconsistency-aware DocRepair approach to explicitly model translation inconsistency. First, we locate inconsistencies in the automatic translation. Then we provide translation candidates for those inconsistencies. Finally, we propose a lattice-like input to properly model inconsistent tokens and phrases along with their candidates. Experimental results on three document-level translation datasets show that, built on G-Transformer, a state-of-the-art document-to-document (Doc2Doc) translation model, our Doc2Doc DocRepair not only achieves significant improvement in translation quality in BLEU scores, but also greatly improves lexical translation consistency.
Anthology ID:
2023.findings-acl.791
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12492–12505
URL:
https://aclanthology.org/2023.findings-acl.791
DOI:
10.18653/v1/2023.findings-acl.791
Cite (ACL):
Zhen Zhang, Junhui Li, Shimin Tao, and Hao Yang. 2023. Lexical Translation Inconsistency-Aware Document-Level Translation Repair. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12492–12505, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Lexical Translation Inconsistency-Aware Document-Level Translation Repair (Zhang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.791.pdf