2020
Computer Assisted Translation with Neural Quality Estimation and Automatic Post-Editing
Ke Wang | Jiayi Wang | Niyu Ge | Yangbin Shi | Yu Zhao | Kai Fan
Findings of the Association for Computational Linguistics: EMNLP 2020
With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming machine translation results. However, the gap between machine translation systems and human translators still needs to be closed manually through post-editing. In this paper, we propose an end-to-end deep learning framework for quality estimation and automatic post-editing of machine translation output. Our goal is to provide error-correction suggestions and to further relieve the burden on human translators through an interpretable model. To imitate the behavior of human translators, we design three efficient delegation modules (quality estimation, generative post-editing, and atomic-operation post-editing) and construct a hierarchical model based on them. We evaluate this approach on the English–German dataset from the WMT 2017 APE shared task, and our experimental results achieve state-of-the-art performance. In human evaluation, we also verify that certified translators can significantly expedite their post-editing process with our model.
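As a rough illustration of the hierarchical design the abstract describes, the sketch below routes an MT output through quality estimation and then to either atomic-operation or generative post-editing. The module interfaces, routing thresholds, and stub behaviors are assumptions for illustration, not the paper's actual implementation.

```python
# A minimal control-flow sketch of a hierarchical QE/APE pipeline.
# score_quality, generative_ape, atomic_ape, and the thresholds are
# illustrative stand-ins; in practice each would be a neural model.

from dataclasses import dataclass

@dataclass
class PostEditResult:
    translation: str
    quality: float   # estimated quality in [0, 1]
    edited: bool

def score_quality(src: str, mt: str) -> float:
    """Quality estimation module: predicts how good `mt` is for `src`.
    Stubbed here; in practice a neural QE model."""
    return 0.4  # placeholder score

def generative_ape(src: str, mt: str) -> str:
    """Generative post-editing: rewrites the translation from scratch.
    Stubbed; in practice a seq2seq decoder conditioned on (src, mt)."""
    return mt

def atomic_ape(src: str, mt: str) -> str:
    """Atomic-operation post-editing: applies local edit operations
    (e.g., keep/delete/replace per token). Stubbed for illustration."""
    return mt

def post_edit(src: str, mt: str, low: float = 0.3, high: float = 0.8) -> PostEditResult:
    """Hierarchical routing: QE decides whether the MT output is kept,
    lightly edited with atomic operations, or regenerated."""
    q = score_quality(src, mt)
    if q >= high:   # good enough: keep as-is
        return PostEditResult(mt, q, False)
    if q >= low:    # close: local atomic edits suffice
        return PostEditResult(atomic_ape(src, mt), q, True)
    return PostEditResult(generative_ape(src, mt), q, True)  # regenerate
```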
Long-Short Term Masking Transformer: A Simple but Effective Baseline for Document-level Neural Machine Translation
Pei Zhang | Boxing Chen | Niyu Ge | Kai Fan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Many document-level neural machine translation (NMT) systems have explored the utility of context-aware architectures, usually at the cost of a growing number of parameters and higher computational complexity. However, little attention has been paid to the baseline model. In this paper, we extensively study the pros and cons of the standard transformer in document-level translation, and find that its auto-regressive property brings both the advantage of consistency and the disadvantage of error accumulation. We therefore propose a surprisingly simple long-short term masking self-attention on top of the standard transformer, which both captures long-range dependencies effectively and reduces the propagation of errors. We evaluate our approach on two publicly available document-level datasets, achieving strong BLEU results and capturing discourse phenomena.
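A minimal sketch of what a long-short term attention mask could look like, assuming heads are partitioned so that short-term heads attend only within the current sentence while long-term heads attend over the full document context; the paper's exact masking scheme may differ (causality is also omitted here for brevity).

```python
# Illustrative long-short term masking on top of standard self-attention.
# Assumes a per-token sentence index is available for the document.

import torch

def long_short_masks(sent_ids: torch.Tensor):
    """Build boolean attention masks for a token sequence spanning several
    sentences. sent_ids: (seq_len,) sentence index per token.
    Returns (short_mask, long_mask), each (seq_len, seq_len); True = may attend."""
    same_sent = sent_ids.unsqueeze(0) == sent_ids.unsqueeze(1)
    short_mask = same_sent                   # short-term heads: within sentence only
    long_mask = torch.ones_like(same_sent)   # long-term heads: full document context
    return short_mask, long_mask

def masked_attention(q, k, v, mask):
    """Standard scaled dot-product attention with a boolean mask."""
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Example: 6 tokens spanning two sentences; short-term heads see only their
# own sentence, long-term heads see everything.
sent_ids = torch.tensor([0, 0, 0, 1, 1, 1])
short_mask, long_mask = long_short_masks(sent_ids)
x = torch.randn(6, 16)
out_short = masked_attention(x, x, x, short_mask)
out_long = masked_attention(x, x, x, long_mask)
```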
2019
Lattice Transformer for Speech Translation
Pei Zhang | Niyu Ge | Boxing Chen | Kai Fan
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Recent advances in sequence modeling have highlighted the strengths of the transformer architecture, especially in achieving state-of-the-art machine translation results. However, depending on the upstream systems, e.g., speech recognition or word segmentation, the input to a translation system can vary greatly. The goal of this work is to extend the attention mechanism of the transformer to naturally consume a lattice in addition to the traditional sequential input. We first propose a general lattice transformer for speech translation, where the input is the output of an automatic speech recognition (ASR) system containing multiple paths and posterior scores. To leverage the extra information in the lattice structure, we develop a novel controllable lattice attention mechanism to obtain latent representations. On the LDC Spanish-English speech translation corpus, our experiments show that the lattice transformer generalizes significantly better and outperforms both a transformer baseline and a lattice LSTM. We additionally validate our approach on the WMT 2017 Chinese-English translation task with lattice inputs from different BPE segmentations, where we also observe improvements over strong baselines.
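A minimal sketch of attention over a lattice input, assuming the lattice arrives as node states plus a reachability mask and per-node ASR posteriors; the paper's controllable lattice attention is more elaborate than this.

```python
# Illustrative lattice attention: restrict attention to lattice-compatible
# node pairs and bias scores by ASR posterior probabilities.

import torch

def lattice_attention(q, k, v, reachable, log_posterior, alpha: float = 1.0):
    """q, k, v: (n_nodes, d) node states; reachable: (n_nodes, n_nodes) bool,
    True where node j shares a lattice path with node i;
    log_posterior: (n_nodes,) log ASR posteriors;
    alpha controls how strongly posteriors steer the attention."""
    scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
    scores = scores + alpha * log_posterior.unsqueeze(0)  # favor likely nodes
    scores = scores.masked_fill(~reachable, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Example: a 4-node lattice where nodes 1 and 2 are mutually exclusive
# ASR alternatives and therefore never attend to each other.
n, d = 4, 16
x = torch.randn(n, d)
reachable = torch.tensor([
    [True, True,  True,  True],   # start node lies on every path
    [True, True,  False, True],   # alternative A: not on B's path
    [True, False, True,  True],   # alternative B: not on A's path
    [True, True,  True,  True],   # end node lies on every path
])
log_post = torch.log(torch.tensor([1.0, 0.7, 0.3, 1.0]))
out = lattice_attention(x, x, x, reachable, log_post)
```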
2011
Improving Reordering for Statistical Machine Translation with Smoothed Priors and Syntactic Features
Bing Xiang | Niyu Ge | Abraham Ittycheriah
Proceedings of Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation
An Effective and Robust Framework for Transliteration Exploration
Ea-Ee Jan | Niyu Ge | Shih-Hsiang Lin | Berlin Chen
Proceedings of 5th International Joint Conference on Natural Language Processing
2010
A Direct Syntax-Driven Reordering Model for Phrase-Based Machine Translation
Niyu Ge
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Enriching Word Alignment with Linguistic Tags
Xuansong Li | Niyu Ge | Stephen Grimes | Stephanie M. Strassel | Kazuaki Maeda
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
Incorporating linguistic knowledge into word alignment is becoming increasingly important for current approaches in statistical machine translation research. To improve automatic word alignment and ultimately machine translation quality, an annotation framework has been jointly proposed by the LDC (Linguistic Data Consortium) and IBM. The framework enriches word alignment corpora to capture contextual, syntactic, and language-specific features by introducing linguistic tags into the alignment annotation. Two annotation schemes constitute the framework: alignment and tagging. The alignment scheme aims to identify minimum translation units and translation relations using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags is designed in the tagging scheme to describe these translation units and relations. The framework produces a solid ground-level alignment base upon which larger translation-unit alignments can be automatically induced. To test the soundness of this work, evaluation was performed on a pilot annotation, resulting in inter- and intra-annotator agreement above 90%. To date, LDC has produced manual word alignment and tagging on 32,823 Chinese-English sentences following this framework.
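A minimal data-structure sketch of what a tagged alignment record under such a framework might look like; all field names and tag values here are illustrative assumptions, not LDC's actual tag inventory.

```python
# Illustrative containers for the alignment-plus-tagging scheme: minimum
# translation units linked across languages, with per-link and per-word tags.

from dataclasses import dataclass, field

@dataclass
class AlignmentLink:
    src_tokens: list[int]   # source-side token indices of the unit
    tgt_tokens: list[int]   # target-side token indices of the unit
    link_tag: str           # translation relation label (hypothetical value set)
    src_word_tags: dict[int, str] = field(default_factory=dict)  # per-token tags
    tgt_word_tags: dict[int, str] = field(default_factory=dict)

@dataclass
class AlignedSentencePair:
    src: list[str]                 # source tokens
    tgt: list[str]                 # target tokens
    links: list[AlignmentLink]     # annotated minimum translation units
```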
2008
Multiple Reorderings in Phrase-Based Machine Translation
Niyu Ge | Abe Ittycheriah | Kishore Papineni
Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2)
2005
Inner-Outer Bracket Models for Word Alignment using Hidden Blocks
Bing Zhao | Niyu Ge | Kishore Papineni
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing
1998
A Statistical Approach to Anaphora Resolution
Niyu Ge | John Hale | Eugene Charniak
Sixth Workshop on Very Large Corpora