2024
pdf
bib
abs
A Novel Paradigm Boosting Translation Capabilities of Large Language Models
Jiaxin Guo | Hao Yang | Zongyao Li | Daimeng Wei | Hengchao Shang | Xiaoyu Chen
Findings of the Association for Computational Linguistics: NAACL 2024
This paper presents a study on strategies to enhance the translation capabilities of large language models (LLMs) in the context of machine translation (MT) tasks. The paper proposes a novel paradigm consisting of three stages: Secondary Pre-training using Extensive Monolingual Data, Continual Pre-training with Interlinear Text Format Documents, and Leveraging Source-Language Consistent Instruction for Supervised Fine-Tuning. Previous research on LLMs focused on various strategies for supervised fine-tuning (SFT), but their effectiveness has been limited. While traditional machine translation approaches rely on vast amounts of parallel bilingual data, our paradigm highlights the importance of using smaller sets of high-quality bilingual data. We argue that the focus should be on augmenting LLMs’ cross-lingual alignment abilities during pre-training rather than solely relying on extensive bilingual data during SFT. Experimental results conducted using the Llama2 model, particularly on Chinese-Llama2 after monolingual augmentation, demonstrate the improved translation capabilities of LLMs. A significant contribution of our approach lies in Stage 2, Continual Pre-training with Interlinear Text Format Documents, which requires less than 1B training data, making our method highly efficient. Additionally, in Stage 3, we observed that setting instructions consistent with the source language benefits the supervised fine-tuning process. Experimental results demonstrate that our approach surpasses previous work and achieves superior performance compared to models such as NLLB-54B and GPT3.5-text-davinci-003, despite having a significantly smaller parameter count of only 7B or 13B. This achievement establishes our method as a pioneering strategy in the field of machine translation.
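Two of the stages above lend themselves to a small illustration. The sketch below (Python, with illustrative templates and field names that are not taken from the paper) shows how interlinear pre-training documents and source-language-consistent SFT prompts might be assembled.

```python
# Minimal sketch (not the authors' exact data format) of two ideas from the paper:
# (1) interlinear documents for continual pre-training, where each source sentence
#     is immediately followed by its translation, and
# (2) supervised fine-tuning prompts whose instruction is written in the source language.

def build_interlinear_document(pairs):
    """Interleave source and target sentences into one pre-training document."""
    lines = []
    for src, tgt in pairs:
        lines.append(src)
        lines.append(tgt)
    return "\n".join(lines)

# Illustrative instruction templates keyed by source language (assumed wording).
INSTRUCTIONS = {
    "zh": "请将下面的中文句子翻译成英文：",
    "en": "Translate the following English sentence into Chinese:",
}

def build_sft_sample(src, tgt, src_lang):
    """Instruction language matches the source language (Stage 3 of the paradigm)."""
    return {"prompt": f"{INSTRUCTIONS[src_lang]}\n{src}\n", "response": tgt}

if __name__ == "__main__":
    pairs = [("今天天气很好。", "The weather is nice today.")]
    print(build_interlinear_document(pairs))
    print(build_sft_sample(*pairs[0], src_lang="zh"))
```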
pdf
bib
abs
Improving the Quality of IWSLT 2024 Cascade Offline Speech Translation and Speech-to-Speech Translation via Translation Hypothesis Ensembling with NMT models and Large Language Models
Zhanglin Wu | Jiaxin Guo | Daimeng Wei | Zhiqiang Rao | Zongyao Li | Hengchao Shang | Yuanchang Luo | Shaojun Li | Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper presents HW-TSC’s submission to the IWSLT 2024 Offline Speech Translation Task and Speech-to-Speech Translation Task. The former includes three translation directions: English to German, English to Chinese, and English to Japanese, while the latter only includes the translation direction of English to Chinese. We participate in all three tracks (Constrained training, Constrained with Large Language Models training, and Unconstrained training) of the offline speech translation task, using the cascade model architecture. Under the constrained training track, we train an ASR model from scratch, and then employ R-Drop and domain data selection to train the NMT model. In the constrained with Large Language Models training track, we use Wav2vec 2.0 and mBART50 for ASR model training initialization, and then train the Llama2-7B-based MT model using continual training with sentence-aligned parallel data, supervised fine-tuning, and contrastive preference optimization. In the unconstrained training track, we fine-tune the Whisper model for speech recognition, and then ensemble the translation results of NMT models and LLMs to produce superior translation output. For the speech-to-speech translation task, we initially employ the offline speech translation system described above to generate the translated text. Then, we utilize the VITS model to generate the corresponding speech and employ the OpenVoice model for timbre cloning.
pdf
bib
abs
HW-TSC’s Speech to Text Translation System for IWSLT 2024 in Indic track
Bin Wei | Zongyao Li | Jiaxin Guo | Daimeng Wei | Zhanglin Wu | Xiaoyu Chen | Zhiqiang Rao | Shaojun Li | Yuanchang Luo | Hengchao Shang | Hao Yang | Yanfei Jiang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This article introduces HW-TSC’s system and results for the IWSLT 2024 Indic Track Speech to Text Translation task. We designed a cascade system consisting of an ASR model and a machine translation model to translate speech from one language to another. For the ASR part, we directly use Whisper large-v3 as our ASR model. Our main task is to optimize the machine translation model (en2ta, en2hi, en2bn). In the process of optimizing the translation model, we first use the bilingual corpus to train the baseline model. Then we use monolingual data to construct pseudo-corpus data to further enhance the baseline model. Finally, we filter the parallel corpus data through the LaBSE filtering method and fine-tune the model again, which further improves the BLEU score. We also selected domain data from the bilingual corpus to fine-tune the previous model to achieve the best results.
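The LaBSE filtering step can be sketched as follows, assuming the sentence-transformers package and its LaBSE checkpoint; the similarity threshold is illustrative rather than the value used by the authors.

```python
# A rough sketch of LaBSE-based parallel data filtering, assuming the
# sentence-transformers package and its "sentence-transformers/LaBSE" checkpoint.
# The 0.75 threshold is illustrative; the paper does not report its exact cutoff.
import numpy as np
from sentence_transformers import SentenceTransformer

def labse_filter(src_sents, tgt_sents, threshold=0.75):
    model = SentenceTransformer("sentence-transformers/LaBSE")
    src_emb = model.encode(src_sents, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, normalize_embeddings=True)
    # Cosine similarity of each aligned pair (embeddings are already unit-norm).
    sims = np.sum(src_emb * tgt_emb, axis=1)
    return [(s, t) for s, t, sim in zip(src_sents, tgt_sents, sims) if sim >= threshold]
```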
pdf
bib
abs
HW-TSC’s Submissions To the IWSLT2024 Low-resource Speech Translation Tasks
Zheng Jiawei | Hengchao Shang | Zongyao Li | Zhanglin Wu | Daimeng Wei | Zhiqiang Rao | Shaojun Li | Jiaxin Guo | Bin Wei | Yuanchang Luo | Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
In this work, we submitted our systems to the low-resource track of the IWSLT 2024 Speech Translation Campaign. Our systems tackled the unconstrained condition of the Dialectal Arabic North Levantine (ISO-3 code: apc) to English language pair. We proposed a cascaded solution consisting of an automatic speech recognition (ASR) model and a machine translation (MT) model. The ASR model employed the pre-trained Whisper-large-v3 model to process the speech data, while the MT model adopted the Transformer architecture. To improve the quality of the MT model, our system utilized not only the data provided by the competition but also an additional 54 million parallel sentences. Ultimately, our final system achieved a BLEU score of 24.7 for apc-to-English translation.
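A minimal sketch of such a cascade is shown below; it assumes the openai-whisper package exposes a large-v3 checkpoint, and translate_apc_to_en is a hypothetical stand-in for the trained Transformer MT model.

```python
# A simplified cascade in the spirit of the system: Whisper transcribes the Arabic
# speech, then a separately trained MT model translates the transcript.
# The whisper calls assume the openai-whisper package; the MT call is a placeholder.
import whisper

def translate_apc_to_en(text: str) -> str:
    # Hypothetical stand-in for the trained apc->en Transformer MT model.
    raise NotImplementedError

def cascade_speech_translation(audio_path: str) -> str:
    asr_model = whisper.load_model("large-v3")  # assumes this checkpoint name is available
    transcript = asr_model.transcribe(audio_path)["text"]
    return translate_apc_to_en(transcript)
```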
pdf
bib
abs
HW-TSC’s Simultaneous Speech Translation System for IWSLT 2024
Shaojun Li | Zhiqiang Rao | Bin Wei | Yuanchang Luo | Zhanglin Wu | Zongyao Li | Hengchao Shang | Jiaxin Guo | Daimeng Wei | Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper outlines our submission for the IWSLT 2024 Simultaneous Speech-to-Text (SimulS2T) and Speech-to-Speech (SimulS2S) Translation competition. We have engaged in all four language directions and both the SimulS2T and SimulS2S tracks: English-German (EN-DE), English-Chinese (EN-ZH), English-Japanese (EN-JA), and Czech-English (CS-EN). For the S2T track, we have built upon our previous year’s system and further honed the cascade system composed of an ASR model and an MT model. Concurrently, we have introduced an end-to-end system specifically for the CS-EN direction. This end-to-end (E2E) system primarily employs the pre-trained seamlessM4T model. In relation to the SimulS2S track, we have integrated a novel TTS model into our SimulS2T system. The final submission for the S2T directions of EN-DE, EN-ZH, and EN-JA has been refined over our championship system from last year. Building upon this foundation, the incorporation of the new TTS into our SimulS2S system has resulted in the ASR-BLEU surpassing last year’s best score.
pdf
bib
abs
HW-TSC’s submission to the IWSLT 2024 Subtitling track
Yuhao Xie | Yuanchang Luo | Zongyao Li | Zhanglin Wu | Xiaoyu Chen | Zhiqiang Rao | Shaojun Li | Hengchao Shang | Jiaxin Guo | Daimeng Wei | Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This paper introduces HW-TSC’s submission to the IWSLT 2024 Subtitling track. For the automatic subtitling track, we use an unconstrained cascaded strategy, with the main steps being: ASR with word-level timestamps, sentence segmentation based on punctuation restoration, and further alignment using CTC or machine translation with a length penalty. For the subtitle compression track, we employ a subtitle compression strategy that integrates machine translation models and extensive rewriting models. We acquire the subtitle text requiring revision through the CPS index, then utilize a translation model to obtain the English version of this text. Following this, we extract the compressed-length subtitle text through controlled decoding. If this method fails to compress the text successfully, we resort to the Llama2 few-shot model for further compression.
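The CPS-based trigger for compression can be illustrated with a few lines of Python; the 17 characters-per-second limit is an assumed value, not one reported in the paper.

```python
# Illustrative characters-per-second (CPS) check used to decide which subtitles
# need compression; the 17 CPS limit is an assumed value.
def needs_compression(subtitle_text: str, start_sec: float, end_sec: float,
                      max_cps: float = 17.0) -> bool:
    duration = max(end_sec - start_sec, 1e-6)
    cps = len(subtitle_text) / duration
    return cps > max_cps

# Subtitles flagged here would go to the constrained-decoding compressor,
# and to the Llama2 few-shot rewriter if that still fails.
print(needs_compression("This line is far too long to read comfortably in one second.", 0.0, 1.0))
```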
pdf
bib
abs
Choose the Final Translation from NMT and LLM Hypotheses Using MBR Decoding: HW-TSC’s Submission to the WMT24 General MT Shared Task
Zhanglin Wu | Daimeng Wei | Zongyao Li | Hengchao Shang | Jiaxin Guo | Shaojun Li | Zhiqiang Rao | Yuanchang Luo | Ning Xie | Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT24 general machine translation (MT) shared task, where we participate in the English to Chinese (en→zh) language pair. Similar to previous years’ work, we use training strategies such as regularized dropout, bidirectional training, data diversification, forward translation, back translation, alternated training, curriculum learning, and transductive ensemble learning to train the neural machine translation (NMT) model based on the deep Transformer-big architecture. The difference is that we also use continual pre-training, supervised fine-tuning, and contrastive preference optimization to train the large language model (LLM) based MT model. By using Minimum Bayes risk (MBR) decoding to select the final translation from multiple hypotheses of the NMT and LLM-based MT models, our submission achieves competitive results in the final evaluation.
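A minimal sketch of MBR selection over a pooled set of NMT and LLM hypotheses is shown below; token-level F1 is used as a stand-in utility function, whereas the submission would rely on a stronger metric.

```python
# Minimal MBR selection sketch: each hypothesis is scored by its average utility
# against all other hypotheses, and the highest-scoring one is returned.
# Token-level F1 is a stand-in utility for illustration only.
from collections import Counter

def token_f1(hyp: str, ref: str) -> float:
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_select(hypotheses):
    """Pick the hypothesis with the highest expected utility over the pool."""
    best_idx, best_score = 0, float("-inf")
    for i, hyp in enumerate(hypotheses):
        score = sum(token_f1(hyp, other) for j, other in enumerate(hypotheses) if j != i)
        if score > best_score:
            best_idx, best_score = i, score
    return hypotheses[best_idx]

# The pool could mix hypotheses from the NMT model and the LLM-based MT model.
print(mbr_select(["the cat sat on the mat", "a cat sat on the mat", "the dog ran away"]))
```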
pdf
bib
abs
Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning
Bin Wei | Zheng Jiawei | Zongyao Li | Zhanglin Wu | Jiaxin Guo | Daimeng Wei | Zhiqiang Rao | Shaojun Li | Yuanchang Luo | Hengchao Shang | Jinlong Yang | Yuhao Xie | Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This paper introduces the submission by Huawei Translation Center (HW-TSC) to the WMT24 Indian Languages Machine Translation (MT) Shared Task. To develop a reliable machine translation system for low-resource Indian languages, we employed two distinct knowledge transfer strategies, taking into account the characteristics of the language scripts and the support available from existing open-source models for Indian languages. For Assamese (as) and Manipuri (mn), we fine-tuned the existing IndicTrans2 open-source model to enable bidirectional translation between English and these languages. For Khasi (kh) and Mizo (mz), we trained a multilingual model as the baseline using bilingual data from these four language pairs as well as additional Bengali data, which shares the same language family. This was followed by fine-tuning to achieve bidirectional translation between English and Khasi, as well as English and Mizo. Our transfer learning experiments produced significant results: 23.5 BLEU for en→as, 31.8 BLEU for en→mn, 36.2 BLEU for as→en, and 47.9 BLEU for mn→en on their respective test sets. Similarly, the multilingual model transfer learning experiments yielded impressive outcomes, achieving 19.7 BLEU for en→kh, 32.8 BLEU for en→mz, 16.1 BLEU for kh→en, and 33.9 BLEU for mz→en on their respective test sets. These results not only highlight the effectiveness of transfer learning techniques for low-resource languages but also contribute to advancing machine translation capabilities for low-resource Indian languages.
pdf
bib
abs
Multilingual Transfer and Domain Adaptation for Low-Resource Languages of Spain
Yuanchang Luo | Zhanglin Wu | Daimeng Wei | Hengchao Shang | Zongyao Li | Jiaxin Guo | Zhiqiang Rao | Shaojun Li | Jinlong Yang | Yuhao Xie | Zheng Jiawei | Bin Wei | Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This article introduces the submission of Huawei Translation Service Center (HW-TSC) to the Translation into Low-Resource Languages of Spain task at WMT 2024. We participated in three translation tasks: Spanish to Aragonese (es2arg), Spanish to Aranese (es2arn), and Spanish to Asturian (es2ast). For these three translation tasks, we use training strategies such as multilingual transfer, regularized dropout, forward translation and back translation, LaBSE denoising, and transductive ensemble learning to train a neural machine translation (NMT) model based on the deep Transformer-big architecture. By using these enhancement strategies, our submission achieved a competitive result in the final evaluation.
pdf
bib
abs
Context-aware and Style-related Incremental Decoding Framework for Discourse-Level Literary Translation
Yuanchang Luo | Jiaxin Guo | Daimeng Wei | Hengchao Shang | Zongyao Li | Zhanglin Wu | Zhiqiang Rao | Shaojun Li | Jinlong Yang | Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This report outlines our approach for the WMT24 Discourse-Level Literary Translation Task, focusing on the Chinese-English language pair in the Constrained Track. Translating literary texts poses significant challenges due to the nuanced meanings, idiomatic expressions, and intricate narrative structures inherent in such works. To address these challenges, we leveraged the Chinese-Llama2 model, specifically enhanced for this task through a combination of Continual Pre-training (CPT) and Supervised Fine-Tuning (SFT). Our methodology includes a novel Incremental Decoding framework, which ensures that each sentence is translated with consideration of its broader context, maintaining coherence and consistency throughout the text. This approach allows the model to capture long-range dependencies and stylistic elements, producing translations that faithfully preserve the original literary quality. Our experiments demonstrate significant improvements in both sentence-level and document-level BLEU scores, underscoring the effectiveness of our proposed framework in addressing the complexities of document-level literary translation.
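The incremental decoding loop can be outlined as follows; translate_with_context is a hypothetical stand-in for the CPT+SFT Chinese-Llama2 model, and the window size is illustrative.

```python
# Schematic of context-aware incremental decoding: each sentence is translated with a
# window of previously translated sentences supplied as context.
def translate_with_context(context_pairs, sentence):
    raise NotImplementedError  # hypothetical call into the fine-tuned LLM

def translate_document(sentences, window=3):
    translated = []
    for i, sent in enumerate(sentences):
        # Pair each recent source sentence with its already-produced translation.
        context = list(zip(sentences[max(0, i - window):i],
                           translated[max(0, i - window):i]))
        translated.append(translate_with_context(context, sent))
    return translated
```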
pdf
bib
abs
Exploring the Traditional NMT Model and Large Language Model for Chat Translation
Jinlong Yang | Hengchao Shang | Daimeng Wei | Jiaxin Guo | Zongyao Li | Zhanglin Wu | Zhiqiang Rao | Shaojun Li | Yuhao Xie | Yuanchang Luo | Zheng Jiawei | Bin Wei | Hao Yang
Proceedings of the Ninth Conference on Machine Translation
This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT24 chat translation shared task on English↔German (en↔de) in both directions. The experiments involved fine-tuning models using chat data and exploring various strategies, including Minimum Bayes Risk (MBR) decoding and self-training. The results show significant performance improvements in certain directions, with the MBR self-training method achieving the best results. The paper also discusses the challenges and potential avenues for further research in the field of chat translation.
2023
pdf
bib
abs
Text Style Transfer Back-Translation
Daimeng Wei | Zhanglin Wu | Hengchao Shang | Zongyao Li | Minghan Wang | Jiaxin Guo | Xiaoyu Chen | Zhengzhe Yu | Hao Yang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Back Translation (BT) is widely used in the field of machine translation, as it has been proven effective for enhancing translation quality. However, BT mainly improves the translation of inputs that share a similar style (more specifically, translation-like inputs), since the source side of BT data is machine-translated. For natural inputs, BT brings only slight improvements and sometimes even adverse effects. To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses a style transfer model to modify the source side of BT data. By making the style of source-side text more natural, we aim to improve the translation of natural inputs. Our experiments on various language pairs, including both high-resource and low-resource ones, demonstrate that TST BT significantly improves translation performance against popular BT benchmarks. In addition, TST BT proves effective in domain adaptation, so this strategy can be regarded as a generalized data augmentation method. Our training code and text style transfer model are open-sourced.
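The TST BT data flow can be sketched in a few lines; both model calls are hypothetical placeholders for the target-to-source MT model and the style transfer model.

```python
# Sketch of the TST BT data flow: monolingual target text is back-translated, and the
# machine-translated source side is then rewritten by a style transfer model so that it
# looks more like natural text. Both model calls are hypothetical stand-ins.
def back_translate(tgt_sentence: str) -> str:
    raise NotImplementedError  # target->source MT model

def transfer_to_natural_style(src_sentence: str) -> str:
    raise NotImplementedError  # text style transfer model

def build_tst_bt_corpus(monolingual_target):
    corpus = []
    for tgt in monolingual_target:
        synthetic_src = back_translate(tgt)
        natural_src = transfer_to_natural_style(synthetic_src)
        corpus.append((natural_src, tgt))  # train the source->target NMT model on these pairs
    return corpus
```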
pdf
bib
abs
Length-Aware NMT and Adaptive Duration for Automatic Dubbing
Zhiqiang Rao | Hengchao Shang | Jinlong Yang | Daimeng Wei | Zongyao Li | Jiaxin Guo | Shaojun Li | Zhengzhe Yu | Zhanglin Wu | Yuhao Xie | Bin Wei | Jiawei Zheng | Lizhi Lei | Hao Yang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper presents the submission of Huawei Translation Services Center for the IWSLT 2023 dubbing task in the unconstrained setting. The proposed solution consists of a Transformer-based machine translation model and a phoneme duration predictor. The Transformer is deep and multiple target-to-source length-ratio class labels are used to control target lengths. The variation predictor in FastSpeech2 is utilized to predict phoneme durations. To optimize the isochrony in dubbing, re-ranking and scaling are performed. The source audio duration is used as a reference to re-rank the translations of different length-ratio labels, and the one with minimum time deviation is preferred. Additionally, the phoneme duration outputs are scaled within a defined threshold to narrow the duration gap with the source audio.
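The re-ranking and scaling step can be illustrated as below; the scaling bounds are assumed values rather than the thresholds used in the submission.

```python
# Sketch of the isochrony step: among candidate translations produced with different
# length-ratio labels, keep the one whose predicted total phoneme duration is closest to
# the source audio, then scale its phoneme durations within a bounded factor.
# The 0.9-1.1 scaling bound is an assumed threshold, not the paper's exact value.
def rerank_and_scale(candidates, src_duration, min_scale=0.9, max_scale=1.1):
    # candidates: list of (translation, phoneme_durations) pairs
    best = min(candidates, key=lambda c: abs(sum(c[1]) - src_duration))
    translation, durations = best
    scale = src_duration / max(sum(durations), 1e-6)
    scale = min(max(scale, min_scale), max_scale)
    return translation, [d * scale for d in durations]
```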
pdf
bib
abs
Improving Neural Machine Translation Formality Control with Domain Adaptation and Reranking-based Transductive Learning
Zhanglin Wu | Zongyao Li | Daimeng Wei | Hengchao Shang | Jiaxin Guo | Xiaoyu Chen | Zhiqiang Rao | Zhengzhe Yu | Jinlong Yang | Shaojun Li | Yuhao Xie | Bin Wei | Jiawei Zheng | Ming Zhu | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper presents Huawei Translation Service Center (HW-TSC)’s submission on the IWSLT 2023 formality control task, which provides two training scenarios: supervised and zero-shot, each containing two language pairs, and sets constrained and unconstrained conditions. We train the formality control models for these four language pairs under these two conditions respectively, and submit the corresponding translation results. Our efforts are divided into two fronts: enhancing general translation quality and improving formality control capability. According to the different requirements of the formality control task, we use a multi-stage pre-training method to train a bilingual or multilingual neural machine translation (NMT) model as the basic model, which can improve the general translation quality of the base model to a relatively high level. Then, under the premise of affecting the general translation quality of the basic model as little as possible, we adopt domain adaptation and reranking-based transductive learning methods to improve the formality control capability of the model.
pdf
bib
abs
HW-TSC at IWSLT2023: Break the Quality Ceiling of Offline Track via Pre-Training and Domain Adaptation
Zongyao Li | Zhanglin Wu | Zhiqiang Rao | Xie YuHao | Guo JiaXin | Daimeng Wei | Hengchao Shang | Wang Minghan | Xiaoyu Chen | Zhengzhe Yu | Li ShaoJun | Lei LiZhi | Hao Yang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper presents HW-TSC’s submissions to the IWSLT 2023 Offline Speech Translation task, including speech translation of talks from English to German, Chinese, and Japanese, respectively. We participate in all three conditions (constrained training, constrained with large language models training, and unconstrained training) with models of cascaded architectures. We use data enhancement, pre-trained models and other means to improve the ASR quality, and R-Drop, deep models, domain data selection, etc. to improve the translation quality. Compared with last year’s best results, we achieve a 2.1 BLEU improvement on the MuST-C English-German test set.
pdf
bib
abs
The HW-TSC’s Speech-to-Speech Translation System for IWSLT 2023
Minghan Wang | Yinglu Li | Jiaxin Guo | Zongyao Li | Hengchao Shang | Daimeng Wei | Min Zhang | Shimin Tao | Hao Yang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper describes our work on the IWSLT2023 Speech-to-Speech task. Our proposed cascaded system consists of an ensemble of Conformer and S2T-Transformer-based ASR models, a Transformer-based MT model, and a Diffusion-based TTS model. Our primary focus in this competition was to investigate the modeling ability of the Diffusion model for TTS tasks in high-resource scenarios and the role of TTS in the overall S2S task. To this end, we proposed DTS, an end-to-end diffusion-based TTS model that takes raw text as input and generates waveform by iteratively denoising on pure Gaussian noise. Compared to previous TTS models, the speech generated by DTS is more natural and performs better in code-switching scenarios. As the training process is end-to-end, it is relatively straightforward. Our experiments demonstrate that DTS outperforms other TTS models on the GigaS2S benchmark, and also brings positive gains for the entire S2S system.
pdf
bib
abs
The HW-TSC’s Simultaneous Speech-to-Text Translation System for IWSLT 2023 Evaluation
Jiaxin Guo | Daimeng Wei | Zhanglin Wu | Zongyao Li | Zhiqiang Rao | Minghan Wang | Hengchao Shang | Xiaoyu Chen | Zhengzhe Yu | Shaojun Li | Yuhao Xie | Lizhi Lei | Hao Yang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
In this paper, we present our submission to the IWSLT 2023 Simultaneous Speech-to-Text Translation competition. Our participation involves three language directions: English-German, English-Chinese, and English-Japanese. Our proposed solution is a cascaded incremental decoding system that comprises an ASR model and an MT model. The ASR model is based on the U2++ architecture and can handle both streaming and offline speech scenarios with ease. Meanwhile, the MT model adopts the Deep-Transformer architecture. To improve performance, we explore methods to generate a confident partial target text output that guides the next MT incremental decoding process. In our experiments, we demonstrate that our simultaneous strategies achieve low latency while maintaining a loss of no more than 2 BLEU points when compared to offline systems.
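One simple way to obtain a confident partial target text between incremental steps is to commit only the prefix shared by all beam candidates; the sketch below illustrates that idea and is not necessarily the exact rule used in the system.

```python
# Commit only the longest token prefix that all beam candidates agree on; the committed
# prefix can then guide the next MT incremental decoding step.
def confident_prefix(beam_candidates):
    tokenized = [c.split() for c in beam_candidates]
    prefix = []
    for tokens in zip(*tokenized):
        if all(t == tokens[0] for t in tokens):
            prefix.append(tokens[0])
        else:
            break
    return " ".join(prefix)

print(confident_prefix(["we will meet at noon", "we will meet tomorrow", "we will meet again soon"]))
```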
pdf
bib
abs
The HW-TSC’s Simultaneous Speech-to-Speech Translation System for IWSLT 2023 Evaluation
Hengchao Shang | Zhiqiang Rao | Zongyao Li | Zhanglin Wu | Jiaxin Guo | Minghan Wang | Daimeng Wei | Shaojun Li | Zhengzhe Yu | Xiaoyu Chen | Lizhi Lei | Hao Yang
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
In this paper, we present our submission to the IWSLT 2023 Simultaneous Speech-to-Speech Translation competition. Our participation involves three language directions: English-German, English-Chinese, and English-Japanese. Our solution is a cascaded incremental decoding system, consisting of an ASR model, an MT model, and a TTS model. By adopting the strategies used in the Speech-to-Text track, we have managed to generate a more confident target text for each audio segment input, which can guide the next MT incremental decoding process. Additionally, we have integrated the TTS model to seamlessly reproduce audio files from the translation hypothesis. To enhance the effectiveness of our experiment, we have utilized a range of methods to reduce error conditions in the TTS input text and improve the smoothness of the TTS output audio.
pdf
bib
abs
INarIG: Iterative Non-autoregressive Instruct Generation Model For Word-Level Auto Completion
Hengchao Shang | Zongyao Li | Daimeng Wei | Jiaxin Guo | Minghan Wang | Xiaoyu Chen | Lizhi Lei | Hao Yang
Findings of the Association for Computational Linguistics: EMNLP 2023
Computer-aided translation (CAT) aims to enhance human translation efficiency and is still important in scenarios where machine translation cannot meet quality requirements. One fundamental task within this field is Word-Level Auto Completion (WLAC). WLAC predicts a target word given a source sentence, translation context, and a human typed character sequence. Previous works either employ word classification models to exploit contextual information from both sides of the target word or directly disregard the dependencies from the right-side context. Furthermore, the key information, i.e. human typed sequences, is only used as prefix constraints in the decoding module. In this paper, we propose the INarIG (Iterative Non-autoregressive Instruct Generation) model, which constructs the human typed sequence into an Instruction Unit and employs iterative decoding with subwords to fully utilize the input information given in the task. Our model is more competent in dealing with low-frequency words (the core scenario of this task), and achieves state-of-the-art results on the WMT22 and benchmark datasets, with a maximum increase of over 10% in prediction accuracy.
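The construction of the model input can be pictured as below; the tag names and serialization are illustrative assumptions, not the paper's exact format.

```python
# Illustrative assembly of a WLAC input: source sentence, left context, right context and
# the human typed character sequence packed into an "instruction unit". The tag names are
# assumptions for illustration; the paper's actual serialization may differ.
def build_wlac_input(source, left_context, right_context, typed_chars):
    instruction_unit = f"<typed> {' '.join(typed_chars)} </typed>"
    return (f"<src> {source} </src> "
            f"<left> {left_context} </left> {instruction_unit} <right> {right_context} </right>")

print(build_wlac_input("我 想 喝 咖啡", "I would like to drink", "", "cof"))
```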
pdf
bib
abs
Treating General MT Shared Task as a Multi-Domain Adaptation Problem: HW-TSC’s Submission to the WMT23 General MT Shared Task
Zhanglin Wu | Daimeng Wei | Zongyao Li | Zhengzhe Yu | Shaojun Li | Xiaoyu Chen | Hengchao Shang | Jiaxin Guo | Yuhao Xie | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the Eighth Conference on Machine Translation
This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT23 general machine translation (MT) shared task, in which we participate in Chinese↔English (zh↔en) language pair. We use Transformer architecture and obtain the best performance via a variant with larger parameter size. We perform fine-grained pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. We mainly use model enhancement strategies, including Regularized Dropout, Bidirectional Training, Data Diversification, Forward Translation, Back Translation, Alternated Training, Curriculum Learning and Transductive Ensemble Learning. Our submissions obtain competitive results in the final evaluation.
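Among the strategies listed above, Regularized Dropout admits a compact sketch: the same batch is passed through the model twice so that dropout yields two different distributions, and a symmetric KL term is added to the cross-entropy. The PyTorch code below is a generic illustration with an illustrative weight, not the submission's training code.

```python
# Generic R-Drop loss: two stochastic forward passes plus a symmetric KL penalty.
# The model must be in training mode so that dropout is active; alpha is illustrative.
import torch
import torch.nn.functional as F

def r_drop_loss(model, inputs, labels, alpha=5.0):
    logits1 = model(inputs)  # first forward pass with dropout
    logits2 = model(inputs)  # second forward pass, different dropout mask
    ce = 0.5 * (F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels))
    kl12 = F.kl_div(F.log_softmax(logits1, dim=-1), F.softmax(logits2, dim=-1),
                    reduction="batchmean")
    kl21 = F.kl_div(F.log_softmax(logits2, dim=-1), F.softmax(logits1, dim=-1),
                    reduction="batchmean")
    return ce + alpha * 0.5 * (kl12 + kl21)
```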
pdf
bib
abs
Multifaceted Challenge Set for Evaluating Machine Translation Performance
Xiaoyu Chen | Daimeng Wei | Zhanglin Wu | Ting Zhu | Hengchao Shang | Zongyao Li | Jiaxin Guo | Ning Xie | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the Eighth Conference on Machine Translation
Machine Translation Evaluation is critical to Machine Translation research, as the evaluation results reflect the effectiveness of training strategies. As a result, a fair and efficient evaluation method is necessary. Many researchers have raised questions about currently available evaluation metrics from various perspectives, and propose suggestions accordingly. However, to our knowledge, few researchers have analyzed the difficulty level of the source sentence and its influence on evaluation results. This paper presents HW-TSC’s submission to the WMT23 MT Test Suites shared task. We propose a systematic approach for constructing challenge sets from four aspects: word difficulty, length difficulty, grammar difficulty and model learning difficulty. We open-source two Multifaceted Challenge Sets for Zh→En and En→Zh. We also present results of participants in this year’s General MT shared task on our test sets.
pdf
bib
abs
The Path to Continuous Domain Adaptation Improvements by HW-TSC for the WMT23 Biomedical Translation Shared Task
Zhanglin Wu | Daimeng Wei | Zongyao Li | Zhengzhe Yu | Shaojun Li | Xiaoyu Chen | Hengchao Shang | Jiaxin Guo | Yuhao Xie | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the Eighth Conference on Machine Translation
This paper presents the domain adaptation methods adopted by Huawei Translation Service Center (HW-TSC) to train the neural machine translation (NMT) system on the English↔German (en↔de) language pair of the WMT23 biomedical translation task. Our NMT system is built on a deep Transformer with larger parameter sizes. Based on the biomedical NMT system trained last year, we leverage Curriculum Learning, Data Diversification, Forward Translation, Back Translation, and Transductive Ensemble Learning to further improve system performance. Overall, we believe our submission can achieve a highly competitive result in the official final evaluation.
pdf
bib
abs
HW-TSC’s Submissions to the WMT23 Discourse-Level Literary Translation Shared Task
Yuhao Xie | Zongyao Li | Zhanglin Wu | Daimeng Wei | Xiaoyu Chen | Zhiqiang Rao | Shaojun Li | Hengchao Shang | Jiaxin Guo | Lizhi Lei | Hao Yang | Yanfei Jiang
Proceedings of the Eighth Conference on Machine Translation
This paper introduces HW-TSC’s submission to the WMT23 Discourse-Level Literary Translation shared task. We use standard sentence-level transformer as a baseline, and perform domain adaptation and discourse modeling to enhance discourse-level capabilities. Regarding domain adaptation, we employ Back-Translation, Forward-Translation and Data Diversification. For discourse modeling, we apply strategies such as Multi-resolutional Document-to-Document Translation and TrAining Data Augmentation.
2022
pdf
bib
abs
Diformer: Directional Transformer for Neural Machine Translation
Minghan Wang | Jiaxin Guo | Yuxia Wang | Daimeng Wei | Hengchao Shang | Yinglu Li | Chang Su | Yimeng Chen | Min Zhang | Shimin Tao | Hao Yang
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
Autoregressive (AR) and Non-autoregressive (NAR) models have their own superiority in performance and latency, so combining them into one model may take advantage of both. Current combination frameworks focus more on the integration of multiple decoding paradigms with a unified generative model, e.g. Masked Language Model. However, the generalization can be harmful to the performance due to the gap between training objective and inference. In this paper, we aim to close the gap by preserving the original objective of AR and NAR under a unified framework. Specifically, we propose the Directional Transformer (Diformer) by jointly modelling AR and NAR into three generation directions (left-to-right, right-to-left and straight) with a newly introduced direction variable, which works by controlling the prediction of each token to have specific dependencies under that direction. The unification achieved by direction successfully preserves the original dependency assumption used in AR and NAR, retaining both generalization and performance. Experiments on 4 WMT benchmarks demonstrate that Diformer outperforms current united-modelling works with more than 1.5 BLEU points for both AR and NAR decoding, and is also competitive to the state-of-the-art independent AR and NAR models.
pdf
bib
abs
HW-TSC’s Submissions to the WMT 2022 General Machine Translation Shared Task
Daimeng Wei | Zhiqiang Rao | Zhanglin Wu | Shaojun Li | Yuanchang Luo | Yuhao Xie | Xiaoyu Chen | Hengchao Shang | Zongyao Li | Zhengzhe Yu | Jinlong Yang | Miaomiao Ma | Lizhi Lei | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the submissions of Huawei Translate Services Center (HW-TSC) to the WMT 2022 General Machine Translation Shared Task. We participate in 6 language pairs, including Zh↔En, Ru↔En, Uk↔En, Hr↔En, Uk↔Cs and Liv↔En. We use Transformer architecture and obtain the best performance via multiple variants with larger parameter sizes. We perform fine-grained pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. For medium- and high-resource languages, we mainly use data augmentation strategies, including Back Translation, Self Training, Ensemble Knowledge Distillation, Multilingual, etc. For low-resource languages such as Liv, we use pre-trained machine translation models, and then continue training with Regularized Dropout (R-Drop). The previously mentioned data augmentation methods are also used. Our submissions obtain competitive results in the final evaluation.
pdf
bib
abs
Exploring Robustness of Machine Translation Metrics: A Study of Twenty-Two Automatic Metrics in the WMT22 Metric Task
Xiaoyu Chen | Daimeng Wei | Hengchao Shang | Zongyao Li | Zhanglin Wu | Zhengzhe Yu | Ting Zhu | Mengli Zhu | Ning Xie | Lizhi Lei | Shimin Tao | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
Contextual word embeddings extracted from pre-trained models have become the basis for many downstream NLP tasks, including machine translation automatic evaluations. Metrics that leverage embeddings claim better capture of synonyms and changes in word orders, and thus better correlation with human ratings than surface-form matching metrics (e.g. BLEU). However, few studies have been done to examine the robustness of these metrics. This report uses a challenge set to uncover the brittleness of reference-based and reference-free metrics. Our challenge set aims at examining metrics’ capability to correlate synonyms in different areas and to discern catastrophic errors at both word- and sentence-levels. The results show that although embedding-based metrics perform relatively well on discerning sentence-level negation/affirmation errors, their performances on relating synonyms are poor. In addition, we find that some metrics are susceptible to text styles, so their generalizability is compromised.
pdf
bib
abs
HW-TSC’s Submission for the WMT22 Efficiency Task
Hengchao Shang | Ting Hu | Daimeng Wei | Zongyao Li | Xianzhi Yu | Jianfei Feng | Ting Zhu | Lizhi Lei | Shimin Tao | Hao Yang | Ying Qin | Jinlong Yang | Zhiqiang Rao | Zhengzhe Yu
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to WMT 2022 Efficiency Shared Task. For this year’s task, we still apply sentence-level distillation strategy to train small models with different configurations. Then, we integrate the average attention mechanism into the lightweight RNN model to pursue more efficient decoding. We tried adding a retrain step to our 8-bit and 4-bit models to achieve a balance between model size and quality. We still use Huawei Noah’s Bolt for INT8 inference and 4-bit storage. Coupled with Bolt’s support for batch inference and multi-core parallel computing, we finally submit models with different configurations to the CPU latency and throughput tracks to explore the Pareto frontiers.
pdf
bib
abs
HW-TSC Translation Systems for the WMT22 Biomedical Translation Task
Zhanglin Wu | Jinlong Yang | Zhiqiang Rao | Zhengzhe Yu | Daimeng Wei | Xiaoyu Chen | Zongyao Li | Hengchao Shang | Shaojun Li | Ming Zhu | Yuanchang Luo | Yuhao Xie | Miaomiao Ma | Ting Zhu | Lizhi Lei | Song Peng | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the translation systems trained by Huawei translation services center (HW-TSC) for the WMT22 biomedical translation task in five language pairs: English↔German (en↔de), English↔French (en↔fr), English↔Chinese (en↔zh), English↔Russian (en↔ru) and Spanish→English (es→en). Our primary systems are built on deep Transformer with a large filter size. We also utilize R-Drop, data diversification, forward translation, back translation, data selection, finetuning and ensemble to improve the system performance. According to the official evaluation results in OCELoT or CodaLab, our unconstrained systems in en→de, de→en, en→fr, fr→en, en→zh and es→en (clinical terminology sub-track) get the highest BLEU scores among all submissions for the WMT22 biomedical translation task.
pdf
bib
abs
HW-TSC Translation Systems for the WMT22 Chat Translation Task
Jinlong Yang | Zongyao Li | Daimeng Wei | Hengchao Shang | Xiaoyu Chen | Zhengzhe Yu | Zhiqiang Rao | Shaojun Li | Zhanglin Wu | Yuhao Xie | Yuanchang Luo | Ting Zhu | Yanqing Zhao | Lizhi Lei | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT22 chat translation shared task on English↔German (en-de) in both directions, with results for the zero-shot and few-shot tracks. We use the deep Transformer architecture with a larger parameter size. Our submissions to the WMT21 News Translation task are used as the baselines. We adopt strategies such as back translation, forward translation, domain transfer, data selection, and noisy forward translation in this task, and achieve competitive results on the development set. We also test the effectiveness of document translation on chat tasks. Due to the lack of chat data, the results on the development set show that it is not as effective as sentence-level translation models.
pdf
bib
abs
HW-TSC Systems for WMT22 Very Low Resource Supervised MT Task
Shaojun Li | Yuanchang Luo | Daimeng Wei | Zongyao Li | Hengchao Shang | Xiaoyu Chen | Zhanglin Wu | Jinlong Yang | Zhiqiang Rao | Zhengzhe Yu | Yuhao Xie | Lizhi Lei | Hao Yang | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the submissions of Huawei Translation Services Center (HW-TSC) to the WMT22 Very Low Resource Supervised MT task. We participate in all 6 supervised tracks including all combinations between Upper/Lower Sorbian (Hsb/Dsb) and German (De). Our systems are built on deep Transformer with a large filter size. We use multilingual transfer with German-Czech (De-Cs) and German-Polish (De-Pl) parallel data. We also utilize regularized dropout (R-Drop), back translation, fine-tuning and ensembling to improve the system performance. According to the official evaluation results on OCELoT, our supervised systems in all 6 language directions get the highest BLEU scores among all submissions. Our pre-trained multilingual model for unsupervised De2Dsb and Dsb2De translation also gains the highest BLEU.
pdf
bib
abs
HW-TSC’s Submissions to the WMT22 Word-Level Auto Completion Task
Hao Yang | Hengchao Shang | Zongyao Li | Daimeng Wei | Xianghui He | Xiaoyu Chen | Zhengzhe Yu | Jiaxin Guo | Jinlong Yang | Shaojun Li | Yuanchang Luo | Yuhao Xie | Lizhi Lei | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the submissions of Huawei Translation Services Center (HW-TSC) to the WMT 2022 Word-Level AutoCompletion Task. We propose an end-to-end autoregressive model with bi-context based on Transformer to solve the current task. The model uses a mixture of subword and character encoding units to realize the joint encoding of the human input, the context of the target side and the decoded sequence, which ensures full utilization of the information. We use one model to solve the four types of data structures in the task. During training, we try using a machine translation model as the pre-trained model and fine-tune it for the task. We also add BERT-style MLM data at the fine-tuning stage to improve model performance. We participate in the zh→en, en→de, and de→en directions and win first place in all three tracks. Particularly, we outperform the second place by more than 5% in terms of accuracy on the zh→en and en→de tracks. The result is buttressed by human evaluations as well, demonstrating the effectiveness of our model.
pdf
bib
abs
The HW-TSC’s Speech to Speech Translation System for IWSLT 2022 Evaluation
Jiaxin Guo | Yinglu Li | Minghan Wang | Xiaosong Qiao | Yuxia Wang | Hengchao Shang | Chang Su | Yimeng Chen | Min Zhang | Shimin Tao | Hao Yang | Ying Qin
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper presents HW-TSC’s pipeline and results for the Offline Speech to Speech Translation task at IWSLT 2022. We design a cascade system consisting of an ASR model, a machine translation model and a TTS model to convert speech from one language into another (en-de). For the ASR part, we find that better performance can be obtained by ensembling multiple heterogeneous ASR models and performing reranking on beam candidates. We also find that the combination of a context-aware reranking strategy and an MT model fine-tuned on the in-domain dataset helps improve performance, because it can mitigate transcript inconsistencies caused by the lack of context. Finally, we use the officially provided VITS model to reproduce audio files from the translation hypothesis.
pdf
bib
abs
HW-TSC’s Participation in the IWSLT 2022 Isometric Spoken Language Translation
Zongyao Li | Jiaxin Guo | Daimeng Wei | Hengchao Shang | Minghan Wang | Ting Zhu | Zhanglin Wu | Zhengzhe Yu | Xiaoyu Chen | Lizhi Lei | Hao Yang | Ying Qin
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper presents our submissions to the IWSLT 2022 Isometric Spoken Language Translation task. We participate in all three language pairs (English-German, English-French, English-Spanish) under the constrained setting, and submit an English-German result under the unconstrained setting. We use the standard Transformer model as the baseline and obtain the best performance via one of its variants that shares the decoder input and output embedding. We perform detailed pre-processing and filtering on the provided bilingual data. Several strategies are used to train our models, such as Multilingual Translation, Back Translation, Forward Translation, R-Drop, Average Checkpoint, and Ensemble. We investigate three methods for biasing the output length: i) conditioning the output to a given target-source length-ratio class; ii) enriching the transformer positional embedding with length information and iii) length control decoding for non-autoregressive translation etc. Our submissions achieve 30.7, 41.6 and 36.7 BLEU respectively on the tst-COMMON test sets for English-German, English-French, English-Spanish tasks and 100% comply with the length requirements.
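Method (i), conditioning on a target-source length-ratio class, can be sketched as follows; the bucket boundaries and token names are illustrative.

```python
# Sketch of length-ratio class conditioning: each training source sentence is prefixed
# with a bucket token derived from the observed target/source length ratio, and at
# inference the desired bucket token is prepended. Bucket boundaries are illustrative.
def length_ratio_token(src: str, tgt: str) -> str:
    ratio = len(tgt) / max(len(src), 1)
    if ratio < 0.95:
        return "<short>"
    if ratio <= 1.05:
        return "<normal>"
    return "<long>"

def tag_source(src: str, tgt: str) -> str:
    return f"{length_ratio_token(src, tgt)} {src}"

print(tag_source("I would like a cup of coffee, please.", "Ich hätte gerne einen Kaffee."))
```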
2021
pdf
bib
abs
How Length Prediction Influence the Performance of Non-Autoregressive Translation?
Minghan Wang | Guo Jiaxin | Yuxia Wang | Yimeng Chen | Su Chang | Hengchao Shang | Min Zhang | Shimin Tao | Hao Yang
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Length prediction is a special task in a series of NAT models where target length has to be determined before generation. However, the performance of length prediction and its influence on translation quality has seldom been discussed. In this paper, we present comprehensive analyses on length prediction task of NAT, aiming to find the factors that influence performance, as well as how it associates with translation quality. We mainly perform experiments based on Conditional Masked Language Model (CMLM) (Ghazvininejad et al., 2019), a representative NAT model, and evaluate it on two language pairs, En-De and En-Ro. We draw two conclusions: 1) The performance of length prediction is mainly influenced by properties of language pairs such as alignment pattern, word order or intrinsic length ratio, and is also affected by the usage of knowledge distilled data. 2) There is a positive correlation between the performance of the length prediction and the BLEU score.
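The reported positive correlation can be checked with a plain Pearson coefficient, as in the sketch below (Python 3.10+ for statistics.correlation); the numbers are made-up placeholders, not the paper's measurements.

```python
# Pearson correlation between length-prediction accuracy and BLEU over a set of runs.
# The values below are illustrative placeholders only.
from statistics import correlation  # requires Python 3.10+

length_pred_accuracy = [0.41, 0.45, 0.52, 0.58, 0.63]
bleu = [24.1, 24.8, 25.9, 26.4, 27.2]
print(correlation(length_pred_accuracy, bleu))
```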
pdf
bib
abs
HW-TSC’s Participation in the WMT 2021 News Translation Shared Task
Daimeng Wei | Zongyao Li | Zhanglin Wu | Zhengzhe Yu | Xiaoyu Chen | Hengchao Shang | Jiaxin Guo | Minghan Wang | Lizhi Lei | Min Zhang | Hao Yang | Ying Qin
Proceedings of the Sixth Conference on Machine Translation
This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT 2021 News Translation Shared Task. We participate in 7 language pairs, including Zh/En, De/En, Ja/En, Ha/En, Is/En, Hi/Bn, and Xh/Zu in both directions under the constrained condition. We use Transformer architecture and obtain the best performance via multiple variants with larger parameter sizes. We perform detailed pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. Several commonly used strategies are used to train our models, such as Back Translation, Forward Translation, Multilingual Translation, Ensemble Knowledge Distillation, etc. Our submission obtains competitive results in the final evaluation.
pdf
bib
abs
HW-TSC’s Participation in the WMT 2021 Triangular MT Shared Task
Zongyao Li | Daimeng Wei | Hengchao Shang | Xiaoyu Chen | Zhanglin Wu | Zhengzhe Yu | Jiaxin Guo | Minghan Wang | Lizhi Lei | Min Zhang | Hao Yang | Ying Qin
Proceedings of the Sixth Conference on Machine Translation
This paper presents the submission of Huawei Translation Service Center (HW-TSC) to WMT 2021 Triangular MT Shared Task. We participate in the Russian-to-Chinese task under the constrained condition. We use Transformer architecture and obtain the best performance via a variant with larger parameter sizes. We perform detailed data pre-processing and filtering on the provided large-scale bilingual data. Several strategies are used to train our models, such as Multilingual Translation, Back Translation, Forward Translation, Data Denoising, Average Checkpoint, Ensemble, Fine-tuning, etc. Our system obtains 32.5 BLEU on the dev set and 27.7 BLEU on the test set, the highest score among all submissions.
pdf
bib
abs
HW-TSC’s Participation in the WMT 2021 Large-Scale Multilingual Translation Task
Zhengzhe Yu | Daimeng Wei | Zongyao Li | Hengchao Shang | Xiaoyu Chen | Zhanglin Wu | Jiaxin Guo | Minghan Wang | Lizhi Lei | Min Zhang | Hao Yang | Ying Qin
Proceedings of the Sixth Conference on Machine Translation
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 Large-Scale Multilingual Translation Task. We participate in Small Track #2, including 6 languages: Javanese (Jv), Indonesian (Id), Malay (Ms), Tagalog (Tl), Tamil (Ta) and English (En), with 30 directions under the constrained condition. We use Transformer architecture and obtain the best performance via multiple variants with larger parameter sizes. We train a single multilingual model to translate all the 30 directions. We perform detailed pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. Several commonly used strategies are used to train our models, such as Back Translation, Forward Translation, Ensemble Knowledge Distillation, and Adapter Fine-tuning. Our model obtains competitive results in the end.
pdf
bib
abs
HW-TSC’s Participation in the WMT 2021 Efficiency Shared Task
Hengchao Shang | Ting Hu | Daimeng Wei | Zongyao Li | Jianfei Feng | ZhengZhe Yu | Jiaxin Guo | Shaojun Li | Lizhi Lei | ShiMin Tao | Hao Yang | Jun Yao | Ying Qin
Proceedings of the Sixth Conference on Machine Translation
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to WMT 2021 Efficiency Shared Task. We explore the sentence-level teacher-student distillation technique and train several small-size models that find a balance between efficiency and quality. Our models feature deep encoder, shallow decoder and light-weight RNN with SSRU layer. We use Huawei Noah’s Bolt, an efficient and light-weight library for on-device inference. Leveraging INT8 quantization, self-defined General Matrix Multiplication (GEMM) operator, shortlist, greedy search and caching, we submit four small-size and efficient translation models with high translation quality for the one CPU core latency track.
pdf
bib
abs
HW-TSC’s Submissions to the WMT21 Biomedical Translation Task
Hao Yang | Zhanglin Wu | Zhengzhe Yu | Xiaoyu Chen | Daimeng Wei | Zongyao Li | Hengchao Shang | Minghan Wang | Jiaxin Guo | Lizhi Lei | Chuanfei Xu | Min Zhang | Ying Qin
Proceedings of the Sixth Conference on Machine Translation
This paper describes the submission of Huawei Translation Service Center (HW-TSC) to the WMT21 biomedical translation task in two language pairs: Chinese↔English and German↔English (our registered team name is HuaweiTSC). Technical details are introduced in this paper, including the model framework, data pre-processing method and model enhancement strategies. In addition, using the WMT20 OK-aligned biomedical test set, we compare and analyze system performances under different strategies. On the WMT21 biomedical translation task, our systems in the English→Chinese and English→German directions get the highest BLEU scores among all submissions according to the official evaluation results.
2020
pdf
bib
abs
HW-TSC’s Participation in the WMT 2020 News Translation Shared Task
Daimeng Wei | Hengchao Shang | Zhanglin Wu | Zhengzhe Yu | Liangyou Li | Jiaxin Guo | Minghan Wang | Hao Yang | Lizhi Lei | Ying Qin | Shiliang Sun
Proceedings of the Fifth Conference on Machine Translation
This paper presents our work in the WMT 2020 News Translation Shared Task. We participate in 3 language pairs including Zh/En, Km/En, and Ps/En and in both directions under the constrained condition. We use the standard Transformer-Big model as the baseline and obtain the best performance via two variants with larger parameter sizes. We perform detailed pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. Several commonly used strategies are used to train our models, such as Back Translation, Ensemble Knowledge Distillation, etc. We also conduct experiments with similar-language augmentation, which lead to positive results, although they are not used in our submission. Our submission obtains remarkable results in the final evaluation.
pdf
bib
abs
HW-TSC’s Participation at WMT 2020 Automatic Post Editing Shared Task
Hao Yang | Minghan Wang | Daimeng Wei | Hengchao Shang | Jiaxin Guo | Zongyao Li | Lizhi Lei | Ying Qin | Shimin Tao | Shiliang Sun | Yimeng Chen
Proceedings of the Fifth Conference on Machine Translation
The paper presents the submission by HW-TSC in the WMT 2020 Automatic Post Editing Shared Task. We participate in the English-German and English-Chinese language pairs. Our system is built based on the Transformer pre-trained on WMT 2019 and WMT 2020 News Translation corpora, and fine-tuned on the APE corpus. Bottleneck Adapter Layers are integrated into the model to prevent over-fitting. We further collect external translations as the augmented MT candidates to improve the performance. The experiment demonstrates that pre-trained NMT models are effective when fine-tuning with the APE corpus of a limited size, and the performance can be further improved with external MT augmentation. Our system achieves competitive results on both directions in the final evaluation.
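A Bottleneck Adapter Layer of the kind mentioned above follows a standard pattern: project down, apply a non-linearity, project back up, and add a residual connection. The PyTorch module below is a generic sketch with illustrative dimensions, not the exact module used in the submission.

```python
# Generic bottleneck adapter: a small trainable module inserted into a frozen
# pre-trained network to limit over-fitting during fine-tuning.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size: int = 1024, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the base model's representation intact.
        return x + self.up(self.act(self.down(x)))

print(BottleneckAdapter()(torch.randn(2, 5, 1024)).shape)
```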
pdf
bib
abs
HW-TSC’s Participation at WMT 2020 Quality Estimation Shared Task
Minghan Wang | Hao Yang | Hengchao Shang | Daimeng Wei | Jiaxin Guo | Lizhi Lei | Ying Qin | Shimin Tao | Shiliang Sun | Yimeng Chen | Liangyou Li
Proceedings of the Fifth Conference on Machine Translation
This paper presents our work in the WMT 2020 Word and Sentence-Level Post-Editing Quality Estimation (QE) Shared Task. Our system follows the standard Predictor-Estimator architecture, with a pre-trained Transformer as the Predictor, and specific classifiers and regressors as Estimators. We integrate Bottleneck Adapter Layers in the Predictor to improve transfer learning efficiency and prevent over-fitting. At the same time, we jointly train the word- and sentence-level tasks with a unified model via multitask learning. Pseudo-PE assisted QE (PEAQE) is proposed, resulting in significant improvements in performance. Our submissions achieve competitive results in the word- and sentence-level sub-tasks for both the En-De and En-Zh language pairs.
pdf
bib
abs
HW-TSC’s Participation in the WAT 2020 Indic Languages Multilingual Task
Zhengzhe Yu | Zhanglin Wu | Xiaoyu Chen | Daimeng Wei | Hengchao Shang | Jiaxin Guo | Zongyao Li | Minghan Wang | Liangyou Li | Lizhi Lei | Hao Yang | Ying Qin
Proceedings of the 7th Workshop on Asian Translation
This paper describes our work in the WAT 2020 Indic Multilingual Translation Task. We participated in all 7 language pairs (En<->Bn/Hi/Gu/Ml/Mr/Ta/Te) in both directions under the constrained condition, using only the officially provided data. Using the Transformer as a baseline, our Multi->En and En->Multi translation systems achieve the best performances. Detailed data filtering and data domain selection are the keys to performance enhancement in our experiments, with an average improvement of 2.6 BLEU scores for each language pair in the En->Multi system and an average improvement of 4.6 BLEU scores for the Multi->En system. In addition, we employed language-independent adapters to further improve system performance. Our submission obtains competitive results in the final evaluation.
pdf
bib
abs
The HW-TSC Video Speech Translation System at IWSLT 2020
Minghan Wang | Hao Yang | Yao Deng | Ying Qin | Lizhi Lei | Daimeng Wei | Hengchao Shang | Ning Xie | Xiaochun Li | Jiaxian Guo
Proceedings of the 17th International Conference on Spoken Language Translation
The paper presents details of our system in the IWSLT Video Speech Translation evaluation. The system works in a cascade form, which contains three modules: 1) a proprietary ASR system; 2) a disfluency correction system that aims to remove interregnums or other disfluent expressions, using a fine-tuned BERT and a series of rule-based algorithms; and 3) an NMT system based on the Transformer and trained with a massive publicly available corpus.
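A toy version of the rule-based part of the disfluency correction module might look like the following; the filler list and rules are illustrative, not the system's actual rule set.

```python
# Toy rule-based disfluency cleanup: remove common English fillers and immediate
# word repetitions from an ASR transcript. Illustrative rules only.
import re

FILLERS = r"\b(?:uh|um|er|you know|i mean)\b"

def clean_transcript(text: str) -> str:
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    text = re.sub(r"\b(\w+)(\s+\1\b)+", r"\1", text, flags=re.IGNORECASE)  # "the the" -> "the"
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_transcript("So um I I think we we should you know start now"))
```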