2024
xTower: A Multilingual LLM for Explaining and Correcting Translation Errors
Marcos V Treviso | Nuno M Guerreiro | Sweta Agrawal | Ricardo Rei | José Pombal | Tania Vaz | Helena Wu | Beatriz Silva | Daan Van Stigt | Andre Martins
Findings of the Association for Computational Linguistics: EMNLP 2024
While machine translation (MT) systems are achieving increasingly strong performance on benchmarks, they often produce translations with errors and anomalies. Understanding these errors can potentially help improve the translation quality and user experience. This paper introduces xTower, an open large language model (LLM) built on top of TowerBase designed to provide free-text explanations for translation errors in order to guide the generation of a corrected translation. The quality of the explanations generated by xTower is assessed via both intrinsic and extrinsic evaluation. We ask expert translators to evaluate the quality of the explanations across two dimensions: relatedness to the error span being explained, and helpfulness in understanding the error and improving translation quality. Extrinsically, we test xTower across various experimental setups in generating translation corrections, demonstrating significant improvements in translation quality. Our findings highlight xTower’s potential towards not only producing plausible and helpful explanations of automatic translations, but also leveraging them to suggest corrected translations.
Cultural Transcreation with LLMs as a new product
Beatriz Silva | Helena Wu | Yan Jingxuan | Vera Cabarrão | Helena Moniz | Sara Guerreiro de Sousa | João Almeida | Malene Sjørslev Søholm | Ana Farinha | Paulo Dimas
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)
We present how at Unbabel we have been using Large Language Models to apply a Cultural Transcreation (CT) product to customer support (CS) emails, and how we have been testing the quality and potential of this product. We discuss our preliminary evaluation of the performance of different MT models on the task of translating rephrased content and the quality of the translation outputs. Furthermore, we introduce the live pilot programme and the corresponding relevant findings, showing that transcreated content is not only culturally adequate but also of high rephrasing and translation quality.
2023
Empirical Assessment of kNN-MT for Real-World Translation Scenarios
Pedro Henrique Martins | João Alves | Tânia Vaz | Madalena Gonçalves | Beatriz Silva | Marianna Buchicchio | José G. C. de Souza | André F. T. Martins
Proceedings of the 24th Annual Conference of the European Association for Machine Translation
This paper investigates the effectiveness of the k-Nearest Neighbor Machine Translation model (kNN-MT) in real-world scenarios. kNN-MT is a retrieval-augmented framework that combines the advantages of parametric models with non-parametric datastores built from a set of parallel sentences. Previous studies have primarily evaluated the model using only the BLEU metric and have not tested kNN-MT in real-world scenarios. Our study aims to fill this gap by conducting a comprehensive analysis on various datasets comprising different language pairs and domains, using multiple automatic metrics and expert-evaluated Multidimensional Quality Metrics (MQM). We compare kNN-MT with two alternative strategies: fine-tuning all the model parameters and adapter-based fine-tuning. Finally, we analyze the effect of the datastore size on translation quality, and we examine the number of entries necessary to bootstrap and configure the index.
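The core mechanism the abstract describes — combining a parametric model with a non-parametric datastore of parallel-sentence representations — can be sketched in a few lines. This is a minimal NumPy illustration of the kNN-MT interpolation only, not code from the paper; the function name, parameters, and toy datastore are all hypothetical.

```python
import numpy as np

def knn_mt_probs(model_probs, datastore_keys, datastore_values,
                 query, vocab_size, k=2, temperature=1.0, lam=0.5):
    """Hypothetical sketch of kNN-MT interpolation: mix the parametric
    model's next-token distribution with a distribution over the k
    nearest datastore entries (key = decoder state, value = target token)."""
    # Distance from the current decoder query to every stored key.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    knn_idx = np.argsort(dists)[:k]  # indices of the k nearest neighbours
    # Softmax over negative distances of the retrieved neighbours.
    scores = np.exp(-dists[knn_idx] / temperature)
    scores /= scores.sum()
    # Scatter neighbour probability mass onto their target tokens.
    knn_probs = np.zeros(vocab_size)
    for idx, s in zip(knn_idx, scores):
        knn_probs[datastore_values[idx]] += s
    # Final distribution: lambda * kNN + (1 - lambda) * parametric model.
    return lam * knn_probs + (1 - lam) * model_probs

# Toy datastore over a 3-token vocabulary: two nearby keys both map to
# token 2, so retrieval should boost that token's probability.
keys = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
values = np.array([2, 2, 0])
p = knn_mt_probs(np.full(3, 1 / 3), keys, values,
                 query=np.array([0.1, 0.1]), vocab_size=3, k=2)
```

Growing the datastore (more keys/values) is the knob whose effect on quality the paper studies; in practice the nearest-neighbour search runs over millions of entries with an approximate index rather than the exact brute-force distances used here.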
Findings of the WMT 2023 Shared Task on Quality Estimation
Frederic Blain | Chrysoula Zerva | Ricardo Rei | Nuno M. Guerreiro | Diptesh Kanojia | José G. C. de Souza | Beatriz Silva | Tânia Vaz | Yan Jingxuan | Fatemeh Azadi | Constantin Orasan | André Martins
Proceedings of the Eighth Conference on Machine Translation
We report the results of the WMT 2023 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to enable more fine-grained and explainable quality estimation approaches. We introduce an updated quality annotation scheme using Multidimensional Quality Metrics to obtain sentence- and word-level quality scores for three language pairs. We also extend the provided data to new language pairs: we specifically target low-resource languages and provide training, development and test data for English-Hindi, English-Tamil, English-Telugu and English-Gujarati, as well as a zero-shot test set for English-Farsi. Further, we introduce a novel fine-grained error prediction task aspiring to motivate research towards more detailed quality predictions.