Abstract
Despite the pressure to reduce costs with the advent of machine translation plus post-editing (PE), many professional translators are reluctant to accept PE jobs, which are perceived as requiring less skill and yielding poorer-quality products than human translation (HT). This trend in turn raises an issue for the industry, namely a lack of post-editors. To meet the growing demand for PE, new populations—such as college language learners—should be assessed as potential post-editor candidates. This paper investigates this possibility through an experiment focusing on college language learners’ PE qualifications and resultant performance. Data collected on perceived ease of task, editing quantity, and quality of final product were correlated with the students’ course grades. The investigation found that over 74 % of students felt PE to be an easier task than HT, whereas 26 % did not. Those students who did not find PE easier were determined to be unqualified post-editors. Students who received poor grades in a traditional translation course were also confirmed to be unqualified, though A-students were not always qualified post-editors. The variable performance among A-students may be understood in terms of different approaches to PE, characterized as utilizing either analytic or integrated processing. An analysis using this framework tentatively concludes that A-students who apply an analytic approach, more typical of novice translators, may perform better as post-editors than those who take an integrated approach.
Notes
We acknowledge here that it may not be entirely fair to compare perceived task load between professionals and students, given that the former group are much more used to doing translations. Note, moreover, that because the individuals in this study had not performed a PE task before, the difference in skill-sets may skew the results a little.
Test of English for International Communication. See http://www.toeic.or.jp/english.html. The maximum achievable score is 990. It would be interesting to track TOEIC score versus class grade, as there may be an interaction here. We leave this for future work.
Note that while this is clearly an uncontrolled environment, we do take steps to investigate whether quantitative measures of PE performance reinforce or correlate with such qualitative measures.
The reason for selecting GTM is related to Tatsumi’s research (2009), which investigates the correlation between automatic evaluation scores (textual similarity) and human PE effort in terms of time. Among the tested metrics (BLEU (Papineni et al. 2002), TER (Snover et al. 2006), NIST (Doddington 2002), and GTM), GTM shows the highest correlation with PE speed (ibid.). However, the correlation is still weak, and the level of correlation differs greatly depending on the structure of the sentence being translated.
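At its core, GTM (Turian et al. 2003) is the harmonic mean (F-measure) of precision and recall computed over a matching between hypothesis and reference words; the full metric additionally rewards longer contiguous matched runs via a maximum-matching search. The minimal sketch below illustrates only the unigram (bag-of-words) approximation of that idea — the example sentences are hypothetical, not taken from the study's data.

```python
from collections import Counter

def unigram_f(hypothesis, reference):
    """F-measure of unigram precision and recall.

    A simplified stand-in for GTM: the full metric computes a
    maximum matching that rewards longer contiguous runs, whereas
    this sketch counts only bag-of-words overlap.
    """
    hyp = hypothesis.split()
    ref = reference.split()
    # Multiset intersection: each word counted at most as often
    # as it appears in both strings.
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical MT output vs. post-edited reference:
print(round(unigram_f("the cat sat", "the cat sat down"), 3))  # → 0.857
```

Because the overlap is counted without regard to word order, this sketch overstates similarity relative to true GTM whenever matched words are scattered rather than contiguous.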
The Mann–Whitney U test is applied.
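For small samples, the U statistic itself is straightforward to compute by hand: it counts, over all cross-group pairs, how often a value from one group exceeds a value from the other, with ties counted as half. The sketch below illustrates this definition on hypothetical ratings; it is not the statistical software presumably used in the study, and it omits the significance test on U.

```python
def mann_whitney_u(group_a, group_b):
    """U statistic for group_a: the number of pairs (a, b) with
    a > b, counting ties as 0.5. Suitable for small samples;
    no normal approximation or p-value is computed here."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical 5-point ease-of-task ratings for two student groups:
print(mann_whitney_u([3, 4, 5], [1, 2, 3]))  # → 8.5
```

The two groups' U statistics always sum to len(group_a) * len(group_b), so computing one determines the other.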
A model translation of the Gil Amelio NeXT Computer sentence, produced by a professional translator, reads:
..[Gil Amelio wa NeXT computer e no torikumi ni chakushu shi...] (Back-translation: Gil Amelio started to work on NeXT computer). As is apparent, the professional translator has used a sense-based translation approach.
References
Allen J (2003) Post-editing. In: Somers H (ed) Computers and translation: a translator’s guide. John Benjamins, Amsterdam, pp 297–317
Bowker L (2005) Productivity vs. quality: a pilot study on the impact of translation memory systems. Localisation Focus 4(1):13–20
Doddington G (2002) Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In: HLT 2002: Human Language Technology Conference: proceedings of the second international conference on human language technology research. San Diego, California, pp 138–145
Dragsted B (2004) Segmentation in translation and translation memory systems: An empirical investigation of cognitive segmentation and effects of integrating a TM system into the translation process. PhD Thesis, Copenhagen Business School, Copenhagen
Fiederer R, O’Brien S (2009) Quality and machine translation: a realistic objective? J Special Transl 11: 52–74
García I (2010) Is machine translation ready yet? Target 22(1):7–21
Groves D, Schmidtke D (2009) Identification and analysis of post-editing patterns for MT. In: Proceedings of MT Summit XII. Ottawa, pp 429–436
Guerberof A (2008) Productivity and quality in the post-editing of outputs from translation memories and machine translation (Unpublished minor dissertation). Universitat Rovira i Virgili, Tarragona
Krings HP (2001) Repairing texts: Empirical investigations of machine translation post-editing processes, Trans. G.S. Koby. The Kent State University Press, Kent
Mossop B (2001) Revising and editing for translators. St Jerome, Manchester
O’Brien S (2002) Teaching post-editing: a proposal for course content. In: Proceedings of the 6th EAMT Workshop on “Teaching Machine Translation”. Manchester, pp 99–106
O’Brien S (2007) An empirical investigation of temporal and technical post-editing effort. Transl Interpret Stud II(I):83–136
Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a method for automatic evaluation of machine translation. In: ACL-2002: 40th Annual meeting of the Association for Computational Linguistics, Philadelphia, PA, pp 311–318
Plitt M, Masselot F (2010) A productivity test of statistical machine translation post-editing in a typical localization context. Prague Bull Math Linguist 93:7–16
Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A study of translation edit rate with targeted human annotation. In: AMTA 2006: Proceedings of the 7th conference of the Association for Machine Translation in the Americas, “Visions for the Future of Machine Translation”, Cambridge, MA, pp 223–231
Tatsumi M (2009) Correlation between automatic evaluation scores, post-editing speed and some other factors. In: Proceedings of MT Summit XII. Ottawa, pp 332–339
TAUS (2010) Machine translation postediting guidelines. http://www.translationautomation.com/postediting/machine-translation-post-editing-guidelines. Accessed 10 Jan 2014
Turian J, Shen L, Melamed D (2003) Evaluation of machine translation and its evaluation. In: Proceedings of the MT Summit IX, New Orleans, pp 386–393
Veale T, Way A (1997) Gaijin: A bootstrapping approach to example-based machine translation. In: International conference on recent advances in natural language processing, Tzigov Chark, pp 239–244
Wagner E (1985) Post-editing Systran: a challenge for commission translators. Terminol Trad 3:1–7
Way A (2013) Traditional and emerging use-cases for machine translation. In: Proceedings of translating and the computer 35, London
Yamada M (2012) Revising text: An empirical investigation of revision and the effects of integrating a TM and MT system into the translation process. PhD Thesis, Rikkyo University, Tokyo
Yamada M (2013) Dare ga post-editor ni naruno ka? [Who will be post-editors?]. Honyaku Kenkyuu e no Shootai [Introducing Translation Studies], 10. http://honyakukenkyu.sakura.ne.jp/shotai_vol10/No_10-004-Yamada.pdf. Accessed 10 Jan 2014
Cite this article
Yamada, M. Can college students be post-editors? An investigation into employing language learners in machine translation plus post-editing settings. Machine Translation 29, 49–67 (2015). https://doi.org/10.1007/s10590-014-9167-7