Chapter published in: Translation in Transition: Between cognition, computing and technology
Edited by Arnt Lykke Jakobsen and Bartolomé Mesa-Lao
[Benjamins Translation Library 133] 2017
pp. 188–206
Quality is in the eyes of the reviewer
A report on post-editing quality evaluation
As part of a larger research project exploring correlations between productivity, quality and experience in the post-editing of machine-translated and translation-memory outputs by a team of 24 professional translators, three reviewers were asked to review the translations and post-editions produced by these translators and to fill in the corresponding quality evaluation forms. The data obtained from the three reviewers’ evaluations were analysed to determine whether the reviewers agreed on the time needed to complete the task and on the number and type of errors they marked. The results show statistically significant differences between reviewers, although there were also correlations between pairs of reviewers depending on the provenance of the text analysed. Reviewers tended to agree on the overall number of errors found in the No match category, but their agreement on Fuzzy and MT matches was weak or absent, perhaps indicating that the origin of the text influenced their evaluation. The reviewers also tended to agree on the best and worst performers, yet the translators’ rankings diverged considerably when ordered by number of errors.
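The abstract refers to agreement between pairs of reviewers on error counts. As an illustration only (the chapter does not publish its analysis code, and the reviewer names and figures below are invented placeholders), a minimal Python sketch of such a pairwise agreement check using Spearman's rank correlation might look like this:

```python
# Minimal sketch, not the study's actual analysis: pairwise inter-reviewer
# agreement on error counts via Spearman's rank correlation.
from itertools import combinations
from scipy.stats import spearmanr

# Hypothetical error counts, one value per translator, one list per reviewer.
error_counts = {
    "reviewer_1": [12, 5, 9, 14, 3, 7],
    "reviewer_2": [10, 6, 11, 15, 2, 8],
    "reviewer_3": [15, 4, 7, 12, 5, 6],
}

# Compare every pair of reviewers: a high rho with a low p-value suggests
# the two reviewers rank the translators similarly.
for (name_a, a), (name_b, b) in combinations(error_counts.items(), 2):
    rho, p = spearmanr(a, b)
    print(f"{name_a} vs {name_b}: rho={rho:.2f}, p={p:.3f}")
```

A rank-based measure suits the finding reported above: two reviewers can agree on who the best and worst performers are (similar rankings, high rho) while still disagreeing on the absolute number of errors each translator made.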
Published online: 30 September 2017