Chapter 7
Quality is in the eyes of the reviewer
A report on post-editing quality evaluation
As part of a larger research project exploring correlations between productivity, quality and experience in the post-editing of machine-translation and translation-memory output by a team of 24 professional translators, three reviewers were asked to review the translations/post-edited versions produced by these translators and to fill in the corresponding quality evaluation forms. The data obtained from the three reviewers’ evaluations were analysed to determine whether the reviewers agreed on the time taken to complete the task as well as on the number and type of errors marked. The results show statistically significant differences between reviewers, although certain pairs of reviewers did correlate depending on the provenance of the text analysed. Reviewers tended to agree on the overall number of errors found in the No match category, but their agreement on Fuzzy and MT matches was weak or absent, perhaps indicating that the origin of the text influenced their evaluation. The reviewers also tended to agree on the best and worst performers, but the translators’ classifications diverged considerably when they were ranked by number of errors.
Article outline
- 1. Introduction
- 2. Related work
- 3. Material and methodology
- 4. LISA QA process
- 5. Results
- 5.1 Results on reviewers’ time
- 5.2 Results on reviewers’ errors
- 5.3 Comparing reviewers
- 5.4 Error classification
- 5.5 Overcorrections
- 6. Conclusions
- References