Edited by Arnt Lykke Jakobsen and Bartolomé Mesa-Lao
[Benjamins Translation Library 133] 2017
pp. 187–205
Chapter 7. Quality is in the eyes of the reviewer
A report on post-editing quality evaluation
As part of a larger research project exploring correlations between productivity, quality and experience in the post-editing of machine-translated and translation-memory outputs by a team of 24 professional translators, three reviewers were asked to review the translations/post-editions completed by these translators and to fill in the corresponding quality evaluation forms. The data obtained from the three reviewers’ evaluations were analysed to determine whether the reviewers agreed on the time needed to complete the task as well as on the number and type of errors marked. The results show statistically significant differences between reviewers, although there were also correlations between pairs of reviewers depending on the provenance of the text analysed. Reviewers tended to agree on the overall number of errors found in the No match category, but their agreement in the Fuzzy and MT match categories was weak or absent, suggesting that the origin of the text may have influenced their evaluation. The reviewers also tended to agree on the best and worst performers, but there was considerable disparity in the translators’ rankings when they were ordered by number of errors.
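The chapter does not reproduce its analysis scripts, so the following Python sketch is only an illustration of the kind of pairwise agreement check described above: it computes Spearman rank correlations between three reviewers' per-translator error counts to see whether the reviewers rank translators similarly. All reviewer names and error counts below are invented for the example and do not come from the study.

```python
# Illustrative only: pairwise Spearman rank correlation between reviewers'
# per-translator error counts (24 hypothetical translators, 3 reviewers).
from itertools import combinations
from scipy.stats import spearmanr

# Invented error counts, one value per translator for each reviewer.
errors = {
    "reviewer_A": [12, 7, 15, 9, 4, 11, 6, 13, 8, 10, 5, 14,
                   9, 7, 12, 6, 10, 8, 11, 5, 13, 7, 9, 6],
    "reviewer_B": [10, 8, 14, 11, 5, 9, 7, 12, 9, 11, 6, 13,
                   8, 6, 11, 7, 9, 9, 10, 6, 12, 8, 8, 7],
    "reviewer_C": [15, 6, 18, 8, 3, 13, 5, 16, 7, 12, 4, 17,
                   10, 8, 14, 5, 12, 7, 13, 4, 15, 6, 11, 5],
}

# Compare every pair of reviewers; a high rho means they broadly agree
# on which translators produced the most and fewest errors.
for a, b in combinations(errors, 2):
    rho, p = spearmanr(errors[a], errors[b])
    print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")
```

A rank-based measure is used here because the abstract frames agreement in terms of best and worst performers rather than absolute error totals; the study's actual statistical procedure may differ.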
- 2. Related work
- 3. Material and methodology
- 4. LISA QA process
- 5.1 Results on reviewers’ time
- 5.2 Results on reviewers’ errors
- 5.3 Comparing reviewers
- 5.4 Error classification