Re-thinking translation quality: Revision in the digital age

Christopher D. Mellinger
University of North Carolina at Charlotte
Abstract

Editing and revision are regularly incorporated into professional translation projects as a means of quality assurance. Underlying the decision to include these tasks in translation workflows lie implicit assumptions about what constitutes quality. This article examines how quality is operationalized with respect to editing and revision and considers these assumptions. It makes the case for incorporating revision into translation quality assessment models and employs the concepts of adequacy, distributed cognition, and salience – and their treatment in research on cognitive translation processes, post-editing, and translation technology – in order to re-think translation quality.

1.Introduction

Editing and revision tasks are often an integral part of the traditional document lifecycle. The importance of editing and revision is underscored by industry standards for human translation services (e.g., ASTM 2575; ISO 17100) that require their inclusion in the translation workflow for quality assurance purposes. Language service providers typically adopt the prescribed linear workflow in which a reviser or editor reviews a translator’s work in an effort to detect and correct errors in the draft target text (e.g., Mossop 2014; Lee 2006). A similar approach is regularly employed in large international organizations such as the United Nations. The three-tiered classification of translators, revisers, and self-revisers emphasizes revision as a means to identify flaws in translated texts to be corrected by a more experienced colleague (Orellana 1990).[Note 1: Similar revision practices are seen in the European Union, particularly with respect to multilingual legislation (Wagner, Bech and Martínez 2002). Revision processes are sometimes carried out by senior colleagues; in other cases, lawyers or lawyer-linguists perform this task, in addition to other drafting and linguistic tasks that fall outside the scope of the traditional translation workflow (Šarčević and Robertson 2015).] This approach has been heralded as indicative of knowledgeable use and consumption of translations (Hine 2003; see also Horguelin and Brunette 1998).

Likewise, projects that incorporate machine translation may include editing at several stages and to varying degrees, in the form of pre- and post-editing, in order to optimize translation output (Spalink, Levy and Merrill 1997; TAUS 2010). Editors may revise source texts to remove specific linguistic features that can prove problematic to machine translation systems (often termed “negative translatability indicators,” or NTIs; see Underwood and Jongejan 2001). Post-editors, in contrast, work with the machine-translated target text to “clean up” mistakes or errors introduced by the machine translation system. The level to which these editors correct texts can range from a cursory review for grammar and spelling mistakes to a complete comparison of the source and target texts, with revisions being made for style, terminological choices, or logic.
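
To illustrate the kind of source-text pre-editing described above, the following minimal sketch flags a few commonly cited negative translatability indicators – long sentences, passive constructions, and ambiguous sentence-initial pronouns – in an English source text. The specific indicators, thresholds, and sample sentences are illustrative assumptions, not the checks proposed by Underwood and Jongejan (2001).

```python
import re

# Illustrative, simplified NTI (negative translatability indicator) checks.
# The indicators and thresholds below are assumptions for the example only.
NTI_CHECKS = {
    "long sentence (>25 words)": lambda s: len(s.split()) > 25,
    "passive construction": lambda s: re.search(r"\b(is|are|was|were|been|being)\s+\w+ed\b", s) is not None,
    "ambiguous pronoun at sentence start": lambda s: re.match(r"\s*(It|This|That|These|Those)\b", s) is not None,
}

def flag_ntis(source_text: str):
    """Return (sentence, [indicators]) pairs for a pre-editor to review."""
    sentences = re.split(r"(?<=[.!?])\s+", source_text.strip())
    report = []
    for sentence in sentences:
        hits = [name for name, check in NTI_CHECKS.items() if check(sentence)]
        if hits:
            report.append((sentence, hits))
    return report

if __name__ == "__main__":
    sample = ("This is handled by the module. "
              "The configuration file was modified by the administrator before the update was applied.")
    for sentence, hits in flag_ntis(sample):
        print(f"- {sentence}\n  indicators: {', '.join(hits)}")
```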

In both human- and machine-produced texts, revisers and editors are positioned as the bastion of quality; editing and/or revision are necessarily included in the translation process as an assurance to both the language service provider and the client that a quality translation has been delivered. Yet, despite the prevalence of revision in translation workflows and the important role of revisers in approaches to quality assurance, revision remains largely absent from extant translation quality assessment (TQA) models. Current models that adopt a predominantly product-based approach are too narrow: they fail to recognize revision as an integral part of the translation workflow and do not account for the role played by self-revision. Likewise, existing TQA models ought to be refined to address current process research that investigates cognitive aspects of the translation task. The present article calls for the inclusion of editing and revision in TQA models as vital components that complement existing quality frameworks and account for this emerging body of knowledge.

Current research on translation revision highlights the importance of revisiting both editing and revision and their relationship to translation quality assessment. Robert and Brunette (2016), for example, note the lack of a translation revision model. Their research indicates the importance of problem recognition during the revision task and suggests that the ability to explain a potential solution may improve overall revision quality. This finding is in line with Muñoz Martín’s (2014) description of revision as it relates to meta-cognition and is also reminiscent of Angelone’s (2010) tripartite model of uncertainty management. Angelone (2010) describes how translators must first recognize a problem in the text before proposing and evaluating a translation solution. In contrast, Robert (2014) notes that, while reflection and reformulation may form a strategy during the revision process, this is not necessarily indicative of higher translation quality. These more recent empirical studies on translation revision, coupled with findings that are at times contradictory, point to the need to revisit the role of revision in translation quality and to include revision in TQA models.

One resulting challenge for these revised TQA models is the need to determine what constitutes quality during the revision task, particularly in light of how the translation process has shifted with the introduction of computer-assisted translation tools, such as translation memories and terminology management systems, as well as machine translation, into translation workflows. This article therefore also contributes to the discussion by examining multiple perspectives on quality assessment and reviewing cognitive research that investigates both product and process and their relationship to quality. Several scholars (e.g., Halverson 2013; Muñoz Martín 2014; Alves 2015) have argued that using cognitive linguistic research paradigms or cognitive science to examine traditional questions or issues in the field may enrich our understanding of these concepts. Risku (2010), for instance, demonstrates that applying a cognitive science perspective to research involving technical communication and translation is useful; in particular, she suggests how findings in cognitive science may alter our understanding of commonly held concepts in translation studies. Jääskeläinen (2016) similarly stresses the importance of revisiting the notion of quality in translation by examining process-oriented research in conjunction with product-oriented approaches to translation quality, and argues that the translation process should be included in translation quality assessment rather than quality being determined solely from a product-oriented perspective. In line with this approach, the present article examines three concepts in relation to quality – namely adequacy, distributed cognition, and salience – and how they are treated in research on the translation process, post-editing, and translation technology. By examining current scholarship related to these concepts and revealing the implicit assumptions about quality that it carries, the case will be made to incorporate these tasks into translation quality assessment models.

The choice of these three concepts helps to elucidate several aspects of translation quality. The first, adequacy, represents a more traditional, product-oriented approach to translation quality, while the latter two are more germane to the discussion of editing and revision tasks during the translation process. In particular, distributed cognition reveals changes in the translation task brought about by the inclusion of translation technologies in translation workflows, and the combination of product- and process-oriented approaches provides a broader perspective on translation quality. Likewise, the construct of salience complements the previous two, insofar as it highlights changes in the task paradigm resulting from translators’ interaction with translation technologies. The choice of these three concepts by no means represents an attempt at an exhaustive overview of cognitive constructs that may merit investigation. Instead, these concepts prove useful for examining assumptions related to translation quality assessment and for illustrating the importance of including editing and revision in TQA models.

The remainder of this article is structured as follows. First, the terms editing and revision are discussed and differentiated to clarify their usage. The three subsequent sections examine the relationships between translation quality assessment and adequacy, distributed cognition, and salience, respectively. Each of these three perspectives offers important insights into the notion of quality and the role of editing and revision in the translation process. The article concludes with an expansion and reiteration of the argument for including these tasks in TQA models.

2.Revising and editing

Prior to examining potential assumptions that underlie the decision to incorporate editing and revision in translation process workflows, an important terminological distinction should first be drawn between editing and revision. These two concepts are sometimes treated as interchangeable, but in fact represent different behaviors. Both tasks regularly figure into translation projects, yet the scope of work entailed in each differs considerably. While it is difficult to ascertain the true reason for the terminological confusion between editing and revision, researchers should be aware of several potential reasons that these terms are regularly interchanged and should carefully interrogate which concept is being referenced in the extant literature. As noted in ISO 17100, the term revision is occasionally referred to as bilingual editing. Likewise, several industry entities (e.g., Common Sense Advisory; American Translators Association) and the American standard, ASTM 2575, use the term editing to refer to a check of a translation against the source language material. The ISO standard, however, would refer to this bilingual check of the target text against the source language content as revision. The issue is further compounded by the fact that translation workflows are often described as following a TEP (translation-editing-proofreading) paradigm, in which it is unclear whether the second step refers to monolingual editing or a bilingual revision of the target text against the source language version. Whether this terminological conflation of editing and revision in certain contexts is a regional variant or linguistic shorthand, the ISO definitions will be observed for the purposes of this article.

The editing task is typically regarded as involving only the target language version of the translated text (Mossop 2014; ISO 17100). Depending on the level to which a text is to be reviewed, an editor may make changes with regard to style, structure, or content in an effort to adhere to a pre-determined style guide or set of linguistic conventions. The editor, however, is focused on the target text alone and does not check the translation against the source language version of the text.

The revision task constitutes a check of the target language version of a text against the source language text (Mossop 2014; ISO 17100). In contrast to editing, revision involves the evaluation of the content that appears in both versions of the text and the evaluation of the translation both in its form and in its suitability for purpose (ISO 17100). While both editing and revision are concerned with the quality of the target text with respect to terminology, syntax, style, and compliance with a client-specified style guide, the comparative nature of revision fundamentally differentiates these tasks in the translation process.

As a related concept, self-revision describes a task that could be considered a hybrid of editing and revision, insofar as the translator herself reviews the translation in conjunction with the source language version. The translator may also choose to review the translation without checking the source text (i.e., edit the translation); however, the overlap of editing and revision behavior makes it difficult to differentiate clearly between these two tasks. Künzli (2007, 116), drawing on empirical research, describes self-revision as a unique stage of the translation process. His research, along with that of Shih (2006) and Englund Dimitrova (2005), indicates the importance of task definition in shaping the approach adopted by translators during self-revision. Englund Dimitrova (2005) also emphasizes that self-revision may alter the translation after the initial production of a target text. Furthermore, self-revision does not always represent a temporally unique stage or phase of the translation process, since editing and revision behaviors may occur as the target text is drafted. Mellinger (2014) describes potential differences in drafting and editing behavior when translators work with translation memory and posits that translators may differ with regard to specific drafting behaviors.

3.Revision and adequacy

The concept of adequacy will first be reviewed since it is representative of a traditional, product-oriented approach to translation quality assessment. The assumptions associated with adequacy should be considered prior to examining more process-oriented approaches. Previous research by scholars of both human and machine translation has described adequacy largely in terms of the translation product. Even-Zohar (1975, 43; quoted in Toury 2012, 79), for example, describes an adequate translation produced by a human translator as one that “realizes in the target language the textual relationships of a source text with no breach of its own [basic] linguistic system.” This definition highlights the intrinsic relationship between source and target language versions and positions the source text as the reference against which the target text will be compared. Toury (2012, 79ff) describes this norm-governed approach to translation as potentially incompatible with certain target language conventions, particularly extra-linguistic considerations. By contrast, acceptable translations might be those that subscribe to target language norms.

Notably absent from this discussion is the notion of quality. Instead, the positioning of a translation on a continuum between adequate and acceptable allows for micro-level textual features to be described in terms of macro-level norms. Rather than specific instances being compared between source and target language to determine whether an amorphous, difficult-to-operationalize error has been introduced, these target language renditions are considered to be in the service of a larger orientation toward either source or target linguistic systems.[Note 2: While some textual features may be difficult to classify as errors when in the service of an overarching norm, certain linguistic or terminological errors can be more easily identified.] Toury readily notes that this analysis has been predominantly applied to more humanistic translations, yet he concedes its appropriateness for translations that are more pragmatic in nature.

Vermeer ([1989] 2004), in outlining skopos theory, similarly describes adequacy in terms of the translation product but focuses on the orientation of the target text toward the target culture. This approach decouples the source and target language versions with regard to the form and structure of the texts, thereby allowing a translation to be deemed adequate should it fulfill its overarching function. The achievement of what Vermeer describes as “intertextual coherence” (229) situates the translator as actively adjusting the target language version of the text on the basis of its intended function.

Descriptive translation studies, however, finds little traction in pragmatic, non-literary translations for hire; the translation standards cited above are largely product-focused and emphasize the complicated idea of equivalence. For example, ASTM 2575 describes a translation as a “target text based on a source text in such a way that the content and in many cases, the form of the two texts, can be considered to be equivalent.” This standard, along with its European counterpart EN 15038 and the international standard ISO 17100, describes the quality of a translation as based on pre-negotiated standards or instructions of the client.[Note 3: ISO 17100 (2015) has superseded the European standard; however, given the prevalence of EN 15038 in the literature, both standards are mentioned here as a point of reference.] These instructions are variable and flexible, but both standards agree that translation and revision are at the core of quality translation services. Gouadec (2010) also emphasizes client specifications as being of paramount importance.

Client specifications, however, preclude an approach to quality that is solely oriented toward the target text. Instead, quality must be considered as a confluence of several factors, which might include project requirements, technological and resource constraints, as well as process-specific behaviors. Bass (2006, 94) recognizes this imperative with respect to translation: translation projects must negotiate “competing imperatives of cost, time, and quality” while taking into account extra-linguistic, non-textual obstacles to quality that have a direct bearing on the final translation product.[Note 4: Bass’s (2006) work focuses largely on localization; however, these comments can be extended to other translation contexts. For a description of quality management specific to localization, see Dunne (2006).] This issue is further compounded if we consider that language service providers and clients may have difficulty identifying a quality translation product and rely instead on a probabilistic estimate of whether a quality product will be delivered (Dunne 2012). This information asymmetry among translation buyers and clients, language service providers, and translators challenges the ability to adopt a product-oriented approach to translation quality.

Beyond human translation projects, the conception of equivalence has also served as the starting point for many studies in machine translation in the pursuit of adequate translations. For instance, BLEU scores are used as a metric to approximate the relative closeness of a machine-translated text to a human translation. Initially proposed by Papineni et al. (2002), this measure has become a yardstick against which new quality metrics in machine translation are compared; however, it remains focused squarely on the product and on the relative similarity of a target text to reference texts. Additionally, human assessment of MT output can be used in conjunction with automated measures of adequacy to determine translation quality (Specia et al. 2011). These metrics alone only serve as indicators of the quality of initial MT output, and post-editors are often employed to adjust raw output to serve a specific purpose.
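
To make the product orientation of such metrics concrete, the sketch below computes a simplified sentence-level BLEU score – modified n-gram precision combined with a brevity penalty, following the logic of Papineni et al. (2002) – for a single machine-translated candidate against a single reference. The single-reference, unsmoothed setup and the sample sentences are simplifying assumptions for illustration; the published metric operates at corpus level over multiple references.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with a single reference (illustrative only)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Modified precision: clip each candidate n-gram count by its reference count.
        overlap = sum(min(count, ref_counts[gram]) for gram, count in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

if __name__ == "__main__":
    ref = "the contract must be signed by both parties"
    mt_output = "the contract has to be signed by both parties"
    print(f"BLEU = {bleu(mt_output, ref):.3f}")
```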

Still other inquiries into translation quality in the context of pragmatic translation performed by human translators provide additional approaches to the conception of quality. Williams (2004), for instance, examines argumentation theory as a means by which translators can assess the translation product beyond micro-level errors. Rather than lexical and syntactic structures, Williams advocates examining a translation at the level of text, message, and argument. This approach allows assessment decisions to be guided by the criticality of the identified error and by the end use and function of the text itself.[Note 5: Though not explicitly stated in the volume, Williams’s (2004) approach to translation quality assessment seems to draw on Mann and Thompson’s (1988) Rhetorical Structure Theory.] Taking a different approach, House (2015) revises previous versions of her translation quality assessment model to consider both the source and target texts and proposes that covert and overt translation form part of a continuum and may be prioritized in a given translation. This “double-linkage” of both texts squarely focuses on translation as “a linguistic-textual operation” (142–143) and provides an encompassing model in which translations can be evaluated.[Note 6: The description of translation quality assessment models here is by no means exhaustive. Drugan’s (2013) monograph on quality in professional translation traces the development of top-down and bottom-up approaches to assessment and quality that are often at odds. Lee (2006) and Brunette (2000) similarly point to competing conceptions of translation quality. For a more extensive discussion of various approaches to translation quality assessment and the significant debate surrounding what constitutes quality, see House (2015).]

The position of editing and revision in the context of translation quality, however, is not without its detractors. Some scholars hold that editing and revision do not belong in the discussion of translation quality assessment (see Drugan 2013, 69 for an extended literature review), but the present article aligns with Mossop’s (2014) assertion that editing and revision should be addressed in scholarship on translation quality assessment. Brunette’s (2000) comments on the text with which editors and revisers work are particularly illuminating. Often, editors and revisers work with final (or nearly final) versions of a translation; in this respect, translation quality assessment is performed before their editorial work. Yet advances in translation tools, coupled with changes in text production workflows, move the reviser’s role earlier in the translation workflow. Tasks performed by the editor – e.g., the provision of iterative feedback to translators on earlier drafts; terminology management prior to and during translation work; style harmonization across multiple translators; and the creation of style guides for the end client – no longer occur exclusively after the entire translation has been completed.[Note 7: The traditional waterfall approach to project management has been described in the context of translation and localization (e.g., Dunne 2011; Drugan 2013), as have more iterative approaches, such as the Agile methodology (Dunne 2011).] In particular, the shift from solely text-based translation projects to content-based localization necessitates rethinking a linear progression through translation-editing-proofreading. These tasks are linked to the translation task and the ultimate production of the translation product and, while not translation in the most traditional sense, should at the very least be considered as contributing to the notion of translation quality.

Revision thus plays an integral role in producing adequate or acceptable translations in the eyes of language service providers. The underlying assumption for including revision in the translation workflow is succinctly described in the American standard as “the first opportunity to confirm specifications compliance,” in which the source and target texts are compared to verify that the latter is “complete, accurate, and free from misinterpretations of the source text and that the appropriate terminology has been used throughout.” As such, the revision task can be perceived as a quality control task aimed at detecting and correcting errors.[Note 8: Incidentally, Koskinen’s (2008) ethnographic study of the European Commission notes that the revision process, while touted as imperative, often is not afforded sufficient resources and time in translation workflows. The potential elimination of revision from the translation workflow further complicates notions of quality, insofar as professional standards are at times misaligned with practice.] Translation companies rely heavily on this step in their workflows, with revisers lending their linguistic expertise to evaluate translations that the companies themselves cannot necessarily vet.

Notions such as accuracy and correct interpretation of the source text, however, reveal two presuppositions often present in professional translation projects. The first might be considered a positivistic understanding of an underlying meaning inherent in the source text. For instance, Koby et al. (2014, 415) argue for “maximum fluency and accuracy” when attempting to define translation quality after rejecting the “possibility of ‘perfect’ accuracy and fluency.” This product-oriented approach seems to echo language industry standards without taking into account extra-linguistic considerations. While Koby et al. seem to assume that these non-textual features will be evident or accounted for in the text itself, there is little evidence to suggest that only one translation will achieve this specific end. They further assume that these features will, in fact, be known to the editor or evaluator. In addition, they argue for the creation of an objective measure against which translations may be assessed. Again, this view of a stable source text meaning and clear, pre-negotiated client instructions appears untenable, at least in the present context, and would rely on a specific conception of quality, on which the authors themselves disagree.[Note 9: It should be noted that Koby et al. (2014) openly admit to having competing opinions about a narrow or broad definition of quality. Their aim is for the discussion to drive further examination of what constitutes quality.]

The second presupposition is that the reviser is in a position to evaluate the translation. Standards and practitioners often characterize the reviser as a more experienced translator who possesses the requisite linguistic and domain-specific knowledge to identify misinterpretations and infelicities in the draft translation. Expertise studies, however, suggest that translation experience as a necessary qualification of revisers might require greater scrutiny. Researchers have begun to question whether expertise in the translation task domain is sufficiently similar to the revision task for it to be transferable (cf. Shreve 2006). Translation process researchers have also begun to investigate paraphrasing, editing and revision, and post-editing, as well as the point at which a text is no longer edited and is deemed of sufficient quality for work to begin on subsequent tasks (e.g., Englund Dimitrova 2005; O’Brien 2007; Whyatt, Stachowiak and Kajzer-Wietrzny 2016). These studies have only just begun to reveal translator and editor behavior and remain limited in number and scope, such that definitive conclusions cannot yet be drawn. Yet the mere raising of these questions highlights possible areas in which the role and characterization of the editor must be rethought.

Adequacy, and its intrinsic relationship to quality and, by extension, to revision, is thus complicated by its positioning within the translation process. Computer-assisted translation and machine translation systems further blur the lines of translation quality, insofar as a third agent of text production is introduced into an already entangled dyad. A product-oriented focus on translation quality alone does not sufficiently address quality and revision. To better understand the extent to which MT and CAT systems alter the revision process, the concepts of distributed cognition and salience will be examined as process-oriented approaches to translation quality that take into account translation workflows and the translation task paradigm.

4.Revision and distributed cognition

To this point, a predominantly product-oriented approach to translation quality has been discussed; adequacy and accuracy refer largely to micro- and macro-level textual features and to an evaluation of their overall appropriateness to the context in which they appear. However, we should also consider the process by which these translations are produced. Therefore, as a second concept to examine in relation to quality, the role of distributed cognition in revision will be considered as a means to investigate the translation workflow, because several people (i.e., translators, revisers, proofreaders) are involved in shaping the final version of the target text. Jääskeläinen (2016) traces translation quality as it relates to translation process research and elucidates this distinction using Abdallah’s (2007) tripartite definition of product, process, and social quality. Her approach to translation quality emphasizes translation process research in a number of areas and highlights the importance of not divorcing the translation product from its production.[Note 10: This approach stands in stark contrast to House’s (2015, 118) dismissal of the ability of translation process research to investigate cognitive processes in the service of translation quality assessment. The main contention forwarded by House is that observable behavior cannot conclusively reveal cognitive processing because the latter is inherently unobservable.]

The process by which translations are produced is of particular importance in light of technological advances and the inclusion of translation aids. From initial attempts to achieve fully automated, high-quality machine translation in the 1950s and 1960s to the subsequent development of the translator’s workstation, computer-assisted translation has become a commonplace feature of non-literary translation work (Hutchins 1998; Quah 2006). The advent of translation memories and terminology management systems, coupled with the development of rule-based and statistical machine translation, has altered the professional translation landscape, and the ability to manage and use these systems has practically become a requirement for professional translators.

Drugan (2013) recognizes this change in the language industry and outlines the limited scholarship to date on the impact of translation technology on quality. Rather than focusing on a specific language pair or translation tool, Drugan describes workflow – that is, the larger process of translation involving various steps and tasks – as having a direct bearing on translation quality. Moreover, the traditional translation-editing-proofreading paradigm requires considerable expansion if translation tools are implemented as often recommended by their developers. Cronin (2013, 128) argues along different lines – language industry attempts at wide-scale translation automation ultimately emphasize product-oriented conceptions of quality:

Quality is, in a sense, the return of the repressed translation detail. The careful, detailed attention to text, language, and meaning that is implicit in the act of translation re-emerges in the context of automation in the debates about the extent and role of post-editing, and about how to achieve acceptable quality in translation output.

As Cronin argues, the process of automation has prompted its proponents to revisit the translation product as a means of measuring success or quality in machine translation output. While translation providers emphasize the textual product, an overarching process – in this case, the way in which the translation was produced – has implicitly influenced how quality is operationalized.

Yet the use of computer-assisted translation tools fundamentally alters the translation task and adds an additional layer of mediation to translation and revision. Dragsted’s (2008) assertion that computer-assisted translation is an act of distributed cognition highlights this shift in the translation task. Distributed cognition, a concept first described by Hutchins (1995), is an approach in cognitive science which holds that cognitive processing can occur beyond the individual. In reviewing Hutchins’s work, Turner (2016, 76) describes distributed cognition as a condition wherein “cognitive processes are distributed among multiple human actors, external artefacts and representations and the relationships between these elements […] work together to achieve the system’s goal.” In the context of Dragsted’s (2008) work, then, the translation process is an instance of distributed cognition insofar as at least two actors are present, as well as several technologies that work toward the end goal of a quality translation.[Note 11: This work has been further supported by Risku, Windhager and Apfelthaler (2013) and Risku and Windhager (2013).] Dror and Harnad (2008a, 2008b) emphasize, however, that distributed cognition in the service of an overarching goal is only facilitated by technology. The integration of translation memories and terminology management systems into the translation process allows multiple actors to work asynchronously, offload some of their cognitive processing onto technology, and extend their capabilities.[Note 12: The influence of technology on cognition should not be taken for granted. Glenberg (2006), for instance, investigates distributed cognition and its ability to influence language comprehension and concludes that it does not significantly change cognitive processing. This debate falls outside the scope of this article; however, the application of this cognitive science concept to language comprehension is another example of the utility of revisiting traditionally held notions, as argued by Muñoz Martín (2010).]

In the case of computer-assisted translation and machine translation, the translator or MT user performs several processes (e.g., comprehension, transfer, and production) with the computer system. This distributed act of cognition creates a shared responsibility for the final translation product, in contrast with a human-only translation production model. Rather than being the sole creator of the target text, the translator now serves as arbiter and editor of the machine-generated content. As such, the translator-cum-editor is charged with the dual, and sometimes competing, role of generating an appropriate translation and of ensuring that stored or automatically generated translations are appropriately applied and revised as necessary. A solely product-oriented perspective on quality would discount this substantial shift in the production model.

The change described above, though, does not provide a complete view of the translation process when working with CAT and MT systems. For these translation aids to be truly effective in supporting the language service provider in the translation task, translated content must be stored, processed, and leveraged. These aids are often the result of the work of other language professionals (e.g., translators, editors, proofreaders, writers). Killman (2015) highlights several challenges inherent to translation tool use, particularly with respect to extra-linguistic contextual features occurring outside the segment on which the translator is currently working. Cognition in this mode of text production is thereby distributed further, such that translation quality relies on a myriad of factors and constraints.

In view of this instantiation of distributed cognition, the editing task performed by the translator in the initial production of the target text necessarily requires significant evaluation of the sources of information as to their perceived quality. Additionally, the translator must determine whether the use of these resources is in fact in support of the translation task. Teixeira (2014) and Mellinger and Shreve (2016) find that translators tend to persist in working with these aids regardless of whether they are, in fact, useful. Editing performed by the translator when drafting the target text must therefore ensure that the quality, in the eyes of the translator, is appropriate for the given context and task.

In the TEP paradigm, though, translation is still the first step in the workflow; the next step is often revision by another agent, much as in a translation process that does not use a translation tool or aid.[Note 13: Suojanen, Koskinen and Tuominen (2015, 130) reference the first step as a heuristic quality control process in which the translator self-revises or checks his or her work at various stages. In the context of the present article, self-revision is considered to be completed by the translator prior to passing the translation to another agent for review.] Herein lies another implicit assumption: that the reviser adds value to the produced text. As described in many standards, the reviser’s role is to detect and correct any errors that are present in the translation. If, however, the translator is already performing much of this role in the review of machine translation output or translation memory matches, the question should be raised whether the work performed by the reviser is necessary or redundant. Including revisers in the process may provide peace of mind to the client as an extra check, which may be reason enough to include them. Moreover, revisers may be necessary to review novel translations produced by the translator when translated material is not available to support the translation task. The quality that these revisers ensure, or the value that they add, may require additional investigation.

At present, a relative dearth of research on revision behavior and performance impedes drawing definitive conclusions. Evidence is accumulating that revisers tend to approach the editing task with a “detect-and-correct” mindset, as shown by Mellinger and Shreve (2016). The revision task seems to elicit a “red-pen” effect, in which revisers and editors introduce preferential changes into the target text. While these changes do not necessarily result in an inappropriate translation, they may lead to inefficiency in the translation and revision process. Consequently, the compensation and time spent making ostensibly unnecessary alterations to the text may outweigh any value added during the translation process. In other words, revisers acting in this manner may in part be correctly tailoring a translation for a targeted group of readers, but these changes may be economically inefficient. Further research is needed to replicate and verify this behavior across different editing styles and editor profiles.

5.Revision and salience

To this point, product- and process-oriented approaches to translation quality have been discussed in relation to the role and position of editing and revision. These approaches are useful for understanding specific aspects of translation quality, but in isolation they cannot offer a full account of revision and its relationship to quality. Therefore, the notion of salience will be discussed as a third and final perspective on quality assessment, given its potential usefulness for understanding the final evaluation of what constitutes quality in a translated text.

In linguistics, salience often refers to “the degree of relative prominence of a unit of information, at a specific point in time, in comparison to other units of information” and can occur at the level of entities or utterances, information and discourse structure, and extra-linguistic features (Chiarcos, Claus, and Grabski 2011, 2; see also Allan and Jaszczolt 2011 and Giora 2003).[Note 14: Racz (2013), by comparison, adopts a sociolinguistic view, in contrast to these more traditional notions of salience in linguistics.] In psychological research, salience typically describes “the ability of a stimulus to stand out from the rest,” such that the stimulus is more likely to be attended to and to enter into subsequent cognitive processing (Ellis 2016). As Ellis (342–343) asserts, three aspects are relevant to determining salience: (1) the physical world and a person’s embodiment and senses, which allow specific sensations to be perceived as more prominent than others; (2) the person’s previous experiences, which shape the context in which stimuli are perceived; and (3) the person’s expectations of what should follow based on the context.

Applied to the context of translation and revision, salience provides insight into the overediting behavior observed by Mellinger and Shreve (2016) and helps question the role that technology might play in explaining this phenomenon. Translators who provide language services are often confronted with the task of working with computer-assisted translation tools that are ostensibly designed to aid the translator’s progress. As noted above, the introduction of CAT and MT systems changes the task of the translator and reviser; there is an added technological component that necessarily draws attention and focus away from the editing task and toward the use of a specific translation aid. This change is significant in its own right, and user familiarity with the system can perhaps partially explain changes in the translation product.

A more important consideration might be the shift in the task model in which translators, editors, and revisers work. Mellinger and Shreve (2016) argue that a fundamental change occurs in the translation workflow, particularly if we consider the tripartite model of uncertainty management described by Angelone (2010) – that is, problem recognition, solution proposal, and solution evaluation. In translation without CAT tools or MT systems, a translator must translate a text, recognize problems in the source text, propose potential solutions, and evaluate their appropriateness in light of a number of factors. This iterative process often focuses on the source text as the main locus of recognized problems.

However, when working with CAT and MT systems, the solution proposal stage takes on increased salience. The distributed nature of the cognitive task described above allows prior translations (that is, previous solutions) leveraged by the tools to be presented as candidate solutions for the new task. In this task configuration, a second locus of problems is introduced – that of the stored translations. The solutions that have been stored or leveraged are then evaluated by the translator-cum-editor for possible inclusion in the target text. Nevertheless, the quality or content of these segments cannot be guaranteed. As LeBlanc (2017) notes, the implementation of TM systems in professional environments is often guided by business practices or client preference; consequently, suboptimal translation matches may be stored in the TM. With two competing problem loci, coupled with a foregrounded solution proposal, quality in the revision task may require negotiating multiple renditions and translation decisions that are codified within a string of characters.
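As an illustration of how stored solutions are foregrounded in this task configuration, the following minimal sketch (in Python) mimics a translation memory lookup: previously stored segment pairs are scored against a new source segment, and sufficiently similar pairs are proposed to the translator-cum-editor for evaluation. The example segments, the 75% threshold, and the function name are illustrative assumptions rather than a description of any particular CAT tool.

```python
# Minimal sketch (illustrative, not a description of any specific CAT tool):
# a new source segment is compared against stored segment pairs, and
# sufficiently similar prior solutions are proposed for the reviser to evaluate.
from difflib import SequenceMatcher

# Hypothetical translation memory: (source segment, stored target segment)
translation_memory = [
    ("The committee approved the annual report.",
     "El comité aprobó el informe anual."),
    ("The committee rejected the annual report.",
     "El comité rechazó el informe anual."),
]

def propose_solutions(new_segment, memory, threshold=0.75):
    """Return stored translations similar to the new segment, best match first."""
    candidates = []
    for stored_source, stored_target in memory:
        score = SequenceMatcher(None, new_segment, stored_source).ratio()
        if score >= threshold:
            candidates.append((score, stored_source, stored_target))
    return sorted(candidates, reverse=True)

# The tool foregrounds prior solutions; the reviser still decides whether
# any proposed match is adequate for the new context.
for score, src, tgt in propose_solutions(
        "The committee approved the annual budget.", translation_memory):
    print(f"{score:.0%} match: {src!r} -> {tgt!r}")
```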

Revisers should be considered in translation quality assessment models since they are additional agents involved in producing the finished rendition of the target text. These language professionals do not approach the revision task as tabula rasa; they instead bring their own conception of what a possible translation may in fact be (Lörscher 1986, 1991). The salience of their experience and expectations of what constitutes an appropriate target language rendition may conflict with the suggestion proffered by a translation memory. When considering Halverson's (2015) reconceptualization of the "literal translation hypothesis," it becomes clear that competing renditions and quality conceptions are at play. Halverson describes literal translation not necessarily as a close rendering of the source language, which might be considered accurate or adequate in the product-oriented terminology cited above, but rather as a default mode of translation specific to the translator (or, in this case, the reviser). This target text rendition of the translator's representation of the source text will undoubtedly differ from those of other language professionals. Charged with detecting and correcting errors in translation projects that incorporate revision or editing, the reviser may therefore impose their own conception of the source text on the target text produced by the translator. Their work may occur largely at the segment level, although the extent to which revisers can make larger structural changes may be constrained by the CAT tool or the task brief provided by the language service provider. In such cases, the competing target text renditions may take the form of the overediting behavior observed by Mellinger and Shreve (2016). Consequently, the assumption that editing can serve the same quality control role, and be performed in much the same way, as in human-only translation must be challenged in light of the increased salience of evaluative behavior in the translation workflow. Likewise, the salience afforded to the solution proposal and evaluation stages of problem-solving behavior challenges prior conceptualizations of the role of editing and revision in translation quality assessment.

6.Conclusion

Failure to reconsider the way in which revision is performed may have a significant impact on the notion of quality. Revision ought to take into account a text production process that has shifted away from a relatively linear workflow, and the foregoing discussion of adequacy, distributed cognition, and salience vis-à-vis revision behavior suggests that a linguistic model cannot be adopted as the sole indicator of quality. Instead, TQA models need to be expanded to include revision and editing as explicit components. Product-oriented models do not sufficiently account for extra-textual considerations, nor do they capture the shifting task models that arise from the introduction of new translation technologies. These models may be subsumed within more comprehensive quality frameworks, but they cannot be considered the sole standard against which quality is measured. The implicit assumptions that purely product-oriented approaches make about revision and editing render a solely product-oriented approach to quality untenable.

The re-examination of translation quality to include editing and revision has potential implications for the language industry. To provide one example, typical pricing models rely largely on word count as the primary determinant of client cost. When working with translation memory tools, language service providers may also offer a discount based on a CAT tool algorithm that determines the relative "likeness" of a stored translated segment to the new segment to be translated.15 Translators, in turn, may be paid less when editing proposed solutions than when providing their own translation. The concomitant performance of translation and revision, the competing pricing structures associated with each task, and the manner in which each task is evaluated with respect to quality potentially upend current practices and require greater scrutiny.
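To make the pricing mechanics concrete, the following minimal sketch (in Python) shows how a word-count model might apply discounts by fuzzy-match band. The bands, rates, and function name are hypothetical illustrations and are not drawn from this article, from any specific CAT tool, or from actual industry pricing.

```python
# Hypothetical sketch of discounting by fuzzy-match band; the figures are
# illustrative only, not actual industry rates or any tool's algorithm.

FULL_RATE_PER_WORD = 0.12  # hypothetical base rate per source word

# (lower bound of match score, fraction of the full rate charged)
DISCOUNT_BANDS = [
    (1.00, 0.25),  # exact / 100% matches
    (0.85, 0.60),  # high fuzzy matches
    (0.75, 0.80),  # low fuzzy matches
    (0.00, 1.00),  # no usable match: full rate
]

def segment_price(word_count, match_score):
    """Price one segment according to the band its fuzzy-match score falls into."""
    for lower_bound, rate_fraction in DISCOUNT_BANDS:
        if match_score >= lower_bound:
            return word_count * FULL_RATE_PER_WORD * rate_fraction
    return word_count * FULL_RATE_PER_WORD

# A ten-word segment with a 90% match is billed at a reduced rate, even though
# evaluating the proposed match still demands the reviser's attention.
print(round(segment_price(10, 0.90), 2))  # 0.72 under these hypothetical figures
```

Under such a scheme, editing a proposed match is compensated at a fraction of the full rate, even though the evaluative effort it demands is precisely what the preceding discussion calls into question.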

That is not to say that this compensation model is inherently flawed or that revision is a non-value-adding service; rather, it indicates that these industry models rest on largely unexamined assumptions about where value is added and quality is assured. Language service providers ought to review their current workflows and internal processes to determine when and how value is added during the translation workflow. As stated above, revisers are typically positioned as the sole bastion of quality in the translation process, and this positioning should be investigated in greater detail given the implicit assumptions about the task. Moreover, language service providers ought to increase their emphasis on terminology management and identify client-specific requirements that can be incorporated throughout the process. In doing so, the challenges posed by product-oriented perspectives on translation quality might be mitigated, and these requirements can provide a benchmark against which revisers and editors evaluate the target language version. A process-oriented perspective on translation quality that incorporates editing and revision tasks complements our understanding of what constitutes translation quality, particularly with regard to the use of translation aids.

To conclude, revision in the digital age requires the text production process to be included in the assessment of translations; it is insufficient to approach quality as a series of objective measures that only consider the texts themselves. Instead, translation quality ought to be rethought to encompass the evolving nature of editing and revision, particularly as it pertains to the ever-changing role of translation technology in the document lifecycle.

Acknowledgements

I would like to thank the anonymous reviewers and editors who provided constructive and thought-provoking feedback that has helped strengthen this article.

Notes

1. Similar revision practices are seen in the European Union, particularly with respect to multilingual legislation (Wagner, Bech and Martínez 2002). Revision processes are sometimes carried out by senior colleagues; in other cases, lawyers or lawyer-linguists perform this task, in addition to other drafting and linguistic tasks that fall outside the scope of the traditional translation workflow (Šarčević and Robertson 2015).
2. While some textual features may be difficult to classify as errors when they serve an overarching norm, certain linguistic or terminological errors can be identified more easily.
3. ISO 17100 (2015) has superseded the European standard; however, given the prevalence of EN 15038 in the literature, both standards are mentioned here as a point of reference.
4. Bass's (2006) work focuses largely on localization. However, these comments can be extended to other translation contexts. For a description of quality management specific to localization, see Dunne (2006).
5. Though not explicitly stated in the volume, Williams's (2004) approach to translation quality assessment seems to draw on Mann and Thompson's (1988) Rhetorical Structure Theory.
6. The description here of translation quality assessment models is by no means exhaustive. Drugan's (2013) monograph on quality in professional translation traces the development of top-down and bottom-up approaches to assessment and quality that are often at odds. Lee (2006) and Brunette (2000) similarly point to competing conceptions of translation quality. For a more extensive discussion of various approaches to translation quality assessment and the significant debate surrounding what constitutes quality, see House (2015).
7. The traditional waterfall approach to project management has been described in the context of translation and localization (e.g., Dunne 2011; Drugan 2013), as have more iterative approaches, such as the Agile methodology (Dunne 2011). In particular, a shift from solely text-based translation projects to content-based localization necessitates rethinking a linear progression through translation-editing-proofreading.
8. Incidentally, Koskinen's (2008) ethnographic study of the European Commission notes that the revision process, while touted as imperative, is often not afforded sufficient resources and time in translation workflows. The potential elimination of revision from the translation workflow further complicates notions of quality, insofar as professional standards are at times misaligned with practice.
9. It should be noted that Koby et al. (2014) openly admit to having competing opinions about a narrow or broad definition of quality. Their aim is for the discussion to drive further examination of what constitutes quality.
10. This approach is in stark contrast to House's (2015, 118) dismissal of the ability of translation process research to investigate cognitive processes in the service of translation quality assessment. The main contention forwarded by House is that observable behavior cannot conclusively reveal cognitive processing because the latter is inherently unobservable.
11. This work has been further supported by Risku, Windhager and Apfelthaler (2013) and Risku and Windhager (2013).
12. The influence of technology on cognition should not be taken for granted. Glenberg (2006), for instance, investigates distributed cognition and its ability to influence language comprehension and concludes that it does not significantly change cognitive processing. This debate falls outside the scope of this article; however, the application of this cognitive science concept to language comprehension is another example of the utility of revisiting traditionally held notions, as argued by Muñoz Martín (2010).
13. Suojanen, Koskinen and Tuominen (2015, 130) reference the first step as a heuristic quality control process in which the translator self-revises or checks his or her work at various stages. In the context of the present paper, self-revision is considered to be completed by the translator prior to passing the translation to another agent for review.
14. Racz (2013), in contrast to these more traditional notions of salience in linguistics, adopts a sociolinguistic view.
15. How similar a stored translated text segment is to a new segment is often described in terms of matches. An exact match or 100% match is a segment that is identical to a translation unit stored in a translation memory. A fuzzy match, in contrast, denotes a translation that is similar to a stored translation unit but ostensibly requires revision by the translator or editor. This determination depends on a variety of factors and can be made using character-based matching, formatting penalties, or context-based algorithms.

References

Abdallah, Kristiina
2007. "Tekstittämisen laatu – mitä se oikein on?" [Subtitling quality – what is it?]. In Olennaisen äärellä. Johdatus audiovisuaaliseen kääntämiseen [Introduction to audiovisual translation], edited by Riitta Oittinen and Tiina Tuominen, 272–293. Tampere: Tampereen yliopistopaino.
Allan, Keith, and Kasia M. Jaszczolt
eds. 2011. Salience and Defaults in Utterance Processing. Berlin: De Gruyter Mouton.
Alves, Fabio
2015. "Translation Process Research at the Interface: Paradigmatic, Theoretical, and Methodological Issues in Dialogue with Cognitive Science, Expertise Studies, and Psycholinguistics." In Psycholinguistic and Cognitive Inquiries into Translation and Interpreting, edited by Aline Ferreira and John W. Schwieter, 17–40. Amsterdam: John Benjamins.
Angelone, Erik
2010. "Uncertainty, Uncertainty Management and Metacognitive Problem Solving in the Translation Task." In Translation and Cognition, edited by Gregory M. Shreve and Erik Angelone, 17–40. Amsterdam: John Benjamins.
ASTM International
2006. "ASTM F 2575-06: Standard Guide for Quality Assurance in Translation."
Bass, Scott
2006. "Quality in the Real World." In Perspectives on Localization, edited by Keiran J. Dunne, 69–94. Amsterdam: John Benjamins.
Brunette, Louise
2000. "Towards a Terminology for Translation Quality Assessment: A Comparison of TQA Practices." The Translator 6 (2): 169–182.
Chiarcos, Christian, Berry Claus, and Michael Grabski
2011. "Introduction: Salience in Linguistics and Beyond." In Salience: Multidisciplinary Perspectives on its Function in Discourse, edited by Christian Chiarcos, Berry Claus, and Michael Grabski, 1–28. Berlin: De Gruyter Mouton.
Cronin, Michael
2013. Translation in the Digital Age. New York: Routledge.
Dragsted, Barbara
2008. "Computer-aided Translation as a Distributed Cognitive Task." In Dror and Harnad 2008a, 237–256.
Dror, Itiel E., and Stevan Harnad
eds. 2008a. Cognition Distributed: How Cognitive Technology Extends our Minds. Philadelphia: John Benjamins.
2008b. "Offloading Cognition onto Cognitive Technology." In Dror and Harnad 2008a, 1–23.
Drugan, Joanna
2013. Quality in Professional Translation: Assessment and Improvement. London: Bloomsbury.
Dunne, Keiran J.
2006. "Putting the Cart behind the Horse: Rethinking Localization Quality Management." In Perspectives on Localization, edited by Keiran J. Dunne, 95–117. Amsterdam: John Benjamins.
2011. "From Vicious to Virtuous Cycle: Customer-Focused Translation Quality Management Using ISO 9001 Principles and Agile Methodologies." In Translation and Localization Project Management, edited by Keiran J. Dunne and Elena S. Dunne, 153–187. Amsterdam: John Benjamins.
2012. "The Industrialization of Translation: Causes, Consequences and Challenges." Translation Spaces 1: 143–168.
Ellis, Nick C.
2016. "Salience, Cognition, Language Complexity, and Complex Adaptive Systems." Studies in Second Language Acquisition 38 (2): 341–351.
Englund Dimitrova, Birgitta
2005. Expertise and Explicitation in the Translation Process. Amsterdam: John Benjamins.
Even-Zohar, Itamar
1975. "Decisions in Translating Poetry." Ha-sifrut/Literature 21: 32–45.
European Standard
2006. "EN 15038: Translation Services – Service Requirements."
Giora, Rachel
2003. On Our Mind: Salience, Context, and Figurative Language. New York: Oxford University Press.
Glenberg, Arthur M.
2006. "Radical Changes in Cognitive Process due to Technology: A Jaundiced View." Pragmatics & Cognition 14 (2): 263–274.
Gouadec, Daniel
2010. Translation as a Profession. 2nd ed. Amsterdam: John Benjamins.
Halverson, Sandra L.
2013. "Implications of Cognitive Linguistics for Translation Studies." In Cognitive Linguistics and Translation, edited by Ana Rojo and Iraide Ibarretxe-Antuñano, 33–73. Berlin: De Gruyter Mouton.
2015. "Cognitive Translation Studies and the Merging of Empirical Paradigms: The Case of 'Literal Translation.'" Translation Spaces 4 (2): 310–340.
Hine Jr., Jonathan T.
2003. "Teaching Text Revision in a Multilingual Environment." In Beyond the Ivory Tower: Rethinking Translation Pedagogy, edited by Brian J. Baer and Geoffrey S. Koby, 135–156. Amsterdam: John Benjamins.
Horguelin, Paul A., and Louise Brunette
1998. Pratique de la révision. Montreal: Linguatech.
House, Juliane
2015. Translation Quality Assessment: Past and Present. New York: Routledge.
Hutchins, Edwin
1995. Cognition in the Wild. Cambridge, MA: MIT Press.
Hutchins, John
1998. "The Origin of the Translator's Workstation." Machine Translation 13 (4): 287–307.
ISO 17100
2015. Translation Services – Requirements for Translation Services. Geneva: ISO.
Jääskeläinen, Riitta
2016. "Quality and Translation Process Research." In Reembedding Translation Process Research, edited by Ricardo Muñoz Martín, 89–106. Amsterdam: John Benjamins.
Killman, Jeffrey
2015. "Context as Achilles' Heel of Translation Technologies: Major Implications for End Users." Translation and Interpreting Studies 10 (2): 203–222.
Koby, Geoffrey S., et al.
2014. "Defining Translation Quality." Tradumàtica 12: 413–420.
Koskinen, Kaisa
2008. Translating Institutions: An Ethnographic Study of EU Translation. Manchester: St. Jerome.
Künzli, Alexander
2007. "Translation Revision: A Study of the Performance of Ten Professional Translators Revising a Legal Text." In Doubts and Directions in Translation Studies: Selected Contributions from the EST Congress, Lisbon 2004, edited by Radegundis Stolze, Miriam Shlesinger, and Yves Gambier, 115–126. Amsterdam: John Benjamins.
LeBlanc, Matthieu
2017. "'I Can't Get No Satisfaction': Should We Blame Translation Technologies or Shifting Business Practices?" In Human Issues in Translation Technology, edited by Dorothy Kenny, 45–62. New York: Routledge.
Lee, Hyang
2006. "Révision: Définitions et paramètres." Meta 51 (2): 410–419.
Lörscher, Wolfgang
1986. "Linguistic Aspects of Translation Processes: Towards an Analysis of Translation Performance." In Interlingual and Intercultural Communication: Discourse and Cognition in Translation and Second Language Acquisition Studies, edited by Juliane House and Shoshana Blum-Kulka, 277–292. Tübingen: Gunter Narr.
1991. Translation Performance, Translation Process, and Translation Strategies. A Psycholinguistic Investigation. Tübingen: Gunter Narr.
Mann, William C., and Sandra A. Thompson
1988. "Rhetorical Structure Theory: Toward a Functional Theory of Text Organization." Text 8 (3): 243–281.
Mellinger, Christopher D.
2014. Computer-Assisted Translation: An Empirical Investigation of Cognitive Effort. PhD diss., Kent State University. http://bit.ly/1ybBY7W
Mellinger, Christopher D., and Gregory M. Shreve
2016. "Match Evaluation and Over-editing in a Translation Memory Environment." In Reembedding Translation Process Research, edited by Ricardo Muñoz Martín, 131–148. Amsterdam: John Benjamins.
Mossop, Brian
2014. Revising and Editing for Translators. 3rd ed. New York: Routledge.
Muñoz Martín, Ricardo
2010. "Leave No Stone Unturned: On the Development of Cognitive Translatology." Translation and Interpreting Studies 5 (2): 145–162.
2014. "A Blurred Snapshot of Advances in Translation Process Research." MonTI Special Issue – Minding Translation 1: 49–84.
O'Brien, Sharon
2007. "An Empirical Investigation of Temporal and Technical Post-Editing Effort." Translation and Interpreting Studies 2 (1): 83–136.
Orellana, Marina
1990. La traducción del inglés al castellano. Santiago: Editorial Universitaria.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu
2002. "BLEU: A Method for Automatic Evaluation of Machine Translation." In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Stroudsburg, PA: ACL.
Quah, C. K.
2006. Translation and Technology. New York: Palgrave Macmillan.
Racz, Peter
2013. Salience in Sociolinguistics: A Quantitative Approach. Berlin: Mouton De Gruyter.
Risku, Hanna
2010. "A Cognitive Scientific View on Technical Communication and Translation: Do Embodiment and Situatedness Really Make a Difference?" Target 22 (1): 94–111.
Risku, Hanna, and Florian Windhager
2013. "Extended Translation: A Sociocognitive Research Agenda." Target 25 (1): 33–45.
Risku, Hanna, Florian Windhager, and Matthias Apfelthaler
2013. "A Dynamic Network Model of Translatorial Cognition and Action." Translation Spaces 2: 151–182.
Robert, Isabelle S.
2014. "Investigating the Problem-Solving Strategies of Revisers through Triangulation: An Exploratory Study." Translation and Interpreting Studies 9 (1): 88–108.
Robert, Isabelle S., and Louise Brunette
2016. "Should Revision Trainees Think Aloud While Revising Somebody Else's Translation? Insights from an Empirical Study with Professionals." Meta 61 (2): 320–345.
Šarčević, Susan, and Colin Robertson
2015. "The Work of Lawyer-Linguists in the EU Institutions." In Legal Translation in Context: Professional Issues and Prospects, edited by Anabel Borja Albi and Fernando Prieto Ramos, 181–202. Bern: Peter Lang.
Shih, Claire Yi-Yi
2006. "Revision from Translators' Point of View: An Interview Study." Target 18 (2): 295–312.
Shreve, Gregory M.
2006. "The Deliberate Practice: Translation and Expertise." Journal of Translation Studies 9 (1): 27–42.
Spalink, Karin, Rachel Levy, and Carla Merrill
1997. The Level Edit™ Post-Editing Process: A Tutorial for Post-Editors of Machine Translation Output. Internationalization and Translation Services.
Specia, Lucia, Najeh Hajlaoui, Catalina Hallett, and Wilker Aziz
2011. "Predicting Machine Translation Accuracy." In MT Summit XIII: The Thirteenth Machine Translation Summit, 513–520. Xiamen, China.
Suojanen, Tytti, Kaisa Koskinen, and Tiina Tuominen
2015. User-Centred Translation. New York: Routledge.
Teixeira, Carlos S. C.
2014. "Perceived vs. Measured Performance in the Post-editing of Suggestions from Machine Translation and Translation Memories." In Proceedings of the AMTA 2014 Third Workshop on Post-editing Technology and Practice. Vancouver, BC.
Toury, Gideon
2012. Descriptive Translation Studies – and Beyond. Revised edition. Amsterdam: John Benjamins.
Turner, Phil
2016. HCI Redux: The Promise of Post-Cognitive Interaction. Switzerland: Springer.
Underwood, Nancy L., and Bart Jongejan
2001. "Translatability Checker: A Tool to Help Decide Whether to Use MT." In Proceedings of MT Summit VIII: Machine Translation in the Information Age, edited by Bente Maegaard, 363–368. Santiago de Compostela.
Vermeer, Hans J.
(1989) 2004. "Skopos and Commission in Translational Action." Translated by Andrew Chesterman. In The Translation Studies Reader, 2nd ed., edited by Lawrence Venuti, 227–238. New York: Routledge.
Wagner, Emma, Svend Bech, and Jesús M. Martínez
2002. Translating for the European Union Institutions. Manchester: St. Jerome.
Whyatt, Bogusława, Katarzyna Stachowiak, and Marta Kajzer-Wietrzny
2016. "Similar and Different: Cognitive Rhythm and Effort in Translation and Paraphrasing." Poznan Studies in Contemporary Linguistics 52 (2): 175–208.
Williams, Malcolm
2004. Translation Quality Assessment: An Argumentation-Centred Approach. Ottawa: University of Ottawa Press.

Address for correspondence

Christopher D. Mellinger

Department of Languages and Culture Studies

University of North Carolina at Charlotte

9201 University City Blvd.

CHARLOTTE, NC 28223

USA

[email protected]