The impact of visible lip movements on silent pauses in simultaneous interpreting
Simultaneous interpreting requires interpreters to listen to a source text while producing the target text in a
second language. In addition, the interpreter needs to process various types of visual input, which may further increase the
already high cognitive load. A study with 14 students of interpreting was conducted to investigate the impact of a speaker’s
visible lip movements on cognitive load in simultaneous interpreting by analysing the duration of silent pauses in the target
texts. Background noise masking the source speech was introduced as a control condition for cognitive load. Silent pause durations
were shorter when interpreters saw the speaker’s lip movements, which indicates that interpreters benefitted from visual input.
Furthermore, silent pause durations were longer with noise, which suggests that comparative silent pause durations can indicate
changes in cognitive load.
Article outline
- 1. Introduction
- 2. Theoretical background
- 2.1 Visual input in simultaneous interpreting
- 2.2 Manipulating speech perception with background noise
- 2.3 Disfluencies as an indicator of cognitive load
- 3. Empirical study
- 3.1 Participants
- 3.2 Material
- 3.3 Procedure
- 3.4 Data analysis
- 3.4.1 Subjective reports
- 3.4.2 Silent pause durations
- 3.5 Results
- 3.5.1 Subjective reports
- 3.5.2 Silence durations
- 4. Discussion
- 4.1 Limitations of the study
- 4.2 Potential and limitations of silent pauses in the target text as cognitive load indicator
- 5. Conclusion
- Acknowledgements
- Notes
- References

References (65)
Ahrens, B. (2004). Prosodie beim Simultandolmetschen. Frankfurt am Main: Peter Lang.
Albl-Mikasa, M. (2010). Global English and English as a lingua franca (ELF): Implications for the interpreting profession. Trans-kom 3 (3), 126–148.
Bates, D., Mächler, M., Bolker, B. & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67 (1).
Benoit, C., Mohammadi, T. & Kandel, S. (1994). Effects of phonetic context on audio-visual intelligibility of French. Journal of Speech and Hearing Research 37 (5), 1195–1203.
Bernstein, L. E., Auer, E. T. & Takayanagi, S. (2004). Auditory speech detection in noise enhanced by lipreading. Speech Communication 44 (1–4), 5–18.
Brancazio, L., Best, C. T. & Fowler, C. A. (2006). Visual influences on perception of speech and nonspeech vocal-tract events. Language and Speech 49 (1), 21–53.
Bühler, H. (1985). Conference interpreting: A multichannel communication phenomenon. Meta 30 (1), 49–54.
Bühler, H. (1986). Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters. Multilingua 5 (4), 231–235.
Cecot, M. (2001). Pauses in simultaneous interpretation: A contrastive analysis of professional interpreters’ performances. The Interpreters’ Newsletter 11, 63–85.
Chiaro, D. & Nocella, G. (2004). Interpreters’ perception of linguistic and non-linguistic factors affecting quality: A survey through the World Wide Web. Meta 49 (2), 278–293.
Chmiel, A., Szarkowska, A., Koržinek, D., Lijewska, A., Dutka, Ł., Brocki, Ł. & Marasek, K. (2017). Ear–voice span and pauses in intra- and interlingual respeaking: An exploratory study into temporal aspects of the respeaking process. Applied Psycholinguistics 38 (5), 1201–1227.
Davies, M. (2008). Word frequency data. Retrieved from The Corpus of Contemporary American English (COCA): [URL] (accessed 19 March 2021).
European Commission. (2009a). United Airlines rewards fittest people. Retrieved from Speech Repository: [URL] (accessed 4 February 2021).
European Commission. (2009b). Disenchantment at work. Retrieved from Speech Repository: [URL] (accessed 4 February 2021).
European Commission. (2012a). Demographic shift in Europe. Retrieved from Speech Repository: [URL] (accessed 4 February 2021).
European Commission. (2012b). Greece in the doldrums. Retrieved from Speech Repository: [URL] (accessed 4 February 2021).
Fox, J. & Weisberg, S. (2018). Visualizing fit and lack of fit in complex regression models with predictor effect plots and partial residuals. Journal of Statistical Software 87 (9), 1–27.
Gerver, D. (1974). The effects of noise on the performance of simultaneous interpreters: Accuracy of performance. Acta Psychologica 38 (3), 159–167.
Gerver, D. (1975). A psychological approach to simultaneous interpretation. Meta 20 (2), 119–128.
Gerver, D. (2002). The effects of source language presentation rate on the performance of simultaneous conference interpreters. In F. Pöchhacker & M. Shlesinger (Eds.), The interpreting studies reader. London/New York: Routledge, 53–66.
Gieshoff, A. C. (2018). The impact of audio-visual speech on work-load in simultaneous interpreting. Doctoral thesis, University of Mainz.
Goldman-Eisler, F. (1958). Speech analysis and mental processes. Language and Speech 1 (1), 59–75.
Goldman-Eisler, F. (1961). The distribution of pause durations in speech. Language and Speech 4 (4), 232–237.
Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous speech. London/New York: Academic Press.
Goldman-Eisler, F. (2002). Segmentation of input in simultaneous translation. In F. Pöchhacker & M. Shlesinger (Eds.), The interpreting studies reader. London/New York: Routledge, 69–76.
Lin, I., Chang, F. A. & Kuo, F. (2013). The impact of non-native accented English on rendition accuracy in simultaneous interpreting. Translation & Interpreting 5 (2), 30–44.
Kramer, S. E., Kapteyn, T. S., Festen, J. M. & Kuik, D. J. (1997). Assessing aspects of auditory handicap by means of pupil dilation. Audiology 36, 155–164.
Lin, Y., Lv, Q. & Liang, J. (2018). Predicting fluency with language proficiency, working memory, and directionality in simultaneous interpreting. Frontiers in Psychology 9: 1543.
Lo, S. & Andrews, S. (2015). To transform or not to transform: Using generalized linear mixed models to analyse reaction time data. Frontiers in Psychology 6: 1171.
Macleod, A. & Summerfield, Q. (1987). Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology 21 (2), 131–141.
Massaro, D. W. & Cohen, M. M. (1999). Speech perception in perceivers with hearing loss: Synergy of multiple modalities. Journal of Speech, Language, and Hearing Research 42 (1), 21–41.
Mattys, S. L. & Wiget, L. (2011). Effects of cognitive load on speech recognition. Journal of Memory and Language 65 (2), 145–160.
Mattys, S. L., Brooks, J. & Cooke, M. (2009). Recognizing speech under a processing load: Dissociating energetic from informational factors. Cognitive Psychology 59 (1), 203–243.
Mizuno, A. (2005). Process model for simultaneous interpreting and working memory. Meta 50 (2), 739–752.
Moser, B. (1978). Simultaneous interpretation: A hypothetical model and its practical application. In D. Gerver & H. W. Sinaiko (Eds.), Language interpretation and communication. New York: Plenum Press, 353–368.
Moser-Mercer, B. (2003). Remote interpreting: Assessment of human factors and performance parameters. Communicate! AIIC Webzine (Summer 2003). [URL] (accessed 16 March 2020).
Moser-Mercer, B. (2005). Remote interpreting: The crucial role of presence. VALS-ASLA 81, 73–97.
Peirce, J. W. (2007). PsychoPy – Psychophysics software in Python. Journal of Neuroscience Methods 162 (1/2), 8–13.
Pöchhacker, F. (2005). From operation to action: Process-orientation in interpreting studies. Meta 50 (2), 682–695.
Poyatos, F. (1984). The multichannel reality of discourse: Language-paralanguage-kinesics and the totality of communicative systems. Language Sciences 6 (2), 307–337.
Rackow, J. (2013). Dolmetschen als Kommunikation: Verbale und nonverbale Informationsverarbeitung im Dolmetschprozess. In D. Andres, M. Behr & M. Dingfelder Stone (Eds.), Dolmetschmodelle – erfasst, erläutert, erweitert. Frankfurt am Main: Peter Lang, 129–152.
Rennert, S. (2008). Visual input in simultaneous interpreting. Meta 53 (1), 204–217.
Rennert, S. (2019). Redeflüssigkeit und Dolmetschqualität: Wirkung und Bewertung. Tübingen: Narr.
Seeber, K. G. (2017). Multimodal processing in simultaneous interpreting. In J. W. Schwieter & A. Ferreira (Eds.), The handbook of translation and cognition. Hoboken: John Wiley & Sons, 461–475.
Seubert, S. (2017). Simultaneous interpreting is a whole-person process: Zur Verarbeitung visueller Informationen beim Simultandolmetschen. In M. Behr & S. Seubert (Eds.), Education is a whole-person process: Von ganzheitlicher Lehre, Dolmetschforschung und anderen Dingen. Berlin: Frank & Timme, 271–303.
Thomas, S. M. & Jordan, T. R. (2004). Contributions of oral and extraoral facial movement to visual and audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance 30 (5), 873–888.
Tissi, B. (2000). Silent pauses and disfluencies in simultaneous interpretation: A descriptive analysis. The Interpreters’ Newsletter 10, 103–128.
Vatikiotis-Bateson, E., Eigsti, I.-M., Yano, S. & Munhall, K. G. (1998). Eye movement of perceivers during audiovisual speech perception. Perception & Psychophysics 60 (6), 926–940.
von Kriegstein, K., Dogan, Ö., Grüter, M., Giraud, A.-L., Kell, C. A., Grüter, T., Kleinschmidt, A. & Kiebel, S. J. (2008). Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences 105 (18), 6747–6752.
Wickham, H. (2009). ggplot2: Elegant graphics for data analysis. New York: Springer.
Zwischenberger, C. (2010). Quality criteria in simultaneous interpreting: An international vs. a national view. The Interpreters’ Newsletter 15, 127–142.
Cited by (2)
Shang, Xiaoqi & Guixia Xie (2024). Investigating the impact of visual access on trainee interpreters’ simultaneous interpreting performance. The Interpreter and Translator Trainer, pp. 1 ff.
This list is based on CrossRef data as of 12 September 2024. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers; any errors therein should be reported to them.