Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting improves when two sources of information are provided, the auditory speech together with the speaker's corresponding lip movements, compared with presenting the auditory speech alone. Although visible speech improved sentence recognition, there was no difference in performance between the two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. Visible speech may have failed to contribute to performance because the auditory signal was presented without noise (Massaro, 1998). This hypothesis should be tested in future work. It should also be investigated whether an effect of visible speech emerges in other contexts, in which visual information could provide cues to emotion, prosody, or syntax.