This review describes the primary strategies used to express changes in conceptual viewpoint (Parrill, 2012) in co-speech gesture and sign language. We describe the use of the face, eye gaze, body orientation and hands to represent these differences in viewpoint, focusing particularly on McNeill’s (1992) division of iconic gestures into observer versus character viewpoint gestures, and on the situations in which they occur. We also draw a parallel between the strategies used in co-speech gesture and those used in different signed languages (see Cormier, Quinto-Pozos, Sevcikova, & Schembri, 2012), and suggest possibilities for further research in this area.