Interactions between a quiz robot and multiple participants
Focusing on speech, gaze and bodily conduct in Japanese and English speakers
This paper reports on a quiz robot experiment in which we explore similarities and differences in human participants' speech, gaze, and bodily conduct in responding to a robot's speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese- and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot's questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that the frequency of English speakers' head nodding was higher than that of Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot's verbal and non-verbal actions surrounding TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants' native language.

Keywords: coordination of verbal and non-verbal actions; robot gaze comparison between English and Japanese; human-robot interaction (HRI); transition relevance place (TRP); conversation analysis
Cited by two other publications
Mlynář, Jakub, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel & Saul Albert. 2024. AI in situated action: a scoping review of ethnomethodological and conversation analytic studies. AI & SOCIETY.
Pitsch, Karola, Marc Relieu & Julia Velkovska. 2020. Répondre aux questions d'un robot [Answering a robot's questions]. Réseaux N° 220-221:2, pp. 113 ff.
This list is based on CrossRef data as of 12 September 2024. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.