Article published in: Gaze in Human-Robot Communication
Edited by Frank Broz, Hagen Lehmann, Bilge Mutlu and Yukiko Nakano
[Benjamins Current Topics 81] 2015
pp. 47–70
Interactions between a quiz robot and multiple participants
Focusing on speech, gaze and bodily conduct in Japanese and English speakers
This paper reports on a quiz robot experiment exploring similarities and differences in how human participants respond, through speech, gaze, and bodily conduct, to a robot’s speech, gaze, and bodily conduct across two languages. The experiment involved three-person groups of Japanese-speaking and English-speaking participants who stood facing the robot and a projection screen displaying pictures related to the robot’s questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, English speakers nodded more frequently than Japanese speakers in human-robot interaction (HRI). These findings suggest that coordinating the robot’s verbal and non-verbal actions around TRPs, key words, and deictic words and expressions is important for facilitating HRI, irrespective of participants’ native language.
Keywords: conversation analysis, coordination of verbal and non-verbal actions, human-robot interaction (HRI), robot gaze comparison between English and Japanese, transition relevance place (TRP)
Published online: 16 December 2015