Article published in:
Gaze in human-robot communication
Guest-edited by Frank Broz, Hagen Lehmann, Bilge Mutlu and Yukiko Nakano
[Interaction Studies 14:3] 2013, pp. 390–418
Cooperative gazing behaviors in human multi-robot interaction
When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared among all of the participating robot agents. At any given moment, the user’s behavior is determined not only by the actions of the robot they are directly gazing at, but also by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the action performed by the robots that are not capturing the user’s direct attention. In this paper, we examine how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented, in which the robots’ behaviors were triggered by the participant’s attentional shifts in real time. Results showed that the human participants were highly sensitive to the different cooperative gaze behaviors performed by the robot agents.

Keywords: human-robot interaction; multi-robot interaction; multiparty interaction; eye gaze cue; embodied conversational agent
Published online: 10 June 2014
https://doi.org/10.1075/is.14.3.05xu