Article published in:
Gaze in Human-Robot Communication
Edited by Frank Broz, Hagen Lehmann, Bilge Mutlu and Yukiko Nakano
[Benjamins Current Topics 81] 2015
pp. 71–98
Cooperative gazing behaviors in human multi-robot interaction
When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared among all the participating robot agents. At each moment, the user’s behavior is determined not only by the actions of the robot they are directly gazing at, but is also shaped by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the actions performed by the robots that do not hold the user’s direct attention. In this paper, we are interested in how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented, in which the robots’ behaviors were triggered by the participant’s attentional shifts in real time. Results showed that the human participants were highly sensitive to the different cooperative gazing behaviors performed by the robot agents.
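The abstract does not describe the control loop of the gaze-contingent platform in detail. As a rough illustration only, a real-time trigger of this kind might resemble the sketch below: on every gaze sample, the robot the participant is fixating receives the directly-attended behavior while every other robot receives the cooperative gaze behavior for the current condition. The robot names, the read_gaze_target() tracker call, and the condition labels are hypothetical placeholders, not the authors' implementation.

import random
import time

# Hypothetical robot identifiers; the study used multiple robot agents,
# but these names and the API below are illustrative only.
ROBOTS = ["robot_A", "robot_B"]


def read_gaze_target():
    """Stand-in for a real-time eye tracker: returns the robot the
    participant is currently fixating, or None for off-robot gaze."""
    return random.choice(ROBOTS + [None])


def attended_behavior(robot):
    # Behavior of the robot that currently holds the user's direct attention.
    print(f"{robot}: directly attended behavior (e.g., mutual gaze)")


def cooperative_behavior(robot, condition):
    # 'condition' stands in for the different cooperative gaze behaviors
    # compared in the study; the labels here are placeholders.
    print(f"{robot}: cooperative gaze behavior -> {condition}")


def run_trial(condition, duration_s=5.0, sample_hz=10):
    """Gaze-contingent loop: each sample assigns the attended behavior to
    the fixated robot and the cooperative behavior to all other robots."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        target = read_gaze_target()
        for robot in ROBOTS:
            if robot == target:
                attended_behavior(robot)
            else:
                cooperative_behavior(robot, condition)
        time.sleep(1.0 / sample_hz)


if __name__ == "__main__":
    run_trial(condition="gaze_follow", duration_s=1.0)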
Keywords: embodied conversational agent, eye gaze cue, human-robot interaction, multiparty interaction, multi-robot interaction
Published online: 16 December 2015
https://doi.org/10.1075/bct.81.05xu