Part of
Gaze in Human-Robot Communication
Edited by Frank Broz, Hagen Lehmann, Bilge Mutlu and Yukiko Nakano
[Benjamins Current Topics 81] 2015
pp. 71–98
References
Allopenna, P.D., Magnuson, J.S., & Tanenhaus, M.K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419–439.
Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge, England: Cambridge University Press.
Balch, T. (2002). Taxonomies of multirobot task and reward. In Robot teams: From diversity to polymorphism (pp. 23–35). Natick, MA: A K Peters/CRC Press.
Bales, R.F. (1950). A set of categories for the analysis of small group interaction. American Sociological Review, 15(2), 257–263.
Bard, K.A., & Leavens, D.A. (2008). Socio-emotional factors in the development of joint attention in human and ape infants. In Learning from animals? Examining the nature of human uniqueness (pp. 89–104). London: Psychology Press.
Bennewitz, M., Faber, F., Joho, D., Schreiber, M., & Behnke, S. (2005). Towards a humanoid museum guide robot that interacts with multiple persons. In Proceedings of the 5th IEEE-RAS International Conference on Humanoid Robots (pp. 418–423). IEEE.
Bratman, M.E. (1992). Shared cooperative activity. The Philosophical Review, 101(2), 327–341.
Cakmak, M., Srinivasa, S.S., Lee, M.K., Kiesler, S., & Forlizzi, J. (2011). Using spatial and temporal contrast for fluent robot-human hand-overs. In Proceedings of the 6th International Conference on Human-Robot Interaction (pp. 489–496). ACM.
Cao, Y.U., Fukunaga, A.S., & Kahng, A. (1997). Cooperative mobile robotics: Antecedents and directions. Autonomous Robots, 4, 1–23.
Carpenter, M., Nagell, K., Tomasello, M., Butterworth, G., & Moore, C. (1998). Social cognition, joint attention, and communicative competence from 9 to 15 months of age. Monographs of the Society for Research in Child Development, 63(4), 1–143.
Casper, J., & Murphy, R.R. (2003). Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 33(3), 367–385.
Clark, H.H., & Carlson, T.B. (1982). Hearers and speech acts. Language, 58, 332–373.
Dautenhahn, K. (2007). Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 679–704.
Demiris, Y. (2007). Prediction of intent in robotics and multi-agent systems. Cognitive Processing, 8(3), 151–158.
Dudek, G., Jenkin, M., & Milios, E. (2002). A taxonomy of multi-robot systems. In T. Balch & L.E. Parker (Eds.), Robot teams (pp. 3–22). Natick, MA: A K Peters.
Duffy, B.R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3), 177–190.
Farinelli, A., Iocchi, L., & Nardi, D. (2004). Multirobot systems: A classification focused on coordination. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34(5), 2015–2028.
Frischen, A., Bayliss, A.P., & Tipper, S.P. (2007). Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 694–724.
Goffman, E. (1981). Forms of talk. Philadelphia, PA: University of Pennsylvania Press.
Goodrich, M.A., & Schultz, A.C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1(3), 203–275.
Goodwin, C. (2007). Interactive footing. In E. Holt & R. Clift (Eds.), Reporting talk: Reported speech in interaction (Studies in Interactional Sociolinguistics 24, pp. 16–46). Cambridge: Cambridge University Press.
Gouaillier, D., Hugel, V., Blazevic, P., Kilner, C., Monceaux, J., Lafourcade, P., Marnier, B., Serre, J., & Maisonnier, B. (2009). Mechatronic design of NAO humanoid. In IEEE International Conference on Robotics and Automation (pp. 769–774). IEEE.
Griffin, Z.M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274–279.
Imai, M., Ono, T., & Ishiguro, H. (2003). Physical relation and expression: Joint attention for human-robot interaction. IEEE Transactions on Industrial Electronics, 50(4), 636–643.
Isaacs, E.A., & Tang, J.C. (1994). What video can and cannot do for collaboration: A case study. Multimedia Systems, 2(2), 63–73.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22–63.
Kirchner, N., Alempijevic, A., & Dissanayake, G. (2011). Nonverbal robot-group interaction using an imitated gaze cue. In Proceedings of the 6th International Conference on Human-Robot Interaction (pp. 497–504). ACM.
Kitano, H., Tadokoro, S., Noda, I., Matsubara, H., Takahashi, T., Shinjou, A., & Shimada, S. (1999). RoboCup Rescue: Search and rescue in large-scale disasters as a domain for autonomous agents research. In IEEE International Conference on Systems, Man, and Cybernetics (Vol. 6, pp. 739–743). IEEE.
Knapp, M., Hall, J., & Horgan, T. (2013). Nonverbal communication in human interaction (8th ed.). Boston: Wadsworth/Cengage Learning.
Le Meur, O., Ninassi, A., Le Callet, P., & Barba, D. (2010). Overt visual attention for free-viewing and quality assessment tasks: Impact of the regions of interest on a video quality metric. Signal Processing: Image Communication, 25(7), 547–558.
Matsusaka, Y., Fujie, S., & Kobayashi, T. (2001). Modeling of conversational strategy for the robot participating in the group conversation. In Proceedings of the European Conference on Speech Communication and Technology (Vol. 1, pp. 2173–2176).
McLurkin, J., Lynch, A., Rixner, S., Barr, T., Chou, A., Foster, K., & Bilstein, S. (2010). A low-cost multi-robot system for research, teaching, and outreach. In Proceedings of the Tenth International Symposium on Distributed Autonomous Robotic Systems (pp. 597–609). Berlin, Heidelberg: Springer.
Meltzoff, A.N., Kuhl, P.K., Movellan, J., & Sejnowski, T.J. (2009). Foundations for a new science of learning. Science, 325(5938), 284–288.
Meyer, A.S., Sleiderink, A.M., & Levelt, W.J. (1998). Viewing and naming objects: Eye movements during noun phrase production. Cognition, 66(2), B25–B33.
Modi, P.J., Shen, W.M., Tambe, M., & Yokoo, M. (2005). ADOPT: Asynchronous distributed constraint optimization with quality guarantees. Artificial Intelligence, 161(1), 149–180.
Muhl, C., & Nagai, Y. (2007). Does disturbance discourage people from communicating with a robot? In 16th IEEE International Symposium on Robot and Human Interactive Communication (pp. 1137–1142). IEEE.
Mutlu, B. (2009). Designing gaze behavior for humanlike robots. Doctoral dissertation, Carnegie Mellon University.
Mutlu, B., Kanda, T., Forlizzi, J., Hodgins, J., & Ishiguro, H. (2012). Conversational gaze mechanisms for humanlike robots. ACM Transactions on Interactive Intelligent Systems, 1(2), Article 12.
Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H., & Hagita, N. (2009). Footing in human-robot conversations: How robots might shape participant roles using gaze cues. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (pp. 61–68).
Mutlu, B., Yamaoka, F., Kanda, T., Ishiguro, H., & Hagita, N. (2009). Nonverbal leakage in robots: Communication of intentions through seemingly unintentional behavior. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (pp. 69–76).
Nakano, Y.I., & Nishida, T. (2005). Awareness of perceived world and conversational engagement by conversational agents. In Proceedings of the AISB 2005 Symposium on Conversational Informatics for Supporting Social Intelligence and Interaction: Situational and Environmental Information Enforcing Involvement (pp. 128–134).
Parker, L.E. (1998). ALLIANCE: An architecture for fault tolerant multirobot cooperation. IEEE Transactions on Robotics and Automation, 14(2), 220–240.
Parker, L.E. (2008). Distributed intelligence: Overview of the field and its application in multi-robot systems. Journal of Physical Agents, 2(1), 5–14.
Posner, M.I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), 3–25.
Rehm, M., & André, E. (2005). Where do they look? Gaze behaviors of multiple users interacting with an embodied conversational agent. In T. Panayiotopoulos, J. Gratch, R. Aylett, D. Ballin, P. Olivier, & T. Rist (Eds.), Proceedings of the 5th International Workshop on Intelligent Virtual Agents (pp. 241–252). Berlin, Heidelberg: Springer.
Rich, C., Ponsler, B., Holroyd, A., & Sidner, C.L. (2010). Recognizing engagement in human-robot interaction. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 375–382).
Rich, C., & Sidner, C.L. (2009). Robots and avatars as hosts, advisors, companions, and jesters. AI Magazine, 30(1), 29–41.
Rubenstein, M., Ahler, C., & Nagpal, R. (2012). Kilobot: A low cost scalable robot system for collective behaviors. In IEEE International Conference on Robotics and Automation (pp. 3293–3298). IEEE.
Sacks, H., Schegloff, E.A., & Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language, 50(4), 696–735.
Searle, J.R. (1976). A classification of illocutionary acts. Language in Society, 5(1), 1–23.
Smith, L.B., Yu, C., & Pereira, A.F. (2010). Not your mother’s view: The dynamics of toddler visual experience. Developmental Science, 14(1), 9–17.
Staudte, M., & Crocker, M.W. (2009). Visual attention in spoken human-robot interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (pp. 77–84).
Tapus, A., Mataric, M.J., & Scassellati, B. (2007). Socially assistive robotics [Grand Challenges in Robotics]. IEEE Robotics & Automation Magazine, 14(1), 35–42.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–690.
Trafton, J.G., Bugajska, M.D., Fransen, B.R., & Ratwani, R.M. (2008). Integrating vision and audition within a cognitive architecture to track conversations. In Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (pp. 201–208).
Vertegaal, R., Slagter, R., van der Veer, G., & Nijholt, A. (2001). Eye gaze patterns in conversations: There is more to conversational agents than meets the eyes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 301–308). ACM.
Vertegaal, R., van der Veer, G., & Vons, H. (2000). Effects of gaze on multiparty mediated communication. In Proceedings of Graphics Interface (pp. 95–102). Montreal, Canada: Morgan Kaufmann Publishers.
Yoshikawa, Y., Shinozawa, K., Ishiguro, H., Hagita, N., & Miyamoto, T. (2006). Responsive robot gaze to interaction partner. In Proceedings of Robotics: Science and Systems.
Yu, C., Ballard, D.H., & Aslin, R.N. (2005). The role of embodied intention in early lexical acquisition. Cognitive Science, 29(6), 961–1005.
Yu, C., Schermerhorn, P., & Scheutz, M. (2012). Adaptive eye gaze patterns in interactions with human and artificial agents. ACM Transactions on Interactive Intelligent Systems, 1(2), Article 13.
Yu, C., Scheutz, M., & Schermerhorn, P. (2010a). Investigating multimodal real-time patterns of joint attention in an HRI word learning task. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (pp. 309–316). IEEE.
Yu, C., & Smith, L.B. (2012). Embodied attention and word learning by toddlers. Cognition, 125(2), 244–262.
Yu, C., Smith, L.B., Shen, H., Pereira, A.F., & Smith, T. (2009). Active information selection: Visual attention through the hands. IEEE Transactions on Autonomous Mental Development, 1(2), 141–151.
Yu, C., Smith, T., Hidaka, S., Scheutz, M., & Smith, L. (2010b). A data-driven paradigm to understand multimodal communication in human-human and human-robot interaction. In P. Cohen, N. Adams, & M. Berthold (Eds.), Advances in Intelligent Data Analysis IX (pp. 232–244). Berlin, Heidelberg: Springer.
Zhao, Q., Yuan, X., Tu, D., & Lu, J. (2012). Multi-initialized states referred work parameter calibration for gaze tracking human-robot interaction. International Journal of Advanced Robotic Systems, 9(75).