Article published in: Interaction Studies, Vol. 17:2 (2016), pp. 180–210.

References
Admoni, H., Datsikas, C., & Scassellati, B. (2014). Speech and gaze conflicts in collaborative human-robot interactions. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (CogSci 2014).
Ahrenholz, B. (2007). Verweise mit Demonstrativa im gesprochenen Deutsch: Grammatik, Zweitspracherwerb und Deutsch als Fremdsprache (Vol. 17). Walter de Gruyter.
Almor, A. (1999). Noun-phrase anaphora and focus: The informational load hypothesis. Psychological Review, 106(4), 748–765.
Arnold, J. E., Eisenband, J. G., Brown-Schmidt, S., & Trueswell, J. C. (2000). The rapid use of gender information: Evidence of the time course of pronoun resolution from eyetracking. Cognition, 76(1), B13–B26.
Arts, A., Maes, A., Noordman, L., & Jansen, C. (2011). Overspecification facilitates object identification. Journal of Pragmatics, 43(1), 361–374.
Benthall, J., Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. RAIN, (12), 7.
Böckler, A., Knoblich, G., & Sebanz, N. (2011). Observing shared attention modulates gaze following. Cognition, 120(2), 292–298.
Brennan, S. E. (1996). Lexical entrainment in spontaneous dialog. In Proceedings of the International Symposium on Spoken Dialog (pp. 41–44).
Brennan, S. E. (2000). Processes that shape conversation and their implications for computational linguistics. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (pp. 1–11).
Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B., & Zelinsky, G. J. (2008). Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition, 106(3), 1465–1477.
Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(6), 1482–1493.
Chai, J. Y., Prasov, Z., & Qu, S. (2006). Cognitive principles in robust multimodal interpretation. Journal of Artificial Intelligence Research (JAIR), 27, 55–83.
Chen, Y., Schermerhorn, P., & Scheutz, M. (2012). Adaptive eye gaze patterns in interactions with human and artificial agents. ACM Transactions on Interactive Intelligent Systems, 1(2), 13.
Clark, H. H. (2003). Pointing and placing. In Pointing: Where language, culture, and cognition meet (pp. 243–268).
Clark, H. H., & Krych, M. A. (2004). Speaking while monitoring addressees for understanding. Journal of Memory and Language, 50(1), 62–81.
Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22(1), 1–39.
Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47(2), 292–314.
Dale, R., & Reiter, E. (1995). Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19(2), 233–263.
Fang, R., Doering, M., & Chai, J. Y. (2015). Embodied collaborative referring expression generation in situated human-robot interaction. In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 271–278).
Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007). Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 691–721.
Furnas, G., Landauer, T., Gomez, L., & Dumais, S. (1984). Statistical semantics: Analysis of the potential performance of keyword information systems. In Human factors in computer systems (pp. 187–212).
Furnas, G., Landauer, T., Gomez, L., & Dumais, S. (1987). The vocabulary problem in human-system communication. Communications of the ACM, 30(11), 964–971.
Gatt, A., Krahmer, E., van Deemter, K., & van Gompel, R. P. (2014). Models and empirical data for the production of referring expressions. Language, Cognition and Neuroscience, 29(8), 899–911.
Goudbeek, M., & Krahmer, E. (2012). Alignment in interactive reference production: Content planning, modifier ordering, and referential overspecification. Topics in Cognitive Science, 4(2), 269–289.
Grice, H. (1975). Logic and conversation. In Syntax and semantics: Speech acts (pp. 41–58). New York: Academic Press.
Griffin, Z. M. (2001). Gaze durations during speech reflect word selection and phonological encoding. Cognition, 82(1), B1–B14.
Grosz, B. J., Weinstein, S., & Joshi, A. K. (1995). Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2), 203–225.
Gundel, J. K. (2010). Reference and accessibility from a givenness hierarchy perspective. International Review of Pragmatics, 2(2), 148–168.
Gundel, J. K., Hedberg, N., & Zacharski, R. (1993). Cognitive status and the form of referring expressions in discourse. Language, 69(2), 274–307.
Gundel, J. K., Hedberg, N., & Zacharski, R. (2012). Underspecification of cognitive status in reference production: Some empirical predictions. Topics in Cognitive Science, 4(2), 249–268.
Gundel, J. K., Hedberg, N., Zacharski, R., Mulkern, A., Custis, T., Swierzbin, B., … Watters, S. (2006). Coding protocol for statuses on the givenness hierarchy. Unpublished manuscript.
Hanna, J. E., & Brennan, S. E. (2007). Speakers’ eye gaze disambiguates referring expressions early during face-to-face conversation. Journal of Memory and Language, 57(4), 596–615.
Hanna, J. E., & Tanenhaus, M. K. (2004). Pragmatic effects on reference resolution in a collaborative task: Evidence from eye movements. Cognitive Science, 28(1), 105–115.
Huang, C.-M., & Mutlu, B. (2014). Learning-based modeling of multimodal behaviors for humanlike robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (pp. 57–61).
Hüwel, S., Wrede, B., & Sagerer, G. (2006). Robust speech understanding for multimodal human-robot communication. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 45–50).
Kehler, A. (2000). Cognitive status and form of reference in multimodal human-computer interaction. In Proceedings of the 14th AAAI Conference on Artificial Intelligence (pp. 685–690).
Kelleher, J. D., & Kruijff, G.-J. M. (2006). Incremental generation of spatial referring expressions in situated dialog. In Proceedings of the 21st International Conference on Computational Linguistics (pp. 1041–1048).
Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge University Press.
Knoeferle, P., & Crocker, M. W. (2006). The coordinated interplay of scene, utterance, and world knowledge: evidence from eye tracking. Cognitive Science, 30(3), 481–529.
Kowadlo, G., Ye, P., & Zukerman, I. (2010). Influence of gestural salience on the interpretation of spoken requests. In Proceedings of Interspeech (pp. 2034–2037).
Krahmer, E., & Theune, M. (2002). Efficient context-sensitive generation of referring expressions. In Information sharing: Reference and presupposition in language generation and interpretation. Stanford, CA: CSLI Publications.
Kranstedt, A., Lücking, A., Pfeiffer, T., Rieser, H., & Wachsmuth, I. (2006). Deictic object reference in task-oriented dialogue. Trends in Linguistic Studies and Monographs, 166, 155.
Kruijff, G.-J. M., Lison, P., Benjamin, T., Jacobsson, H., Zender, H., Kruijff-Korbayová, I., & Hawes, N. (2010). Situated dialogue processing for human-robot interaction. In Cognitive systems (pp. 311–364). Springer.
Lambrecht, K. (1996). Information structure and sentence form: Topic, focus, and the mental representations of discourse referents (Vol. 71). Cambridge University Press.
Lemaignan, S., Ros, R., Sisbot, E. A., Alami, R., & Beetz, M. (2012). Grounding the interaction: Anchoring situated discourse in everyday human-robot interaction. International Journal of Social Robotics, 4(2), 181–199.
Lozano, S. C., & Tversky, B. (2006). Communicative gestures facilitate problem solving for both communicators and recipients. Journal of Memory and Language, 55(1), 47–63.
McNeill, D. (1992). Hand and mind: What gestures reveal about thought. University of Chicago Press.
McNeill, D. (2008). Gesture and thought. University of Chicago Press.
Pechmann, T. (1989). Incremental speech production and referential overspecification. Linguistics, 27(1), 89–110.
Pitsch, K., Lohan, K. S., Rohlfing, K., Saunders, J., Nehaniv, C. L., & Wrede, B. (2012). Better be reactive at the beginning, implications of the first seconds of an encounter for the tutoring style in human-robot-interaction. In Proceedings of RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (pp. 974–981).
Prasov, Z., & Chai, J. Y. (2008). What’s in a gaze?: The role of eye-gaze in reference resolution in multimodal conversational interfaces. In Proceedings of the 13th International Conference on Intelligent User Interfaces (pp. 20–29).
Reiter, E., & Dale, R. (2000). Building natural language generation systems. Cambridge University Press.
Scheutz, M., Briggs, G., Cantrell, R., Krause, E., Williams, T., & Veale, R. (2013). Novel mechanisms for natural human-robot interactions in the DIARC architecture. In Proceedings of the AAAI Workshop on Intelligent Robotic Systems.
Schmid, H. (1995). Improvements in part-of-speech tagging with an application to German. In Proceedings of the ACL SIGDAT Workshop.
Staudte, M., & Crocker, M. W. (2009a). Producing and resolving multi-modal referring expressions in human-robot interaction. In Proceedings of the Pre-CogSci Workshop on Production of Referring Expressions.
Staudte, M., & Crocker, M. W. (2009b). Visual attention in spoken human-robot interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (pp. 77–84).
Streeck, J. (1993). Gesture as communication I: Its coordination with gaze and speech. Communication Monographs, 60(4), 275–299.
Tomasello, M., & Akhtar, N. (1995). Two-year-olds use pragmatic cues to differentiate reference to objects and actions. Cognitive Development, 10(2), 201–224.
Van Deemter, K., Gatt, A., van Gompel, R. P., & Krahmer, E. (2012). Toward a computational psycholinguistics of reference production. Topics in Cognitive Science, 4(2), 166–183.
Van der Sluis, I., & Krahmer, E. (2007). Generating multimodal references. Discourse Processes, 44(3), 145–174.
Vollmer, A.-L., Lohan, K. S., Fischer, K., Nagai, Y., Pitsch, K., Fritsch, J., … Wrede, B. (2009). People modify their tutoring behavior in robot-directed interaction for action learning. In Proceedings of the 8th International Conference on Development and Learning (pp. 1–6).
Williams, T., Acharya, S., Schreitter, S., & Scheutz, M. (2016). Situated open world reference resolution for human-robot dialogue. In Proceedings of the IEEE/ACM Conference on Human-Robot Interaction (forthcoming).
Williams, T., Schreitter, S., Acharya, S., & Scheutz, M. (2015). Towards situated open world reference resolution. In Proceedings of the 2015 AAAI Fall Symposium on AI and HRI.