References
Advanced Realtime Tracking GmbH
(2016) ART Advanced Realtime Tracking Company Website. WWW: [URL], last checked October 2016.
Deak, G. O., Fasel, I., & Movellan, J.
(2001) The emergence of shared attention: Using robots to test developmental theories. In Proceedings of the 1st International Workshop on Epigenetic Robotics (Lund University Cognitive Studies, volume 85).
Duchowski, A.
(2007) Eye Tracking Methodology: Theory and Practice. London: Springer.
Hobson, R. P.
(2005) What puts the jointness into joint attention? In Joint Attention: Communication and Other Minds: Issues in Philosophy and Psychology, page 185. Oxford: Oxford University Press.
Holthaus, P., Pitsch, K., & Wachsmuth, S.
(2011) How can I help? Spatial attention strategies for a receptionist robot. International Journal of Social Robotics, 3, 383–393.
Kassner, M. P. & Patera, W. R.
(2012) PUPIL: Constructing the space of visual attention. Master's thesis, Massachusetts Institute of Technology.
Kendon, A.
(1990) Conducting interaction: Patterns of behavior in focused encounters, volume 7. Cambridge: Cambridge University Press.
(2004) Gesture: Visible action as utterance. Cambridge: Cambridge University Press.
Kopp, S., Jung, B., Leßmann, N., & Wachsmuth, I.
(2003) Max – a multimodal assistant in virtual reality construction. KI – Künstliche Intelligenz, 4(03), 11–17.
Kranstedt, A., Lücking, A., Pfeiffer, T., Rieser, H., & Staudacher, M.
(2006a) Measuring and reconstructing pointing in visual contexts. In Schlangen, D. & Fernandez, R. (Eds.), Proceedings of Brandial 2006 – The 10th Workshop on the Semantics and Pragmatics of Dialogue, pages 82–89. Potsdam: Universitätsverlag Potsdam.
Kranstedt, A., Lücking, A., Pfeiffer, T., Rieser, H., & Wachsmuth, I.
(2006b) Deictic object reference in task-oriented dialogue. In Rickheit, G. & Wachsmuth, I. (Eds.), Situated Communication, pages 155–207. Berlin: Mouton de Gruyter.
(2006c) Deixis: How to determine demonstrated objects using a pointing cone. In Gibet, S., Courty, N., & Kamp, J.-F. (Eds.), Gesture Workshop 2005, LNAI 3881, pages 300–311. Berlin/Heidelberg: Springer-Verlag GmbH.
Kühnlein, P. & Stegmann, J.
(2003) Empirical issues in deictic gesture: Referring to objects in simple identification tasks. Technical Report, SFB 360, Bielefeld University.
Landis, J. R. & Koch, G. G.
(1977) The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
Lücking, A., Pfeiffer, T., & Rieser, H.
(2015) Pointing and reference reconsidered. Journal of Pragmatics, 77, 56–79.
McNeill, D.
(1992) Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.
(2006) Gesture, gaze, and ground. Lecture Notes in Computer Science, 3869, 1.
Microsoft
(2016) Kinect for Windows Website. WWW: [URL], last checked October 2016.
NaturalPoint, Inc.
(2016) OptiTrack Motion Capture Systems Company Website. WWW: [URL], last checked October 2016.
Pfeiffer, T.
(2010) Understanding Multimodal Deixis with Gaze and Gesture in Conversational Interfaces. Doctoral dissertation (Dr. rer. nat.), Bielefeld University, Bielefeld, Germany.
(2011) Interaction between speech and gesture: Strategies for pointing to distant objects. In E. Efthimiou & G. Kouroupetroglou (Eds.), Gestures in Embodied Communication and Human-Computer Interaction, 9th International Gesture Workshop, GW 2011, pages 109–112. Athens: National and Kapodistrian University of Athens.
(2012) Using virtual reality technology in linguistic research. In Proceedings of the IEEE Virtual Reality 2012, pages 83–84, Orange County, CA, USA. IEEE.
(2013a) Documentation of gestures with data gloves. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill & S. Teßendorf (Eds.), Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science, volume 1 (pp. 868–879). Berlin: Mouton de Gruyter.
(2013b) Documentation of gestures with motion capture. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill & S. Teßendorf (Eds.), Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science, volume 1 (pp. 857–868). Berlin: Mouton de Gruyter.
Pfeiffer, T., Hofmann, F., Hahn, F., Rieser, H., & Röpke, I.
(2013) Gesture semantics reconstruction based on motion capturing and complex event processing: A circular shape example. In M. Eskenazi, M. Strube, B. Di Eugenio & J. D. Williams (Eds.), Proceedings of the SIGDIAL 2013 Conference (pp. 270–279). Metz: Association for Computational Linguistics.
Pfeiffer, T., Kranstedt, A., & Lücking, A.
(2006) Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer [Speech-gesture experiments with IADE, the Interactive Augmented Data Explorer]. In S. Müller & G. Zachmann (Eds.), Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR [Third Workshop on Virtual and Augmented Reality of the GI Special Interest Group VR/AR], pages 61–72. Aachen: Shaker.
Pfeiffer, T. & Renner, P.
(2014) EyeSee3D: A low-cost approach for analyzing mobile 3D eye-tracking data using computer vision and augmented reality technology. In Proceedings of the Symposium on Eye Tracking Research and Applications, ETRA '14, pp. 195–202. New York: ACM.
Pfeiffer, T., Renner, P., & Pfeiffer-Leßmann, N.
(2016) EyeSee3D 2.0: Model-based real-time analysis of mobile eye-tracking in static and dynamic three-dimensional scenes. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, pp. 189–196. New York: ACM Press.
Renner, P., Pfeiffer, T., & Pfeiffer-Leßmann, N.
(2015) Automatic analysis of a mobile dual eye-tracking study on joint attention. Abstracts of the 18th European Conference on Eye Movements, page 116.
Renner, P., Pfeiffer, T., & Wachsmuth, I.
(2014) Spatial references with gaze and pointing in shared space of humans and robots. In C. Freksa, B. Nebel, M. Hegarty & T. Barkowsky (Eds.), Spatial Cognition IX, volume 8684 of Lecture Notes in Computer Science (pp. 121–136). Springer.
Rickheit, G. & Strohner, H.
(1993) Grundlagen der kognitiven Sprachverarbeitung: Modelle, Methoden, Ergebnisse [Foundations of cognitive language processing: Models, methods, results]. Francke.
SensoMotoric Instruments GmbH
(2016) SensoMotoric Instruments GmbH Company Website. WWW: [URL], last checked October 2016.
Tobii AB
(2016) Tobii Company Website. WWW: [URL], last checked October 2016.
Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H.
(2005) Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–691.
Van Dijk, T. A. & Kintsch, W.
(1983) Strategies of discourse comprehension. Academic Press.
Vicon Motion Systems Ltd
(2016) VICON Company Website. WWW: [URL], last checked October 2016.
Viola, P. & Jones, M. J.
(2004) Robust real-time face detection. International Journal of Computer Vision, 57(2), 137–154.
Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H.
(2006) ELAN: A professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006).