Chapter 13
Utterance unit annotation for the Japanese Sign Language Dialogue Corpus
Towards a method for detecting interactional boundaries in spontaneous sign language dialogue
This chapter defines ‘utterance units’ and describes their annotation in the Japanese Sign Language (JSL) dialogue corpus. It first focuses on how human annotators – native signers of JSL – identify and annotate utterance units, before reporting on part-of-speech (POS) tagging for JSL and semi-automatic annotation of utterance units. The utterance unit is an original concept for segmenting and annotating movement features in sign language dialogue, based on signers’ native sense. We postulate it as a fundamental interaction-specific unit for understanding interactional mechanisms (such as turn-taking) in sign language social interactions, from the perspectives of conversation analysis and multimodal interaction studies. We explain the differences between sentence units and utterance units, the construction and composition of the corpus, and the annotation scheme, before analyzing how JSL native annotators annotated the units. Finally, we show the application potential of this research through two case studies: the first explores POS annotations, and the second is a first attempt at automatic annotation using OpenPose software.
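To make the second case study concrete: OpenPose emits per-frame body keypoints as flat `[x, y, confidence]` triples (in its BODY_25 model, index 4 is the right wrist). One simple, hypothetical way such output could feed utterance-unit detection is to flag stretches where wrist motion stays near zero as candidate boundaries. The sketch below is illustrative only – the threshold, hold window, and boundary heuristic are assumptions, not the chapter's actual method or parameters.

```python
def wrist_speed(frames, wrist_index=4):
    """Per-frame displacement of one wrist.

    `frames` is a list of keypoint lists in OpenPose's flat
    [x0, y0, c0, x1, y1, c1, ...] layout; index 4 is the right
    wrist in the BODY_25 model.
    """
    speeds = []
    for prev, cur in zip(frames, frames[1:]):
        px, py = prev[3 * wrist_index], prev[3 * wrist_index + 1]
        cx, cy = cur[3 * wrist_index], cur[3 * wrist_index + 1]
        speeds.append(((cx - px) ** 2 + (cy - py) ** 2) ** 0.5)
    return speeds


def candidate_boundaries(speeds, threshold=2.0, min_hold=3):
    """Indices where speed stays below `threshold` for `min_hold` frames."""
    boundaries, run = [], 0
    for i, s in enumerate(speeds):
        run = run + 1 if s < threshold else 0
        if run == min_hold:
            boundaries.append(i - min_hold + 1)
    return boundaries


# Synthetic example: a burst of movement, then a hold (rest position).
frames = [[0.0] * 75 for _ in range(8)]
for t, x in enumerate([0, 10, 20, 30, 30, 30, 30, 30]):
    frames[t][12] = float(x)  # right-wrist x (index 4 -> offset 12)

print(candidate_boundaries(wrist_speed(frames)))  # -> [3]
```

In practice one would combine such motion cues with the manual annotations (gaze shift, mouthing, interlocutor actions) discussed in Sections 5 and 6, rather than rely on a single keypoint.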
Article outline
- 1. Introduction
- 2. Sentence versus utterance units
- 3. The Colloquial Corpus of Japanese Sign Language
- 3.1 Tasks, areas and participants
- 3.2 Filming services and video clip editing
- 4. Annotation of utterance units
- 4.1 Identifying an utterance unit
- 4.2 Annotation of utterance units on the individual level
- 4.3 Integration level
- 5. Trial annotation of utterance units
- 5.1 Quantitative analysis
- 5.2 Qualitative analysis of utterance units
- 5.2.1 Utterance unit including mouthing
- 5.2.2 Utterance unit segmented by gaze shift
- 5.2.3 Utterance unit bounded by interlocutor’s actions
- 6. The application potential of this research
- 6.1 Part of speech annotation for the utterance unit
- 6.1.1 Target data and annotation tool
- 6.1.2 POS annotation guidelines
- 6.1.3 Results, challenges and discussions
- 6.2 Automatic detection of utterance units
- 6.2.1 Detecting body-keypoint positions
- 6.2.2 Results
- 6.2.3 Discussion
- 7. Conclusions