Rater variation in the assessment of speech acts
Abstract
This study addresses variability among native-speaker raters who evaluated the pragmatic performance of learners of English as a foreign language. Using a five-point rating scale, four native English speakers of mixed cultural backgrounds (one African American, one Asian American, and two Australians) assessed the appropriateness of two types of speech acts (requests and opinions) produced by 48 Japanese EFL students. To explore the norms and reasoning behind the raters' assessment practices, individual introspective verbal interviews were conducted. Eight students' speech act productions (64 speech acts in total) were selected randomly, and the raters were asked to rate each speech act and then explain their rating decisions. The interview data revealed both similarities and differences in the raters' use of pragmatic norms and social rules when evaluating appropriateness.