Vol. 127/128 (2000), pp. 37–51
Assessing Second Language Writing
The Relationship between Computerized Analysis and Rater Evaluation
This article examines the relationship between two kinds of methods used to assess the quality of second language writing: 1) objective computerized text analysis focusing on the linguistic features of written texts, and 2) subjective evaluation performed by human raters using a combination of holistic and analytical scoring procedures. In particular, it explores the potential and possible limitations of using computerized programs as research tools in second language writing research.
The written sample consisted of a total of 132 short essays written by ESL students enrolled in various academic programs at an American university. The first method used computerized programs to assess the written texts in terms of syntactic complexity, lexical complexity, and grammatical accuracy, whereas in the second method, two ESL raters evaluated the same sample of texts by first assigning a holistic score to each piece of writing, then applying an analytical scheme to assess linguistic features at the syntactic, lexical, and grammatical levels as well as textual and rhetorical features at the discourse level. A series of correlation analyses was performed using the scores obtained from these two kinds of assessment procedures at the corresponding levels. The results show that a significant correlation was consistently found between the two kinds of scores at the level of grammatical accuracy, yet no significant correlation was found in any of the other categories. The results also indicate a high level of internal consistency in the computerized analysis.
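The correlation step described above can be sketched as follows. This is a hypothetical illustration only, not the study's actual data or scoring scales: the arrays below are invented per-essay scores, pairing a computerized accuracy measure with a rater-assigned accuracy score for the same essays, and the Pearson coefficient is computed between them.

```python
import numpy as np

# Hypothetical data (NOT from the study): one score per essay for the
# same ten essays, measured two ways. The computerized measure is
# imagined as a proportion (e.g., error-free clauses per clause); the
# rater score as a 1-5 analytical accuracy rating.
computer_accuracy = np.array(
    [0.72, 0.55, 0.80, 0.61, 0.90, 0.47, 0.66, 0.74, 0.58, 0.83]
)
rater_accuracy = np.array(
    [4.0, 3.0, 4.5, 3.5, 5.0, 2.5, 3.5, 4.0, 3.0, 4.5]
)

# Pearson correlation between the two sets of accuracy scores;
# np.corrcoef returns the 2x2 correlation matrix, so take the
# off-diagonal entry.
r = np.corrcoef(computer_accuracy, rater_accuracy)[0, 1]
print(round(r, 3))
```

A significance test (the study reports significant correlations only for grammatical accuracy) would additionally require the p-value for r given the sample size, e.g. via `scipy.stats.pearsonr`.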