Vol. 6:1 (2023), pp. 1–28
Assessing receptive vocabulary using state‑of‑the‑art natural language processing techniques
Semantic embedding approaches commonly used in natural language processing, such as transformer models, have rarely been used to examine L2 lexical knowledge. Importantly, their performance has not been contrasted with more traditional annotation approaches to lexical knowledge. This study used NLP techniques related to lexical annotations and semantic embedding approaches to model the receptive vocabulary of L2 learners based on their lexical production during a writing task. The goal of the study was to examine the strengths and weaknesses of both approaches in understanding L2 lexical knowledge. Findings indicate that transformer approaches based on semantic embeddings outperform linguistic annotations and Word2vec models in predicting L2 learners’ vocabulary scores. The findings support the strength and accuracy of semantic embedding approaches, as well as their generalizability across tasks, when compared to linguistic feature models. Limitations of semantic embedding approaches, especially their interpretability, are discussed.
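To make the embedding-based modelling described above concrete, the sketch below shows one plausible pipeline: each learner essay is converted into a single mean-pooled transformer embedding, and those embeddings are then used as predictors of the learner's receptive vocabulary score. This is a minimal illustration only; the checkpoint (`bert-base-uncased`), the pooling strategy, the ridge regression, and the toy essays and scores are assumptions for demonstration, not the study's reported pipeline.

```python
# Hypothetical sketch: predicting receptive vocabulary scores from
# transformer-based embeddings of learner essays.
# Model name, pooling choice, regression setup, and data are illustrative.

import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint, not necessarily the study's
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed_essay(text: str) -> np.ndarray:
    """Return one mean-pooled contextual embedding for an essay."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state       # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)        # ignore padding positions
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return pooled.squeeze(0).numpy()

# Toy stand-ins for learner essays and receptive vocabulary test totals.
essays = [
    "I like to read books about animals and nature.",
    "The economy of my country depends on agriculture and tourism.",
]
vocab_scores = np.array([41.0, 68.0])

X = np.vstack([embed_essay(e) for e in essays])
reg = Ridge(alpha=1.0).fit(X, vocab_scores)
print(reg.predict(X))
```

In practice the fitted model would be evaluated with cross-validation against held-out learners, which is the kind of comparison the abstract refers to when contrasting embedding models with lexical annotation models.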
Article outline
- 1. Introduction
- 2. Literature review
- 2.1 Lexical knowledge
- 2.2 Measuring L2 lexical knowledge
- 3. Current study
- 4. Method
- 4.1 Corpus
- 4.2 Receptive vocabulary knowledge
- 4.3 Lexical annotations
- Age of acquisition
- Concreteness
- Word familiarity
- Word meaningfulness
- Lexical response times
- Word associations
- Phonological distance
- Word frequency
- Collocation strength
- Contextual distinctiveness
- 4.4 Semantic embedding
- Doc2vec
- Transformers
- 4.5 Statistical analysis
- 5. Results
- 5.1 Lexical annotations model
- 5.2 Doc2vec model
- 5.3 BERT model
- 5.4 Comparisons between models
- 6. Discussion
- 7. Conclusion
- Notes
- References