Article published in: Implicit and Explicit Learning of Languages
Edited by Patrick Rebuschat
[Studies in Bilingualism 48] 2015
pp. 213–246
Implicit learning of non-adjacent dependencies
A graded, associative account
Language and other higher cognitive functions require structured sequential behavior, including non-adjacent relations. A fundamental question in cognitive science is what computational machinery can support both the learning and the representation of such non-adjacencies, and what properties of the input facilitate these processes. Learning experiments using miniature languages with adults and infants have demonstrated the impact of both high variability (Gómez, 2003) and nil variability (Onnis, Christiansen, Chater, & Gómez, 2003, submitted) of intermediate elements on the learning of non-adjacent dependencies. Intriguingly, current associative measures cannot explain this U-shaped curve. In this chapter, extensive computer simulations using five different connectionist architectures reveal that Simple Recurrent Networks (SRNs) best capture the behavioral data, by superimposing local and distant information over their internal ‘mental’ states. These results provide the first mechanistic account of implicit associative learning of non-adjacent dependencies modulated by the distributional properties of the input. We conclude that implicit statistical learning may be more powerful than previously anticipated.
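The setup the abstract describes can be illustrated with a toy sketch: a minimal Elman-style SRN trained by next-token prediction on strings of the form a_i X b_i, where the initial element predicts the final one across a variable middle element. Everything here is an assumption for illustration (the tiny vocabulary, hidden-layer size, one-step truncated backpropagation, and training regime are not the chapter's actual simulations, which compared five architectures):

```python
import numpy as np

# Hypothetical miniature language: a1 _ b1 and a2 _ b2, with variable
# middle elements x* (the non-adjacent dependency to be learned).
rng = np.random.default_rng(0)
vocab = ["a1", "a2", "x1", "x2", "x3", "b1", "b2", "#"]  # '#' = boundary
idx = {w: i for i, w in enumerate(vocab)}
V, H = len(vocab), 12  # vocabulary size, hidden units (arbitrary choices)

def one_hot(w):
    v = np.zeros(V)
    v[idx[w]] = 1.0
    return v

# SRN weights: input->hidden, context (previous hidden)->hidden, hidden->output.
Wxh = rng.normal(0, 0.5, (H, V))
Whh = rng.normal(0, 0.5, (H, H))
Why = rng.normal(0, 0.5, (V, H))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_sequence(seq, lr=0.1, train=True):
    """One pass over a token sequence predicting each next token.
    Returns summed cross-entropy loss; uses one-step truncated backprop."""
    global Wxh, Whh, Why
    h = np.zeros(H)
    loss = 0.0
    for t in range(len(seq) - 1):
        x, target = one_hot(seq[t]), idx[seq[t + 1]]
        h_prev = h
        h = np.tanh(Wxh @ x + Whh @ h_prev)  # hidden state carries context
        p = softmax(Why @ h)
        loss -= np.log(p[target] + 1e-12)
        if train:
            dy = p.copy()
            dy[target] -= 1.0                 # d(loss)/d(output pre-activation)
            dh = (Why.T @ dy) * (1.0 - h * h) # backprop through tanh
            Why -= lr * np.outer(dy, h)
            Wxh -= lr * np.outer(dh, x)
            Whh -= lr * np.outer(dh, h_prev)
    return loss

middles = ["x1", "x2", "x3"]
corpus = [["#", a, m, b] for (a, b) in [("a1", "b1"), ("a2", "b2")]
          for m in middles]

losses = []
for epoch in range(300):
    losses.append(sum(run_sequence(seq) for seq in corpus))

def predict_final(a, m):
    """Most probable continuation after seeing '# a m'."""
    h = np.zeros(H)
    for w in ["#", a, m]:
        h = np.tanh(Wxh @ one_hot(w) + Whh @ h)
    return vocab[int(np.argmax(softmax(Why @ h)))]

print(predict_final("a1", "x2"), predict_final("a2", "x3"))
```

If training succeeds, the network comes to prefer b1 after "a1 x?" and b2 after "a2 x?", even though the predictive element is never adjacent to its target; the hidden state superimposes the local (middle-element) and distant (initial-element) information, which is the mechanism the chapter attributes to SRNs.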
Keywords: artificial grammar learning, artificial language learning, computational models, connectionism, grammar, implicit learning, non-adjacent dependencies, sequence learning, Simple Recurrent Networks, statistical learning
Published online: 24 September 2015
Cited by 3 other publications
Andringa, Sible & Patrick Rebuschat
Deocampo, Joanne A., Tricia Z. King & Christopher M. Conway
Katan, Pesia, Shani Kahta, Ayelet Sasson & Rachel Schiff
This list is based on CrossRef data as of 3 April 2021. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.
Allen, J., & Seidenberg, M.S.
Botvinick, M., & Plaut, D.C.
Brakel, P., & Frank, S.L.
Chater, N., & Conkey, P.
(1992) Finding linguistic structure with recurrent neural networks. In Proceedings of the 14th Annual Conference of the Cognitive Science Society (pp. 402–407). Hillsdale, NJ: Psychology Press.
Christiansen, M.H., Allen, J., & Seidenberg, M.S.
Christiansen, M.H., & Chater, N.
Christiansen, M.H., Conway, C.M., & Curtin, S.
Christiansen, M.H., & MacDonald, M.C.
Cleeremans, A., Servan-Schreiber, D., & McClelland, J.L.
Destrebecqz, A., & Cleeremans, A.
Cottrell, G.W., & Plunkett, K.
Dell, G.S., Juliano, C., & Govindjee, A.
Dulany, D.E., Carlson, R.A., & Dewey, G.I.
Estes, K., Evans, J., Alibali, M., & Saffran, J.
Farkaš, I., & Crocker, M.W.
Frank, M.C., Goldwater, S., Griffiths, T.L., & Tenenbaum, J.B.
Frinken, V., Fischer, A., Manmatha, R., & Bunke, H.
Gaskell, M.G., Hare, M., & Marslen-Wilson, W.D.
Gibson, F.P., Fichman, M., & Plaut, D.C.
Harm, M.W., & Seidenberg, M.S.
Hinoshita, W., Arie, H., Tani, J., Okuno, H.G., & Ogata, T.
Johnstone, T. & Shanks, D.R.
Jordan, M.I.
(1986) Attractor dynamics and parallelism in a connectionist sequential machine. In Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates.
Kinder, A. & Shanks, D.R.
Kirov, C., & Frank, R.
Maraqa, M., Al-Zboun, F., Dhyabat, M., & Zitar, R.A.
Maskara, A., & Noetzel, A.
(1992) Forced simple recurrent neural network and grammatical inference. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (pp. 420–425). Hillsdale, NJ: Lawrence Erlbaum Associates.
Miikkulainen, R., & Mayberry III, M.R.
Misyak, J.B., & Christiansen, M.H.
Misyak, J.B., Christiansen, M.H. & Tomblin, J.B.
Moss, H.E., Hare, M.L., Day, P., & Tyler, L.K.
Munakata, Y., McClelland, J.L., & Siegler, R.S.
Onnis, L., Christiansen, M.H., Chater, N., & Gómez, R.
(submitted). Statistical learning of non-adjacent relations. Submitted manuscript.
Onnis, L., Monaghan, P., Christiansen, M.H., & Chater, N.
(2004) Variability is the spice of learning, and a crucial ingredient for detecting and generalizing in nonadjacent dependencies. In Proceedings of the 26th Annual Conference of the Cognitive Science Society (pp. 1047–1052). Mahwah, NJ: Lawrence Erlbaum Associates.
Pacton, S., Perruchet, P., Fayol, M., & Cleeremans, A.
Perruchet, P., & Pacteau, C.
Perruchet, P., & Pacton, S.
Plaut, D.C., & Kello, C.T.
Redington, M., & Chater, N.
Rohde, D.L.T., & Plaut, D.C.
Saffran, J.R., Aslin, R.N., & Newport, E.L.
Servan-Schreiber, D., Cleeremans, A. & McClelland, J.L.
Si, Y., Xu, J., Zhang, Z., Pan, J., & Yan, Y.
(2012) An improved Mandarin voice input system using recurrent neural network language model. In Proceedings of the Eighth International Conference on Computational Intelligence and Security (CIS) (pp. 242–246). IEEE.
Socher, R., Manning, C.D., & Ng, A.Y.
(2010) Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop.
Sutskever, I., Martens, J., & Hinton, G.
(2011) Generating text with recurrent neural networks. In Proceedings of the 2011 International Conference on Machine Learning (ICML-2011).
Tabor, W.
(2011) Recursion and recursion-like structure in ensembles of neural elements. In H. Sayama, A. Minai, D. Braha, & Y. Bar-Yam (Eds.), Unifying themes in complex systems: Proceedings of the VIII International Conference on Complex Systems (pp. 1494–1508). Berlin: Springer.
Takac, M., Benuskova, L., & Knott, A.