Chapter published in: How the Brain Got Language – Towards a New Road Map
Edited by Michael A. Arbib
[Benjamins Current Topics 112] 2020
► pp. 289–317
From actions to events
Communicating through language and gesture
In this paper, I argue that an important component of the language-ready brain is the ability to recognize and conceptualize events. By ‘event’, I mean any situation or activity in the world or in our mental life that we find salient enough to individuate as a thought or word. While this may sound either trivial or not unique to humans, I hope to show that abstracting events and their participants away from the embodied flow of experience is a uniquely human characteristic. This ability is enabled, I will argue, by two critical competencies that act as scaffolds for language-ready thought in the prehuman brain. The first, as argued by Arbib (2006, 2012, 2016) and others, is a sophisticated system of gesture production and understanding in prehumans, which provided a template for schema-like sequencing and slot-filling of information units. The second involves the integration of multiple modalities of expression in the communicative act, in particular the alignment of co-gestural speech and co-speech gesture. With such computational facilities, action-based gestures can be abstracted away from their associated objects and become full event representations. This view supports the Mirror System Hypothesis (MSH) argument for the emergence of more complex linguistic expressions from initially holophrastic units. In particular, actions can be thought of as protoverbs, which through this process are abstracted to full event descriptions, i.e., verbs.
Keywords: gesture, language, action, affordances, events
Published online: 11 August 2020
Abner, N., Cooperrider, K., & Goldin-Meadow, S.
Arbib, M. A.
Arbib, M. A., Liebal, K., & Pika, S.
Arbib, M. A. & Rizzolatti, G.
Armstrong, D. F., Stokoe, W. C., & Wilcox, S. E.
Asher, N., & Lascarides, A.
Bohn, M., Call, J., & Tomasello, M.
Clark, H. H., Brennan, S. E., et al.
Clark, H. H., Schreuder, R., & Buttrick, S.
Cooper, R., & Ginzburg, J.
Corballis, M. C.
Deacon, T. W.
Engle, R. A., & Clark, H. H.
Fogassi, L., Coudé, G., & Ferrari, P. F.
Gillespie-Lynch, K., Greenfield, P. M., Lyn, H., & Savage-Rumbaugh, S.
Gibson, J. J.
Glenberg, A. M. & Gallese, V.
Goldin-Meadow, S., & Alibali, M. W.
Grice, H. P.
Hauser, M. D., Chomsky, N., & Fitch, W. T.
Hsiao, K.-Y., Tellex, S., Vosoughi, S., Kubat, R., & Roy, D.
Iverson, J. M., Capirci, O., & Caselli, M. C.
Iverson, J. M., & Goldin-Meadow, S.
Kamp, H., & Reyle, U.
Kendon, A.
(1995) Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of pragmatics, 23(3), pp. 247–279.
Lascarides, A., & Stone, M.
(2006) Formal semantics for iconic gesture. In Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (BRANDIAL), pp. 64–71.
Pika, S., & Mitani, J. C.
Pustejovsky, J., & Krishnaswamy, N.
Pustejovsky, J., & Moszkowicz, J. L.
Rizzolatti, G., & Arbib, M.
Searle, J. R.
Stout, D., & Hecht, E. E.
Strickland, B., Geraci, C., Chemla, E., Schlenker, P., Kelepir, M., & Pfau, R.
Volterra, V., Caselli, M. C., Capirci, O., & Pizzuto, E.
von Uexküll, J.
Wang, I., Ben Fraj, M., Narayana, P., Patil, D., Mulay, G., Bangar, R., Beveridge, R., Draper, B., & Ruiz, J.
(2017) EGGNOG: A continuous, multimodal data set of naturally occurring gestures with ground truth labels. In Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition.
Whitehead, A. N.