Statistical learning of social signals and its implications for the social brain hypothesis
Hjalmar K. Turesson | Princeton University, USA
Asif A. Ghazanfar | Princeton University, USA
The social brain hypothesis implies that humans and other primates evolved “modules” for representing social knowledge. Alternatively, no such cognitive specializations are needed because social knowledge is already present in the world — we can simply monitor the dynamics of social interactions. Given the latter idea, what mechanism could account for coalition formation? We propose that statistical learning can provide a mechanism for fast and implicit learning of social signals. Using human participants, we compared learning of social signals with arbitrary signals. We found that learning of social signals was no better than learning of arbitrary signals. While coupling faces and voices led to parallel learning, the same was true for arbitrary shapes and sounds. However, coupling versus uncoupling social signals with arbitrary signals revealed that faces and voices are treated with perceptual priority. Overall, our data suggest that statistical learning is a viable domain-general mechanism for learning social group structure.

Keywords: social brain; embodied cognition; distributed cognition; situated cognition; multisensory; audiovisual speech; crossmodal; multimodal
Published online: 06 December 2011
https://doi.org/10.1075/is.12.3.02tur
Cited by
Sherman, Laura J., Katherine Rice & Jude Cassidy
Stolk, Arjen, Matthijs L. Noordzij, Inge Volman, Lennart Verhagen, Sebastiaan Overeem, Gijs van Elswijk, Bas Bloem, Peter Hagoort & Ivan Toni
ten Brink, Maia & Asif A. Ghazanfar
Wittig, Roman M., Catherine Crockford, Kevin E. Langergraber & Klaus Zuberbühler
This list is based on CrossRef data as of 10 January 2021. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.