Ikuma Adachi | Yerkes National Primate Research Center, Emory University
The importance of learning about and categorizing social objects and events has become widely acknowledged over the past couple of decades. Although findings from field studies suggest that non-human animals have sophisticated abilities to recognize social objects, experimental evidence on this issue remains relatively scarce. Some studies have revealed animals' excellent skills at discriminating visual and auditory social stimuli. However, because of perceptual resemblances among the stimuli, it remains unclear whether animals recognize these objects through conceptual mechanisms that are independent of the stimuli's perceptual characteristics. At the same time, the question of whether their concepts support the transfer of information from one sensory modality to another has received little attention. This paper advocates approaching the cross-modal aspect of concepts as a new framework for resolving these problems, and introduces the latest studies on cross-modal representations of social objects in non-human animals.