Iconic gestures serve as primes for both auditory and visual word forms
Previous studies using cross-modal semantic priming have found that iconic gestures prime target words that are semantically related to the gestures. In the present study, two analogous experiments examined this priming effect by presenting primes and targets in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task in which target words (e.g., “push”) had to be discriminated from pseudowords, primed by overlapping iconic gestures that were either semantically related to the words (e.g., moving both hands forward) or unrelated. Experiment 2 was identical except that both gestures and words were presented visually. The grammatical category of the words (noun vs. verb) was also manipulated. In both experiments, words related to the gestures were recognized faster and with fewer errors than unrelated words, and this effect was similar for nouns and verbs.
Article outline
- Experiment 1
  - Method
    - Participants
    - Materials and design
    - Procedure
  - Results and discussion
- Experiment 2
  - Method
    - Participants
    - Materials and design
    - Procedure
  - Results and discussion
- General discussion
- References