On the order of processing of humorous tweets with visual and verbal elements
In this paper we examine the order in which multimodal tweets (text + image) are processed. Using an eye tracker, we recorded 36 participants reading 25 humorous tweets. Our results show that the processing of multimodal humorous tweets is in line with that of other multimodal texts: participants were significantly more likely to start from the image, followed by the caption. Other elements, such as the tweet’s “author” (the user name) or elements outside the tweet’s frame, attracted significantly less, and later, attention. Participants spent significantly more time gazing at the caption before moving on to another area. The longer participants spent looking at the tweet, the less predictable their gaze direction became.
Article outline
- 1. Introduction
- 2. Multimodal texts in the eye tracker
- 3. Attractors
- 4. Method
- 4.1 Participants
- 4.2 Stimuli
- 4.3 Procedure
- 5. Results
- 5.1 Descriptive analysis
- 5.2 Inferential analysis
- 6. Analysis of the results
- 6.1 Summary of multilevel regression modeling
- 6.2 Gaze duration
- 6.3 Conclusion
- 7. General conclusions
- Notes
- References