Article published in:
New Questions for the Next Decade, edited by Gonia Jarema, Gary Libben and Victor Kuperman
[The Mental Lexicon 11:3] 2016
pp. 375–400
Why we need to investigate casual speech to truly understand language production, processing and the mental lexicon
Benjamin V. Tucker | University of Alberta
Mirjam Ernestus | Radboud University / Max Planck Institute for Psycholinguistics
The majority of studies addressing psycholinguistic questions focus on speech produced and processed in a careful, laboratory speech style. This ‘careful’ speech is very different from the speech that listeners encounter in casual conversations. This article argues that research on casual speech is necessary to test the validity of conclusions based on careful speech. Moreover, research on casual speech yields new insights and questions about the processes underlying communication and about the mental lexicon that cannot be revealed by research using careful speech. This article first places research on casual speech in its historical perspective. It then provides many examples of how casual speech differs from careful speech and shows that these differences may have important implications for psycholinguistic theories. Subsequently, the article discusses the challenges that research on casual speech faces, which stem from the high variability of this speech style, its necessarily casual context, and the fact that casual speech is connected speech. We also present opportunities for research on casual speech, mostly in the form of new experimental methods that facilitate research on connected speech. However, real progress can only be made if these new methods are combined with advanced (still to be developed) statistical techniques.
Keywords: casual speech, conversational speech, experimental paradigms, pronunciation variability, statistical analyses
Published online: 31 December 2016
https://doi.org/10.1075/ml.11.3.03tuc