28. The Intrinsic Bimodality of Speech Communication and the Synthesis of Talking Faces
Cited by 6 other publications
Goncalves, Lucas & Carlos Busso. 2022. Robust Audiovisual Emotion Recognition: Aligning Modalities, Capturing Temporal Information, and Handling Missing Features. IEEE Transactions on Affective Computing 13:4, pp. 2156 ff.

Hazen, T.J. 2006. Visual model structures and synchrony constraints for audio-visual speech recognition. IEEE Transactions on Audio, Speech and Language Processing 14:3, pp. 1082 ff.

Lee, Jong-Seok & Cheol Hoon Park. 2008. Robust Audio-Visual Speech Recognition Based on Late Integration. IEEE Transactions on Multimedia 10:5, pp. 767 ff.

Lee, Jong-Seok & Touradj Ebrahimi. 2009. Two-Level Bimodal Association for Audio-Visual Speech Recognition. In Advanced Concepts for Intelligent Vision Systems [Lecture Notes in Computer Science, 5807], pp. 133 ff.

Tao, Fei & Carlos Busso. 2018. In 2018 IEEE International Conference on Multimedia and Expo (ICME), pp. 1 ff.

[no author supplied]. 2015. The Paradigm Shift to Multimodality in Contemporary Computer Interfaces [Synthesis Lectures on Human-Centered Informatics].

This list is based on CrossRef data as of 11 March 2023. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers. Any errors therein should be reported to them.