28. The Intrinsic Bimodality of Speech Communication and the Synthesis of Talking Faces
Cited by 6 other publications
Goncalves, Lucas & Carlos Busso. Robust Audiovisual Emotion Recognition: Aligning Modalities, Capturing Temporal Information, and Handling Missing Features. IEEE Transactions on Affective Computing, pp. 2156 ff.
[no author supplied]. Visual model structures and synchrony constraints for audio-visual speech recognition. IEEE Transactions on Audio, Speech and Language Processing, pp. 1082 ff.
Lee, Jong-Seok & Cheol Hoon Park. Robust Audio-Visual Speech Recognition Based on Late Integration. IEEE Transactions on Multimedia, pp. 767 ff.
Lee, Jong-Seok & Touradj Ebrahimi. Two-Level Bimodal Association for Audio-Visual Speech Recognition. In Advanced Concepts for Intelligent Vision Systems [Lecture Notes in Computer Science, 5807], pp. 133 ff.
Tao, Fei & Carlos Busso. 2018 IEEE International Conference on Multimedia and Expo (ICME), pp. 1 ff.
[no author supplied]. The Paradigm Shift to Multimodality in Contemporary Computer Interfaces [Synthesis Lectures on Human-Centered Informatics]
This list is based on CrossRef data as of 11 March 2023. Please note that it may not be complete. Sources presented here have been supplied by the respective publishers.
Any errors therein should be reported to them.