Article published in: Expressing and Describing Surprise, edited by Agnès Celle and Laure Lansari. [Review of Cognitive Linguistics 13:2] 2015, pp. 461–477.
References
Ahmad, K., Budin, G., Devitt, A., Glucksberg, S., Heyer, G., Musacchio, M.T., Pazienza, M.T., Rogers, M., Vogel, C., & Wilks, Y. (2011). Affective computing and sentiment analysis: Emotion, metaphor and terminology. New York: Springer Science+Business Media.
Anderson, K., André, E., Baur, T., Bernardini, S., Chollet, M., Chryssafidou, E., Damian, I., Ennis, C., Egges, A., Gebhard, P., Jones, H., Ochs, M., Pelachaud, C., Porayska-Pomsta, K., Rizzo, P., & Sabouret, N. (2013). The TARDIS framework: Intelligent virtual agents for social coaching in job interviews. In D. Reidsma, H. Katayose, & A. Nijholt (Eds.), Advances in computer entertainment (pp. 476–491). New York: Springer.
Aubergé, V., Audibert, N., & Rilliard, A. (2004). Acoustic morphology of expressive speech: What about contours? In Speech Prosody 2004 international conference. Nara, Japan.
Audibert, N., Vincent, D., Aubergé, V., & Rosec, O. (2006). Expressive speech synthesis: Evaluation of a voice quality centered coder on the different acoustic dimensions. In Proceedings of Speech Prosody.
Baccianella, S., Esuli, A., & Sebastiani, F. (2010). SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Language Resources and Evaluation Conference (LREC). Valletta, Malta.
Bertrand, R., Blache, P., Espesser, R., Ferré, G., Meunier, C., Priego-Valverde, B., & Rauzy, S. (2008). Le CID - Corpus of Interactional Data: Annotation et exploitation multimodale de parole conversationnelle. Traitement Automatique des Langues, 49(3), 1–30.
Bloom, K., Garg, N., & Argamon, S. (2007). Extracting appraisal expressions. In C. Sidner (Ed.), Proceedings of HLT-NAACL (pp. 308–315). Morristown, NJ: Association for Computational Linguistics.
Bohus, D., & Horvitz, E. (2014). Managing human-robot engagement with forecasts and... um... hesitations. In Proceedings of the 16th international conference on multimodal interaction (pp. 2–9). New York: ACM Press.
Callejas, Z., Ravenet, B., Ochs, M., & Pelachaud, C. (2014). A computational model of social attitudes for a virtual recruiter. In Proceedings of the 2014 international conference on autonomous agents and multi-agent systems (pp. 93–100). International Foundation for Autonomous Agents and Multiagent Systems.
Campano, S., Durand, J., & Clavel, C. (2014a). Comparative analysis of verbal alignment in human-human and human-agent interactions. In Proceedings of the ninth international conference on language resources and evaluation (LREC-2014), Reykjavik, Iceland, May 26–31, 2014 (pp. 4415–4422). European Language Resources Association (ELRA).
Campano, S., Glas, N., Langlet, C., Clavel, C., & Pelachaud, C. (2014b). Alignement par production d’hétéro-répétitions chez un ACA. In Workshop Affect, Compagnon Artificiel, Interaction.
Campano, S., Langlet, C., Glas, N., Clavel, C., & Pelachaud, C. (2015). Enhancing user engagement through verbal alignment by the agent. Manuscript in preparation.
Cassell, J. (2000). Embodied conversational agents. Cambridge, MA: MIT Press.
Celle, A., & Lansari, L. (2014). ‘I’m surprised’ / ‘Are you surprised?’: Surprise as an argumentation tool in verbal interaction. In P. Blumenthal, I. Novakova, & D. Siepmann (Eds.), Les émotions dans le discours / Emotions in discourse (pp. 267–277). Bern: Peter Lang.
Clavel, C., Adda, G., Cailliau, F., Garnier-Rizet, M., Cavet, A., Chapuis, G., Courcinous, S., Danesi, C., Daquo, A.-L., Deldossi, M., Guillemin-Lanne, S., Seizou, M., & Suignard, P. (2013b). Spontaneous speech and opinion detection: Mining call-centre transcripts. Language Resources and Evaluation, 47, 1089–1125.
Clavel, C., Pelachaud, C., & Ochs, M. (2013a). User’s sentiment analysis in face-to-face human-agent interactions – prospects. In Workshop on affective social signal computing, satellite of Interspeech. Association for Computational Linguistics.
Clavel, C., & Richard, G. (2013). Recognition of acoustic emotion. In C. Pelachaud (Ed.), Emotion-oriented systems (pp. 139–167). London: John Wiley & Sons.
Clavel, C., Vasilescu, I., Devillers, L., Richard, G., & Ehrette, T. (2008). Fear-type emotions recognition for future audio-based surveillance systems. Speech Communication, 50, 487–503.
Devillers, L., & Vidrascu, L. (2006). Real-life emotions detection with lexical and paralinguistic cues on human-human call center dialogs. In Interspeech 2006.
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 45–60). New York: John Wiley & Sons.
Ekman, P., & Friesen, W.V. (2003). Unmasking the face: A guide to recognizing emotions from facial clues. Los Altos, CA: ISHK.
Fant, G. (1960). Acoustic theory of speech production. The Hague: Mouton.
Fontaine, J.R., Scherer, K.R., Roesch, E.B., & Ellsworth, P.C. (2007). The world of emotions is not two-dimensional. Psychological Science, 18(12), 1050–1057.
Ishizuka, M. (2012). Textual affect sensing and affective communication. In IEEE 11th international conference on cognitive informatics and cognitive computing (pp. 2–3). Kyoto.
Izard, C.E. (1971). The face of emotion. New York: Appleton-Century-Crofts.
Karpouzis, K., André, E., & Batliner, A. (2010). Emotion-aware natural interaction. Advances in Human-Computer Interaction.
Kumar, R., Rosé, C.P., & Litman, D.J. (2006). Identification of confusion and surprise in spoken dialog using prosodic features. In Interspeech 2006.
Langlet, C., & Clavel, C. (2014a). Modélisation des questions de l’agent pour l’analyse des affects, jugements et appréciations de l’utilisateur dans les interactions humain-agent. In Conférence sur le Traitement Automatique du Langage Naturel (TALN).
Langlet, C., & Clavel, C. (2014b). Modelling user’s attitudinal reactions to the agent utterances: Focus on the verbal content. In 5th international workshop on corpora for research on emotion, sentiment & social signals (ES3 2014), Reykjavik, Iceland.
Martin, J.R., & White, P.R.R. (2005). The language of evaluation: Appraisal in English. Basingstoke/New York: Palgrave Macmillan.
McKeown, G., Valstar, M., Cowie, R., Pantic, M., & Schroder, M. (2011). The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Transactions on Affective Computing, 3(1), 5–17.
Munezero, M., Montero, C.S., Sutinen, E., & Pajunen, J. (2014). Are they different?: Affect, feeling, emotion, sentiment, and opinion detection in text. IEEE Transactions on Affective Computing, 5(2), 101–111.
Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2010a). User study on AffectIM, an avatar-based Instant Messaging system employing rule-based affect sensing from text. International Journal of Human-Computer Studies, 68(7), 432–450.
Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2010b). Recognition of affect, judgment, and appreciation in text. In Proceedings of the 23rd international conference on computational linguistics (pp. 806–814). Association for Computational Linguistics.
Niewiadomski, R., Obaid, M., Bevacqua, E., Looser, J., Anh, L.Q., & Pelachaud, C. (2011). Cross-media agent platform. In Proceedings of the 16th international conference on 3D web technology (pp. 11–19). ACM.
Osgood, C., May, W.H., & Miron, M. (1975). Cross-cultural universals of affective meaning. Urbana: University of Illinois Press.
Osherenko, A., & André, E. (2009). Differentiated semantic analysis in lexical affect sensing. In Conference on affective computing and intelligent interaction (ACII) and workshops (pp. 1–6).
Păiş, A.L., Moga, S.A., & Buiu, C. (2010). Emotions and robot artists: State-of-the-art and research challenges. Petroleum-Gas University of Ploiesti Bulletin, Mathematics-Informatics-Physics Series, 62(1), 26–40.
Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on empirical methods in natural language processing (EMNLP) (pp. 79–86).
Pennebaker, J.W., Francis, M.E., & Booth, R.J. (2001). Linguistic inquiry and word count: LIWC2001. Hillsdale, NJ: Erlbaum.
Perikos, I., & Hatzilygeroudis, I. (2013). Recognizing emotion presence in natural language sentences. Communications in Computer and Information Science, 384, 30–39.
Picard, R. (1997). Affective computing. Cambridge, MA: MIT Press.
Pickering, M.J., & Garrod, S. (2004). Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27(2), 169–190.
Plutchik, R. (1984). Emotions: A general psychoevolutionary theory. In K.R. Scherer & P. Ekman (Eds.), Approaches to emotion (pp. 197–219). Hillsdale, NJ: Lawrence Erlbaum.
Poggi, I. (2007). Mind, hands, face and body: A goal and belief view of multimodal communication. Berlin: Weidler.
Reisenzein, R., Hudlicka, E., Dastani, M., Gratch, J., Hindriks, K.V., Lorini, E., & Meyer, J.-J.C. (2013). Computational modeling of emotion: Toward improving the inter- and intradisciplinary exchange. IEEE Transactions on Affective Computing, 4(3), 246–266.
de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., & De Carolis, B. (2003). From Greta’s mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies, 59(1), 81–118.
Scherer, K.R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1–2), 227–256.
Scherer, K.R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729.
Schuller, B., Valstar, M., Eyben, F., Cowie, R., & Pantic, M. (2012). AVEC 2012: The continuous audio/visual emotion challenge. In Proceedings of the 14th ACM international conference on multimodal interaction (pp. 449–456). ACM.
Smith, C., Crook, N., Dobnik, S., & Charlton, D. (2011). Interaction strategies for an affective conversational agent. Presence: Teleoperators and Virtual Environments, 20(5), 395–411.
Svennevig, J. (2004). Other-repetition as display of hearing, understanding and emotional stance. Discourse Studies, 6(4), 489–516.
Taboada, M., & Grieve, J. (2004). Analyzing appraisal automatically. In Proceedings of AAAI spring symposium on exploring attitude and affect in text (AAAI Technical Report SS-04-07), Stanford University, CA (pp. 158–161). AAAI Press.
Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D’Errico, F., & Schröder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69–87.
Wrede, B., & Shriberg, E. (2003). Relationship between dialogue acts and hot spots in meetings. In IEEE workshop on automatic speech recognition and understanding (ASRU’03) (pp. 180–185). IEEE.
Yildirim, S., Lee, C.M., Lee, S., Potamianos, A., & Narayanan, S. (2005). Detecting politeness and frustration state of a child in a conversational computer game. In Interspeech (pp. 2209–2212).