Part of
Recent Advances in Natural Language Processing III: Selected papers from RANLP 2003
Edited by Nicolas Nicolov, Kalina Bontcheva, Galia Angelova and Ruslan Mitkov
[Current Issues in Linguistic Theory 260] 2004
pp. 81–90

Cited by 16 other publications

Tang, Zhuo, Qi Xiao, Li Zhu, Kenli Li & Keqin Li
2019. A semantic textual similarity measurement model based on the syntactic-semantic representation. Intelligent Data Analysis 23:4, pp. 933 ff.
Vo, Ngoc Phuoc An & Octavian Popescu
2019. Multi-layer and Co-learning Systems for Semantic Textual Similarity, Semantic Relatedness and Recognizing Textual Entailment. In Knowledge Discovery, Knowledge Engineering and Knowledge Management [Communications in Computer and Information Science, 914], pp. 54 ff.
Xie, Zhipeng & Junfeng Hu
2017. Max-Cosine Matching Based Neural Models for Recognizing Textual Entailment. In Database Systems for Advanced Applications [Lecture Notes in Computer Science, 10177], pp. 295 ff.
Moon, Sung Won, Gahgene Gweon, Hojin Choi & Jeong Heo
2016. 2016 18th International Conference on Advanced Communication Technology (ICACT), pp. 680 ff.
Keshtkar, Fazel & Diana Inkpen
2013. A Bootstrapping Method for Extracting Paraphrases of Emotion Expressions from Texts. Computational Intelligence 29:3, pp. 417 ff.
Girju, Roxana & Michael J. Paul
2011. Modeling reciprocity in social interactions with probabilistic latent space models. Natural Language Engineering 17:1, pp. 1 ff.
Hickl, Andrew
2008. Proceedings of the 17th ACM conference on Information and knowledge management, pp. 1261 ff.
Nielsen, Rodney D., Wayne Ward & James H. Martin
2008. Soft Computing in Intelligent Tutoring Systems and Educational Assessment. In Soft Computing Applications in Business [Studies in Fuzziness and Soft Computing, 230], pp. 201 ff.
Connor, Michael & Dan Roth
2007. Context Sensitive Paraphrasing with a Global Unsupervised Classifier. In Machine Learning: ECML 2007 [Lecture Notes in Computer Science, 4701], pp. 104 ff.
Glickman, Oren, Ido Dagan & Moshe Koppel
2006. A Lexical Alignment Model for Probabilistic Textual Entailment. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment [Lecture Notes in Computer Science, 3944], pp. 287 ff.
Inkpen, Diana Zaiu, Ol’ga Feiguina & Graeme Hirst
2006. Generating More-Positive and More-Negative Text. In Computing Attitude and Affect in Text: Theory and Applications [The Information Retrieval Series, 20], pp. 187 ff.
Kozareva, Zornitsa & Andrés Montoyo
2006. Paraphrase Identification on the Basis of Supervised Machine Learning Techniques. In Advances in Natural Language Processing [Lecture Notes in Computer Science, 4139], pp. 524 ff.
Pazienza, Maria Teresa, Marco Pennacchiotti & Fabio Massimo Zanzotto
2006. Learning Textual Entailment on a Distance Feature Space. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment [Lecture Notes in Computer Science, 3944], pp. 240 ff.
Pazienza, Maria Teresa, Marco Pennacchiotti & Fabio Massimo Zanzotto
2006. Discovering Verb Relations in Corpora: Distributional Versus Non-distributional Approaches. In Advances in Applied Artificial Intelligence [Lecture Notes in Computer Science, 4031], pp. 1042 ff.
Paşca, Marius
2005. Mining Paraphrases from Self-anchored Web Sentence Fragments. In Knowledge Discovery in Databases: PKDD 2005 [Lecture Notes in Computer Science, 3721], pp. 193 ff.
Paşca, Marius & Péter Dienes
2005. Aligning Needles in a Haystack: Paraphrase Acquisition Across the Web. In Natural Language Processing – IJCNLP 2005 [Lecture Notes in Computer Science, 3651], pp. 119 ff.

This list is based on CrossRef data as of 9 July 2024. Please note that it may not be complete. The sources presented here have been supplied by the respective publishers; any errors therein should be reported to them.