Chapter published in: Recent Advances in Computational Terminology
Edited by Didier Bourigault, Christian Jacquemin and Marie-Claude L'Homme
[Natural Language Processing 2] 2001
► pp. 261–278
Term extraction using a similarity-based approach
Traditional methods of multi-word term extraction have used hybrid methods combining linguistic and statistical information. The linguistic part of these applications is often underexploited and consists of very shallow knowledge in the form of a simple syntactic filter. In most cases no interpretation of terms is undertaken and recognition does not involve distinguishing between different senses of terms, although ambiguity can be a serious problem for applications such as ontology building and machine translation. The approach described uses both statistical and linguistic information, combining syntax and semantics to identify, rank and disambiguate terms. We describe a new thesaurus-based similarity measure, which uses semantic information to calculate the importance of different parts of the context in relation to the term. Results show that making use of semantic information is beneficial for both theoretical and practical aspects of terminology.
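The pipeline the abstract outlines — a shallow syntactic filter over POS-tagged text to propose candidate multi-word terms, a statistical score to rank them, and a thesaurus-based similarity measure over the semantic context — can be sketched roughly as below. This is a minimal illustration, not the chapter's actual method: the toy corpus, tagset, POS pattern, hypernym thesaurus, and the simple path-based similarity are all invented for the example, and frequency stands in for whatever statistical measure the authors use.

```python
from collections import Counter

# Toy POS-tagged corpus: lists of (token, tag) pairs. The sentences and
# Penn-style tags are invented for illustration.
sentences = [
    [("the", "DT"), ("nonlinear", "JJ"), ("optical", "JJ"), ("fiber", "NN"),
     ("transmits", "VBZ"), ("light", "NN")],
    [("an", "DT"), ("optical", "JJ"), ("fiber", "NN"), ("guides", "VBZ"),
     ("the", "DT"), ("signal", "NN")],
    [("the", "DT"), ("optical", "JJ"), ("fiber", "NN"), ("loses", "VBZ"),
     ("energy", "NN")],
]

def candidates(sent):
    """Shallow syntactic filter: contiguous (JJ|NN)* NN sequences of
    length >= 2 -- the kind of simple pattern the abstract calls a
    'syntactic filter'."""
    out, i = [], 0
    while i < len(sent):
        if sent[i][1] in ("JJ", "NN"):
            j = i
            while j < len(sent) and sent[j][1] in ("JJ", "NN"):
                j += 1
            span = sent[i:j]
            if len(span) >= 2 and span[-1][1] == "NN":
                out.append(" ".join(w for w, _ in span))
            i = j
        else:
            i += 1
    return out

# Statistical component: here just candidate frequency across the corpus.
freq = Counter(c for s in sentences for c in candidates(s))
ranked = freq.most_common()

# Toy thesaurus as a word -> hypernym map (invented hierarchy).
hypernym = {"fiber": "conductor", "conductor": "object",
            "light": "radiation", "radiation": "phenomenon",
            "signal": "phenomenon"}

def ancestors(word):
    """Chain from a word up to the root of the toy hierarchy."""
    chain = [word]
    while word in hypernym:
        word = hypernym[word]
        chain.append(word)
    return chain

def similarity(a, b):
    """Path-based thesaurus similarity: 1 / (1 + edges between a and b
    through their lowest common ancestor); 0.0 if unrelated."""
    pa, pb = ancestors(a), ancestors(b)
    for i, node in enumerate(pa):
        if node in pb:
            return 1.0 / (1 + i + pb.index(node))
    return 0.0
```

On this corpus, `ranked` puts "optical fiber" first (frequency 2), and `similarity("light", "signal")` returns 0.25 because both words meet at the toy node "phenomenon". In the chapter's setting, such similarity scores over context words would then feed into weighting the context and disambiguating term senses.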
Published online: 15 June 2001