Definition of an evaluation grid for term-extraction software
Marie-Claude L'Homme | University of Montreal, University of Metz, University of Nice
Loubna Benali | University of Montreal, University of Metz, University of Nice
Claudine Bertrand | University of Montreal, University of Metz, University of Nice
Patricia Lauduique | University of Montreal, University of Metz, University of Nice
This paper examines evaluation criteria for term-extraction software. These tools have gained popularity in recent years, but they vary widely in design, and their performance cannot be compared qualitatively to that of humans performing the same task: the lists produced by automated extraction must always be filtered by users. The evaluation form proposed here comprises a number of preprocessing criteria (such as the language analyzed by the software and the identification strategies used) and a postprocessing criterion (the performance of the software) that users should take into account before adopting such systems. Each criterion is defined and illustrated with examples. Commercial tools have also been tested.
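The paper does not prescribe a particular metric, but the "performance of software" criterion is commonly quantified as precision and recall of the extracted candidate list against a human-validated term list. A minimal sketch, with hypothetical term lists chosen purely for illustration:

```python
def precision_recall(extracted, reference):
    """Compare an automatically extracted candidate list against a
    human-validated reference list of terms (illustrative metric only)."""
    extracted, reference = set(extracted), set(reference)
    true_positives = extracted & reference
    precision = len(true_positives) / len(extracted) if extracted else 0.0
    recall = len(true_positives) / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical output of a term extractor and a validated term list
candidates = ["term extraction", "noun phrase", "the system", "evaluation grid"]
validated = ["term extraction", "evaluation grid", "preprocessing criterion"]
p, r = precision_recall(candidates, validated)
# p = 2/4 = 0.5 (half the candidates are real terms)
# r = 2/3 ≈ 0.67 (two of the three validated terms were found)
```

High precision reduces the filtering burden on users that the abstract mentions, while high recall reduces the risk of missed terms; the trade-off between the two is itself a point of comparison between tools.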
Published online: 01 January 1996