Vol. 3:2 (1996), pp. 291–312
Definition of an evaluation grid for term-extraction software
This paper examines evaluation criteria for term-extraction software. These tools have gained popularity in recent years, but they vary widely in design, and their performance cannot be compared qualitatively to that of humans performing the same task: the lists produced by automated extraction must always be filtered by users. The evaluation grid proposed here comprises a set of preprocessing criteria (such as the language the software analyzes and the term-identification strategies it uses) and a postprocessing criterion (the software's performance) that users should take into account before adopting such systems. Each criterion is defined and illustrated with examples, and commercial tools have been tested against the grid.