Semi-automatic determination of citation relevancy: User evaluation

Authors:

Highlights:

Abstract

Online bibliographic database searches typically produce hundreds of retrieved citations, of which only about 20–40% are relevant to the search topic and/or problem statement. Significant time is required to categorize and select the relevant citations. A software system, SORT-AID/SABRE, has been developed that ranks the citations by relevance. This paper presents the results of a comprehensive user evaluation of the relevance-ranking procedures. Test results show that the software-generated distributions approach the ideal distribution (all relevant citations at the beginning of the collection) in 22% of the cases, are on average 23% better than the random distribution (relevant citations distributed uniformly throughout the collection), and are poorer than the random distribution in 4% of the cases.
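The abstract does not state which measure was used to compare a ranked list against the ideal and random orderings. As one illustrative possibility (an assumption, not the paper's method), normalized recall scores a ranking at 1.0 when all relevant citations come first, roughly 0.5 for a uniformly random ordering, and 0.0 for the worst case; the sketch below uses hypothetical names.

```python
# Illustrative sketch only: normalized recall as one way to quantify how
# close a ranked citation list comes to the ideal ordering (all relevant
# citations first) versus a random ordering. Not the metric used in the paper.

def normalized_recall(relevance):
    """relevance: list of 0/1 flags in ranked order (1 = relevant citation)."""
    N = len(relevance)
    n_rel = sum(relevance)
    if n_rel == 0 or n_rel == N:
        return 1.0  # degenerate case: every ordering is equally good
    # 1-based ranks at which relevant citations actually appear
    actual_ranks = [i + 1 for i, r in enumerate(relevance) if r == 1]
    ideal_ranks = range(1, n_rel + 1)  # relevant citations packed at the top
    # 1.0 = ideal distribution, ~0.5 = random, 0.0 = worst possible
    return 1.0 - (sum(actual_ranks) - sum(ideal_ranks)) / (n_rel * (N - n_rel))

# Example: 3 relevant citations at ranks 1, 3, and 6 of 10 retrieved citations
print(normalized_recall([1, 0, 1, 0, 0, 1, 0, 0, 0, 0]))  # ~0.81
```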

Keywords:

Article history: Received 30 September 1988, Accepted 9 April 1989, Available online 19 July 2002.

DOI: https://doi.org/10.1016/0306-4573(90)90032-W