A novel measure for evaluating classifiers

Authors:

Highlights:

Abstract

Evaluating classifier performance is a crucial problem in pattern recognition and machine learning. In this paper, we propose a new measure, confusion entropy, for evaluating classifiers. For each class cli of an (N+1)-class problem, the misclassification information involves both how the samples with true class label cli have been misclassified to the other N classes and how the samples of the other N classes have been misclassified to class cli. The proposed measure exploits the class distribution of such misclassifications across all classes. Both theoretical analysis and statistical experiments show that the proposed measure is more precise than accuracy and RCI. Experimental results on several benchmark data sets further confirm the theoretical analysis and statistical results, and show that the new measure is feasible for evaluating classifier performance.
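The abstract does not reproduce the paper's confusion-entropy formulas, but the core idea it describes — for each class, pooling the errors made *from* that class with the errors made *toward* it, and measuring the entropy of their distribution — can be sketched as follows. This is a minimal illustration, not the paper's exact definition: the function name and the choice of log base 2N (one per possible error direction of a class in an (N+1)-class problem) are assumptions.

```python
import numpy as np

def misclassification_entropy(cm):
    """Illustrative per-class entropy of misclassification mass.

    For each class j of a confusion matrix cm (rows = true class,
    columns = predicted class), pool the counts of class-j samples
    predicted as other classes (row j) with the counts of other
    classes predicted as j (column j), normalize, and take the
    entropy in base 2N, where N+1 is the number of classes.
    NOTE: an assumed sketch of the idea, not the paper's formula.
    """
    cm = np.asarray(cm, dtype=float)
    n_classes = cm.shape[0]
    base = 2 * (n_classes - 1)  # 2N possible error directions per class
    entropies = []
    for j in range(n_classes):
        errors = np.concatenate([np.delete(cm[j, :], j),   # j -> others
                                 np.delete(cm[:, j], j)])  # others -> j
        total = errors.sum()
        if total == 0:  # class j is never confused with anything
            entropies.append(0.0)
            continue
        p = errors / total
        p = p[p > 0]
        entropies.append(float(-(p * (np.log(p) / np.log(base))).sum()))
    return entropies

# A perfect classifier concentrates all mass on the diagonal,
# so every per-class confusion entropy is zero.
print(misclassification_entropy([[5, 0], [0, 5]]))  # -> [0.0, 0.0]
```

Unlike accuracy, which only counts the diagonal, this kind of measure also distinguishes *how* the errors are spread: errors concentrated on one wrong class yield lower entropy than errors scattered evenly over many classes.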

Keywords: Performance evaluation, Entropy, Accuracy, Classification

Article history: Available online 13 November 2009.

DOI: https://doi.org/10.1016/j.eswa.2009.11.040