Additive estimators for probabilities of correct classification

Abstract

Several methods for estimating a sample-based discriminant's probability of correct classification are compared with respect to bias, variance, robustness, and computation cost. "Smooth" modification of the counting estimator, or sample success proportion, is recommended to reduce bias and variance while retaining robustness. The "bootstrap" method of Efron (8) can also approximately correct an additive estimator's bias using an ancillary computer simulation. In contrast, the bias reduction achieved by the popular "leave-one-out" modification of the counting method is vitiated by a corresponding increase in variance.
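The estimators compared in the abstract can be sketched numerically. The following is a hypothetical illustration, not the paper's own experiment: a nearest-mean linear discriminant on simulated two-class Gaussian data, with the counting (apparent success proportion) estimator, its leave-one-out modification, and an Efron-style bootstrap bias correction that subtracts an estimated "optimism" from the counting estimate. All data, sample sizes, and the discriminant rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class training sample (not from the paper)
n = 50
X = np.vstack([rng.normal(0.0, 1.0, (n, 2)),
               rng.normal(1.5, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def fit_means(X, y):
    """Class means for a nearest-mean (linear) discriminant."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(X, m0, m1):
    """Assign each point to the nearer class mean."""
    d0 = ((X - m0) ** 2).sum(axis=1)
    d1 = ((X - m1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Counting estimator: success proportion on the training sample
# itself (optimistically biased, since the rule was fit to it).
m0, m1 = fit_means(X, y)
counting = float((predict(X, m0, m1) == y).mean())

# Leave-one-out counting estimator: refit without each point and
# classify it; nearly unbiased but with higher variance.
hits = []
for i in range(len(y)):
    mask = np.ones(len(y), dtype=bool)
    mask[i] = False
    mm0, mm1 = fit_means(X[mask], y[mask])
    hits.append(predict(X[i:i + 1], mm0, mm1)[0] == y[i])
loo = float(np.mean(hits))

# Bootstrap bias correction: estimate the counting estimator's
# optimism by resampling, then subtract the average optimism.
B = 200
optimism = []
for _ in range(B):
    idx = rng.integers(0, len(y), len(y))
    bm0, bm1 = fit_means(X[idx], y[idx])
    apparent = (predict(X[idx], bm0, bm1) == y[idx]).mean()
    actual = (predict(X, bm0, bm1) == y).mean()
    optimism.append(apparent - actual)
boot = counting - float(np.mean(optimism))

print(f"counting={counting:.3f}  leave-one-out={loo:.3f}  "
      f"bootstrap-corrected={boot:.3f}")
```

On typical draws the counting estimate exceeds the leave-one-out and bootstrap-corrected values, reflecting the optimistic bias that the latter two methods trade off (against variance and simulation cost, respectively).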

Keywords: Error or success probability in statistical classification or discrimination, Plug-in, Counting, Smoothed, Leave-one-out estimators, Bootstrap, Bias, Variance, Robustness

Article history: Received 3 January 1977, Revised 17 November 1978, Available online 19 May 2003.

DOI: https://doi.org/10.1016/0031-3203(78)90029-8