The fundamental theory of optimal “Anti-Bayesian” parametric pattern classification using order statistics criteria

Abstract:

The gold standard for a classifier is the condition of optimality attained by the Bayesian classifier. Within a Bayesian paradigm, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal Bayesian strategy would be to compare it with the corresponding class means, i.e., on the basis of the (Mahalanobis) distance from those means. The reader should observe that, in this context, the mean is, in one sense, the most central point of the respective distribution. In this paper, we show that we can obtain optimal results by operating in a diametrically opposite way, i.e., in a so-called “anti-Bayesian” manner. Indeed, we assert the completely counter-intuitive result that by working with only a few points distant from the mean, one can obtain remarkable classification accuracies. The number of points can sometimes be as small as two. Further, if these points are determined by the order statistics of the distributions, the accuracy of our method, referred to as Classification by Moments of Order Statistics (CMOS), attains the optimal Bayes' bound. This claim has been proven for many uni-dimensional distributions, and for some multi-dimensional distributions within the exponential family, and the theoretical results have been verified by rigorous experimental testing. Apart from the fact that these results are quite fascinating and pioneering in their own right, they also give a theoretical foundation for the families of Border Identification (BI) algorithms reported in the literature.
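To make the CMOS idea concrete, the following is a minimal sketch for the simplest setting the abstract alludes to: two univariate Gaussian classes with (roughly) equal variance. For a sample of size two from a Gaussian, the expected values of the two order statistics are μ ∓ σ/√π, so each class contributes one point that lies away from its own mean, toward the other class; the test sample is assigned to the class whose order-statistic point is nearer. The function names `cmos_fit`/`cmos_predict` and the choice of order statistics of a sample of size two are illustrative assumptions, not the paper's full procedure.

```python
import numpy as np

def cmos_fit(x1, x2):
    """Fit a two-class univariate CMOS-style classifier.

    For Gaussian data, the expected values of the two order statistics of a
    sample of size 2 are mu -/+ sigma/sqrt(pi).  Instead of the means, we keep
    only the two points that lie between the class means, i.e., the points of
    each class that are distant from its own mean, toward the other class.
    """
    m1, s1 = np.mean(x1), np.std(x1)
    m2, s2 = np.mean(x2), np.std(x2)
    if m1 > m2:                        # convention: class 1 is the "lower" class
        raise ValueError("expects mean(x1) <= mean(x2)")
    u1 = m1 + s1 / np.sqrt(np.pi)      # upper order-statistic point of class 1
    u2 = m2 - s2 / np.sqrt(np.pi)      # lower order-statistic point of class 2
    return u1, u2

def cmos_predict(x, u1, u2):
    """Assign each test point to the class whose order-statistic point is nearer."""
    return np.where(np.abs(x - u1) <= np.abs(x - u2), 1, 2)

# Usage: two overlapping Gaussian classes
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 1000)
x2 = rng.normal(2.0, 1.0, 1000)
u1, u2 = cmos_fit(x1, x2)
labels = cmos_predict(np.concatenate([x1, x2]), u1, u2)
```

Note that when the two variances are equal, the two order-statistic points are symmetric about the midpoint of the means, so this rule induces the same decision boundary as the nearest-mean (Bayesian) rule, which is consistent with the optimality claim made in the abstract.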

Keywords: Pattern classification, Order statistics, Reduction of training patterns, Prototype reduction schemes, Classification by moments of order statistics

Article history: Received 5 March 2012, Revised 3 July 2012, Accepted 7 July 2012, Available online 17 July 2012.

DOI: https://doi.org/10.1016/j.patcog.2012.07.004