Optimal classifiers with minimum expected error within a Bayesian framework — Part II: Properties and performance analysis

Abstract

In part I of this two-part study, we introduced a new optimal Bayesian classification methodology that utilizes the same modeling framework proposed in Bayesian minimum-mean-square error (MMSE) error estimation. Optimal Bayesian classification thus completes a Bayesian theory of classification, where both the classifier error and our estimate of the error may be simultaneously optimized and studied probabilistically within the assumed model. Having developed optimal Bayesian classifiers in discrete and Gaussian models in part I, here we explore properties of optimal Bayesian classifiers, in particular, invariance to invertible transformations, convergence to the Bayes classifier, and a connection to Bayesian robust classifiers. We also explicitly derive optimal Bayesian classifiers with non-informative priors, and explore relationships to linear and quadratic discriminant analysis (LDA and QDA), which may be viewed as plug-in rules under Gaussian modeling assumptions. Finally, we present several simulations addressing the robustness of optimal Bayesian classifiers to false modeling assumptions. Companion website: http://gsp.tamu.edu/Publications/supplementary/dalton12a.
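As a point of reference for the comparison above, LDA and QDA arise as plug-in rules under Gaussian modeling assumptions: sample means and covariances are substituted directly into the Gaussian Bayes discriminant. The following NumPy sketch (function names hypothetical) illustrates the plug-in QDA rule the abstract contrasts against; it is not the optimal Bayesian classifier of the paper, which instead averages over the posterior of the model parameters.

```python
import numpy as np

def qda_plugin_fit(X0, X1):
    """Fit plug-in QDA for two classes: substitute sample means,
    covariances, and class frequencies into the Gaussian discriminant.
    (Illustrative sketch, not the paper's optimal Bayesian classifier.)"""
    n = len(X0) + len(X1)
    params = []
    for X in (X0, X1):
        mu = X.mean(axis=0)                    # sample mean
        cov = np.cov(X, rowvar=False)          # sample covariance
        params.append((mu, cov, len(X) / n))   # plug-in prior estimate
    return params

def qda_plugin_predict(params, x):
    """Assign x to the class maximizing the plug-in log posterior."""
    scores = []
    for mu, cov, prior in params:
        d = x - mu
        score = (-0.5 * np.log(np.linalg.det(cov))
                 - 0.5 * d @ np.linalg.inv(cov) @ d
                 + np.log(prior))
        scores.append(score)
    return int(np.argmax(scores))
```

LDA is recovered by replacing the per-class covariances with a single pooled estimate, which makes the log-determinant and quadratic terms cancel into a linear discriminant.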

Keywords: Bayesian estimation, Classification, Error estimation, Genomics, Minimum mean-square estimation, Small samples

Article history: Received 1 August 2012, Revised 20 September 2012, Accepted 21 October 2012, Available online 2 November 2012.

DOI: https://doi.org/10.1016/j.patcog.2012.10.019