Perturbation LDA: Learning the difference between the class empirical mean and its expectation

Abstract

Fisher's linear discriminant analysis (LDA) is widely used for dimension reduction and extraction of discriminant features in many pattern recognition applications, especially biometric learning. The derivation of Fisher's LDA assumes that the class empirical mean equals its expectation; however, this assumption may not hold in practice. In this paper, from a "perturbation" perspective, we develop a new algorithm, called perturbation LDA (P-LDA), in which perturbation random vectors are introduced to learn the effect of the difference between the class empirical mean and its expectation on the Fisher criterion. This perturbation learning yields new forms of the within-class and between-class covariance matrices, integrated with perturbation factors. Moreover, a method is proposed for estimating the covariance matrices of the perturbation random vectors for practical implementation. The proposed P-LDA is evaluated on both synthetic data sets and real face image data sets. Experimental results show that P-LDA outperforms popular Fisher's LDA-based algorithms in the undersampled case.
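To make the setting concrete, the following is a minimal NumPy sketch of standard Fisher's LDA in which a hypothetical `perturbation_var` parameter inflates the scatter matrices with an isotropic term. This is only an illustration of where perturbation factors would enter the Fisher criterion; the actual P-LDA of the paper instead estimates structured perturbation covariances from the data rather than using a fixed isotropic factor.

```python
import numpy as np

def fisher_lda(X, y, perturbation_var=None, n_components=None):
    """Fisher's LDA projection. If perturbation_var is given, both scatter
    matrices are inflated by an isotropic term -- a crude, illustrative
    stand-in for the perturbation factors that P-LDA learns."""
    classes = np.unique(y)
    n_features = X.shape[1]
    overall_mean = X.mean(axis=0)

    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                 # class empirical mean
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)

    if perturbation_var is not None:
        # Hypothetical isotropic perturbation covariance sigma^2 * I added to
        # both scatters; P-LDA instead derives structured perturbation
        # covariances for the class-mean uncertainty.
        Sw += perturbation_var * len(classes) * np.eye(n_features)
        Sb += perturbation_var * len(classes) * np.eye(n_features)

    # Solve Sb w = lambda Sw w; pinv handles the undersampled case
    # where Sw is singular.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    k = n_components or (len(classes) - 1)
    return eigvecs.real[:, order[:k]]
```

With `perturbation_var=None` this reduces to ordinary Fisher's LDA; a positive value mimics the regularizing effect that accounting for the gap between the empirical class mean and its expectation has on both covariance matrices.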

Keywords: Fisher criterion, Perturbation analysis, Face recognition

Article history: Received 24 September 2006, Revised 9 July 2008, Accepted 22 September 2008, Available online 8 October 2008.

DOI: https://doi.org/10.1016/j.patcog.2008.09.012