Extensions of LDA by PCA mixture model and class-wise features

Authors:

Highlights:

Abstract

Linear discriminant analysis (LDA) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been applied successfully to many problems, it has two limitations, both related to underfitting. First, it fails to discriminate data with complex distributions, since all data in each class are assumed to follow a Gaussian distribution. Second, it can lose class-wise information, since it produces only a single transformation over all classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modelling the within-class scatter with a PCA mixture model, which can represent more complex distributions. The second extension addresses the second problem by taking a different transformation for each class, thereby providing class-wise features. The third extension combines these two modifications by representing each class with a PCA mixture model and taking a different transformation for each mixture component. All three proposed extensions outperform LDA in terms of classification error on synthetic data classification, handwritten digit recognition, and alphabet recognition.
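For reference, below is a minimal sketch of the baseline LDA transformation that the abstract describes (not the paper's proposed extensions): it builds the within-class and between-class scatter matrices and solves the associated generalized eigenproblem. It assumes NumPy; the function name lda_transform and its signature are illustrative, not from the paper.

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Classical LDA: find a projection W that maximizes the ratio of
    between-class scatter Sb to within-class scatter Sw by solving
    the generalized eigenproblem Sb w = lambda Sw w."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - overall_mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sw^{-1} Sb w = lambda w and keep the leading eigenvectors.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return X @ W, W
```

The paper's extensions replace the single Gaussian per class implicit in Sw with a PCA mixture model, and/or compute a separate transformation per class (or per mixture component) instead of the single global W returned above.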

Keywords: Linear discriminant analysis, Extension of LDA, PCA mixture model, Class-wise features

Article history: Received 12 July 2001, Revised 8 April 2002, Accepted 6 June 2002, Available online 13 December 2002.

DOI: https://doi.org/10.1016/S0031-3203(02)00163-2