Feature extraction for one-class classification problems: Enhancements to biased discriminant analysis

Authors:

Highlights:

Abstract

In many one-class classification problems such as face detection and object verification, conventional linear discriminant analysis sometimes fails because it makes the inappropriate assumption that negative samples follow a Gaussian distribution. In addition, it sometimes cannot extract a sufficient number of features because it relies only on the mean value of each class. To resolve these problems, in this paper we extend biased discriminant analysis (BDA), which was originally developed for one-class classification problems. BDA makes no assumption on the distribution of negative samples and tries to separate each negative sample as far from the center of the positive samples as possible. The first extension uses a saturation technique to suppress the influence of samples located far from the decision boundary. The second utilizes the L1 norm instead of the L2 norm. We also present a method to extend BDA and its variants to multi-class classification problems. Our approach is useful in the sense that, without much added complexity, it successfully reduces the negative effect of negative samples that lie far from the center of the positive samples, resulting in better classification performance. We have applied the proposed methods to several classification problems and compared their performance with that of conventional methods.
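To make the baseline concrete, below is a minimal sketch of the BDA criterion as described in the abstract: positive samples are scattered around their own mean, negative samples around the positive-class mean, and the projection maximizes the ratio of negative to positive scatter. The function name `bda_projection`, the regularization term `reg`, and the usage lines are illustrative assumptions, not the authors' implementation (which further adds the saturation and L1-norm extensions).

```python
import numpy as np

def bda_projection(X_pos, X_neg, n_components, reg=1e-6):
    """Sketch of basic BDA feature extraction.

    X_pos, X_neg: (n_samples, n_features) arrays of positive / negative samples.
    Returns a (n_features, n_components) projection matrix.
    """
    m_pos = X_pos.mean(axis=0)
    # Scatter of positive samples around their own mean
    S_pos = (X_pos - m_pos).T @ (X_pos - m_pos)
    # Scatter of negative samples around the POSITIVE mean:
    # no distributional assumption is placed on the negatives themselves
    S_neg = (X_neg - m_pos).T @ (X_neg - m_pos)
    # Small ridge term keeps the inverse of S_pos well conditioned (assumption)
    S_pos += reg * np.eye(S_pos.shape[0])
    # Generalized eigenproblem: maximize |W^T S_neg W| / |W^T S_pos W|
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_pos, S_neg))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Hypothetical usage: project a query x and score it by its distance
# to the positive-class center in the reduced space.
# W = bda_projection(X_pos, X_neg, n_components=5)
# score = np.linalg.norm((x - X_pos.mean(axis=0)) @ W)
```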

Keywords: Classification, One-class, One-against-rest, BDA

Article history: Received 9 August 2007, Revised 30 April 2008, Accepted 8 July 2008, Available online 17 July 2008.

DOI: https://doi.org/10.1016/j.patcog.2008.07.002