Distance metric learning by minimal distance maximization

Authors:

Highlights:

Abstract

Classic linear dimensionality reduction (LDR) methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are known to lack robustness against outliers. Following a systematic analysis of the multi-class LDR problem in a unified framework, we propose a new algorithm, called minimal distance maximization (MDM), to address this non-robustness issue. The principle behind MDM is to maximize the minimal between-class distance in the output space. MDM is formulated as a semi-definite program (SDP), and its dual problem reveals a close connection to “weighted” LDR methods. A soft version of MDM, which subsumes LDA as a special case, is also developed to deal with overlapping centroids. Finally, we drop the homoscedastic Gaussian assumption underlying MDM by extending it in a non-parametric way, and we derive a gradient-based convex approximation algorithm that significantly reduces the complexity of the original SDP. The effectiveness of the proposed methods is validated on two UCI datasets and two face datasets.
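To make the centroid-based formulation concrete, the following is a minimal sketch of the hard MDM idea as an SDP, written in Python with cvxpy: it learns a PSD metric M that maximizes the smallest squared Mahalanobis distance between class centroids. The trace normalization and the helper name `mdm_metric` are illustrative assumptions, not the paper's exact constraint set.

```python
import numpy as np
import cvxpy as cp

def mdm_metric(X, y, trace_budget=1.0):
    """Sketch: maximize the minimal between-class centroid distance.

    Learns a PSD matrix M such that the smallest squared Mahalanobis
    distance between any pair of class centroids is maximized, under
    an assumed normalization trace(M) <= trace_budget.
    """
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    d = X.shape[1]

    M = cp.Variable((d, d), PSD=True)  # the metric to be learned
    t = cp.Variable()                  # minimal between-class distance

    constraints = [cp.trace(M) <= trace_budget]
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            delta = centroids[i] - centroids[j]
            # squared Mahalanobis distance between centroids i and j
            constraints.append(cp.quad_form(delta, M) >= t)

    cp.Problem(cp.Maximize(t), constraints).solve()
    return M.value
```

Factoring the learned M as L^T L (e.g., via its top eigenvectors) yields a linear projection, recovering the LDR view taken in the paper; the soft and non-parametric variants described in the abstract would relax or replace the hard pairwise constraints above.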

Keywords: Linear dimensionality reduction (LDR), Metric learning, Convex optimization, Minimal distance maximization

Article history: Received 22 July 2008; Revised 25 November 2009; Accepted 29 September 2010; Available online 7 October 2010.

DOI: https://doi.org/10.1016/j.patcog.2010.09.019