Scaled rotation regularization

Authors:

Highlights:

Abstract

A new regularization method, called scaled rotation, is proposed and compared with standard linear regularized discriminant analysis. The essence of the method lies in the singular value decomposition S = TDT′ of a sample covariance matrix S and the use of the following representation of the inverse of the covariance matrix: S⁻¹ = T_α(D + λI)⁻¹T_α′. For certain data structures, the scaled rotation helps to reduce the generalization error in small-learning-set, high-dimensionality cases. The efficacy of the scaled rotation increases if one transforms the data by y = (D + λI)^(−1/2) T_α′ x and then applies an optimally stopped single-layer perceptron classifier.
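The regularized inverse and the data transform described in the abstract can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the paper's scaled rotation T_α modifies the eigenvector matrix by a parameter α, which is not specified in the abstract, so the sketch below uses the unscaled eigenvectors T (i.e., the α-rotation is omitted) and shows only the λ-regularization step and the whitening-like transform; the data, dimensions, and λ value are arbitrary assumptions.

```python
import numpy as np

# Toy learning set: n = 30 samples in p = 10 dimensions (arbitrary choices).
rng = np.random.default_rng(0)
n, p = 30, 10
X = rng.normal(size=(n, p))

# Sample covariance matrix S and its decomposition S = T D T'
# (for a symmetric PSD matrix, the eigendecomposition coincides with the SVD).
S = np.cov(X, rowvar=False)
D, T = np.linalg.eigh(S)  # D: eigenvalues, T: orthonormal eigenvectors

# Regularized inverse S^{-1} ≈ T (D + λI)^{-1} T'  (α-scaling of T omitted).
lam = 0.1  # regularization parameter, chosen arbitrarily here
S_inv_reg = T @ np.diag(1.0 / (D + lam)) @ T.T

# Transform y = (D + λI)^{-1/2} T' x, applied row-wise to the learning set;
# a classifier (e.g., a single-layer perceptron) would be trained on Y.
Y = X @ T @ np.diag(1.0 / np.sqrt(D + lam))
```

By construction, S_inv_reg is the exact inverse of S + λI, which is the standard ridge-style shrinkage of the sample covariance toward the identity.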

Keywords: Regularized discriminant analysis, Learning-set size, Dimensionality, Single-layer perceptron, Generalization, Scaled rotation

Article history: Received 15 December 1998; Accepted 12 July 1999; Available online 7 June 2001.

DOI: https://doi.org/10.1016/S0031-3203(99)00183-1