Multi-step Training of a Generalized Linear Classifier

Authors: Kanishka Tyagi, Michael Manry

Abstract

We propose a multi-step training method for designing generalized linear classifiers. First, an initial multi-class linear classifier is found through regression. Then validation error is minimized by pruning unnecessary inputs. Simultaneously, desired outputs are improved via a method similar to the Ho-Kashyap rule. Next, the output discriminants are scaled to be net functions of sigmoidal output units in a generalized linear classifier. This classifier is trained via Newton's algorithm. Performance gains are demonstrated at each step. Using widely available datasets, the final network's tenfold testing error is shown to be less than that of several other linear and generalized linear classifiers reported in the literature.
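The first step, obtaining an initial multi-class linear classifier through regression, can be sketched as follows. This is an illustrative example of the standard one-hot least-squares formulation, not the authors' code; all function names and the toy data are invented here.

```python
import numpy as np

def fit_linear_by_regression(X, y, n_classes):
    """Initial linear classifier: solve W so that [X, 1] @ W approximates
    the one-hot coding of the class labels, in the least-squares sense."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    T = np.eye(n_classes)[y]                          # one-hot desired outputs
    W, *_ = np.linalg.lstsq(X_aug, T, rcond=None)
    return W

def predict(X, W):
    """Classify by the largest linear discriminant (output) value."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.argmax(X_aug @ W, axis=1)

# Toy demo on two well-separated Gaussian blobs (illustrative data only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = fit_linear_by_regression(X, y, n_classes=2)
accuracy = (predict(X, W) == y).mean()
```

The later steps in the paper (input pruning, Ho-Kashyap-style refinement of the desired outputs, and Newton training of the sigmoidal output units) then start from this regression solution rather than from random weights.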

Keywords: Linear classifiers, Nonlinear functions, Pruning, Orthogonal least squares, Newton's algorithm

DOI: https://doi.org/10.1007/s11063-018-9915-4