A New Sparse Learning Machine

Authors: Mojtaba Nayyeri, Alaleh Maskooki, Reza Monsefi

Abstract

Many algorithms have been proposed for pruning and sparse approximation of feedforward neural networks with random weights, with the goal of obtaining compact networks that are fast and robust across various datasets. One drawback of the randomization process is that the resulting weight vectors may be highly correlated. It has been shown that the error of an ensemble of classifiers depends on the degree of error correlation among them; hence, decreasing the correlation between output vectors should lead to more efficient hidden nodes. In this research, a new learning algorithm for single-hidden-layer feedforward networks, called the New Sparse Learning Machine (NSLM), is proposed for regression and classification. In the first phase, the algorithm creates a hidden layer with low correlation among nodes by orthogonalizing the columns of the hidden-layer output matrix. In the second phase, by solving an \(L_1\)-norm minimization problem, NSLM drives as many components of the solution vector to zero as possible. The resulting network has a higher degree of sparsity while accuracy is maintained or improved; therefore, the method yields a network with better generalization performance. Numerical comparisons on several classification and regression datasets confirm the expected improvement over the basic network.
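The two-phase idea described in the abstract can be sketched as follows. This is not the authors' exact NSLM algorithm, only a minimal illustration under assumed choices: a sigmoid random-weight hidden layer, Gram–Schmidt orthogonalization carried out via a QR factorization, and the \(L_1\)-norm minimization solved with iterative soft-thresholding (ISTA) with a hypothetical regularization weight `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 60 samples, 4 inputs, scalar target.
X = rng.standard_normal((60, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# Random-weight hidden layer (sigmoid activation), as in networks
# with random weights; weights W and biases b are never trained.
n_hidden = 20
W = rng.standard_normal((4, n_hidden))
b = rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix

# Phase 1: orthogonalize the columns of H (Gram–Schmidt, realized here
# through a reduced QR factorization) to decorrelate the hidden nodes.
Q, _ = np.linalg.qr(H)

# Phase 2: L1-regularized least squares on the orthogonalized outputs,
# solved with ISTA; lam is an assumed value, not taken from the paper.
lam = 0.01
step = 1.0 / np.linalg.norm(Q, 2) ** 2   # 1/L for the gradient step
beta = np.zeros(n_hidden)
for _ in range(500):
    grad = Q.T @ (Q @ beta - y)
    z = beta - step * grad
    # Soft-thresholding drives small components exactly to zero.
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

n_zero = int(np.sum(beta == 0.0))        # pruned hidden nodes
mse = float(np.mean((Q @ beta - y) ** 2))
```

Because the columns of `Q` are orthonormal, the gradient step is well conditioned and the soft-thresholding step zeroes out output weights whose contribution falls below the threshold, giving the sparse network the abstract describes.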

Keywords: Feedforward neural networks, Sparse learning machine, Approximation algorithm, \(L_1\)-norm minimization, Error correlation, Gram–Schmidt orthogonalization

Paper link: https://doi.org/10.1007/s11063-016-9566-2