Improving the approximation and convergence capabilities of projection pursuit learning

Authors: Tin-Yau Kwok, Dit-Yan Yeung

Abstract

One nonparametric regression technique that has been successfully applied to high-dimensional data is projection pursuit regression (PPR). In this method, the regression surface is approximated by a sum of empirically determined univariate functions of linear combinations of the predictors. Projection pursuit learning (PPL), proposed by Hwang et al., formulates PPR using a two-layer feedforward neural network. One of the main differences between PPR and PPL is that the smoothers in PPR are nonparametric, whereas those in PPL are based on Hermite functions of some predefined highest order R. While the convergence property of PPR is already known, that of PPL has not been thoroughly studied. In this paper, we demonstrate that PPL networks do not have the universal approximation and strong convergence properties for any finite R. However, by including a bias term in each linear combination of the predictor variables, PPL networks can regain these capabilities, independent of the exact choice of R. It is also shown experimentally that this modification improves the generalization performance in regression problems and creates smoother decision surfaces for classification problems.
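To make the abstract's construction concrete, the following is a minimal sketch (not the authors' implementation) of a PPL-style regression surface: each hidden unit projects the input onto a direction, adds the bias term the paper proposes, and expands the result in orthonormal Hermite functions up to order R. The function names, the recurrence-based Hermite evaluation, and the parameter shapes are all illustrative assumptions.

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite_functions(z, R):
    """Orthonormal Hermite functions h_0..h_R evaluated at the 1-D array z.

    Uses the physicists' Hermite polynomial recurrence
    H_{r+1}(z) = 2 z H_r(z) - 2 r H_{r-1}(z), then applies the Gaussian
    weight and normalization so that the h_r are orthonormal on the real line.
    """
    H = [np.ones_like(z), 2.0 * z]
    for r in range(1, R):
        H.append(2.0 * z * H[r] - 2.0 * r * H[r - 1])
    weight = np.exp(-z ** 2 / 2.0)
    return np.stack([
        H[r] * weight / sqrt(2.0 ** r * factorial(r) * sqrt(pi))
        for r in range(R + 1)
    ])

def ppl_output(X, W, b, C):
    """PPL-style regression surface (illustrative):
    y(x) = sum_k sum_{r=0}^{R} C[k, r] * h_r(w_k . x + b_k).

    The bias b_k in each projection is the modification the paper argues
    restores universal approximation for any finite order R.
    """
    y = np.zeros(X.shape[0])
    R = C.shape[1] - 1
    for k in range(W.shape[0]):
        z = X @ W[k] + b[k]          # biased linear combination of predictors
        y += C[k] @ hermite_functions(z, R)
    return y
```

In an actual PPL fit, the directions `W`, biases `b`, and expansion coefficients `C` would be trained (e.g. by backpropagation over the two-layer network); here they are simply taken as given to show the functional form.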

Keywords: Strong Convergence, Convergence Property, Feedforward Neural Network, Regression Problem, Nonparametric Regression

DOI: https://doi.org/10.1007/BF02311575