Fast Convergent Generalized Back-Propagation Algorithm with Constant Learning Rate

Authors: S.C. Ng, S.H. Leung, A. Luk

Abstract

The conventional back-propagation algorithm is essentially a gradient-descent method and therefore suffers from local minima and slow convergence. A new generalized back-propagation algorithm is introduced that effectively speeds up the convergence rate and reduces the chance of being trapped in local minima. The new algorithm changes the derivative of the activation function so as to magnify the backward-propagated error signal; the convergence rate is thereby accelerated and local minima can be escaped. In this letter, we also investigate the convergence of the generalized back-propagation algorithm with a constant learning rate. The weight sequences generated by generalized back-propagation can be approximated by a certain ordinary differential equation (ODE): as the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
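The sketch below is only an illustration of the general idea described in the abstract: a one-hidden-layer sigmoid network trained by gradient descent with a constant learning rate, where the backward-propagated error signal is magnified through a modified activation-function derivative. The specific modification used here (adding a small constant `c` to the sigmoid derivative so it never vanishes when units saturate) is an assumption for demonstration purposes, not necessarily the exact generalization proposed in the paper.

```python
import numpy as np

# Hedged sketch of "generalized" back-propagation on the XOR problem.
# Assumption: the magnified derivative is modeled as o*(1-o) + c, which keeps
# the error signal from dying out in saturated regions of the sigmoid.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generalized_deriv(o, c=0.1):
    # Standard sigmoid derivative o*(1-o) plus a constant offset (illustrative choice).
    return o * (1.0 - o) + c

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
eta = 0.5                                # constant learning rate

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Backward pass with the magnified derivative in the error terms
    delta_out = (T - Y) * generalized_deriv(Y)
    delta_hid = (delta_out @ W2.T) * generalized_deriv(H)

    # Gradient-descent weight updates with a fixed step size eta
    W2 += eta * H.T @ delta_out
    W1 += eta * X.T @ delta_hid

print(np.round(Y, 3))  # outputs should move toward [0, 1, 1, 0]
```

With a small constant learning rate, the successive weight iterates of such an update rule trace out (approximately) the trajectory of the associated gradient ODE, which is the sense in which the abstract speaks of weak convergence of the interpolated weight sequences as the learning rate tends to zero.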

Keywords: generalized back-propagation, gradient descent algorithm, feedforward neural networks, convergence, constant learning rate


Paper link: https://doi.org/10.1023/A:1018611626332