Towards the Optimal Learning Rate for Backpropagation

Authors: Danilo P. Mandic, Jonathon A. Chambers

Abstract

A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived based upon the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
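To make the abstract's claim concrete, here is a first-order sketch of the idea for a single neuron $y(k) = \Phi(\mathbf{x}^T(k)\mathbf{w}(k))$ with instantaneous error $e(k) = d(k) - y(k)$. This is a simplified reconstruction consistent with the abstract, not the paper's full multilayer derivation. Truncating the Taylor series of the error after the gradient update to first order and choosing the learning rate $\eta(k)$ that drives the linearised error to zero gives

$$e(k+1) \approx e(k) - \eta(k)\,\Phi'^2(\mathrm{net}(k))\,\|\mathbf{x}(k)\|^2\, e(k)
\;\Rightarrow\;
\eta_{\mathrm{opt}}(k) = \frac{1}{\Phi'^2(\mathrm{net}(k))\,\|\mathbf{x}(k)\|^2},$$

which for a linear activation ($\Phi' \equiv 1$) reduces to the NLMS step size $1/\|\mathbf{x}(k)\|^2$, as the abstract states.

The following Python sketch illustrates this adaptive step size in a single-neuron gradient update. It is a minimal illustration under the assumptions above (logistic activation, a small regularising constant `eps`), not the authors' exact algorithm; the function names are ours.

```python
# Illustrative sketch: single neuron trained by gradient descent on the
# instantaneous error, with the adaptive, normalised learning rate
# eta(k) = 1 / (phi'(net)^2 * ||x||^2). For a linear activation
# (phi' = 1) the step reduces to the NLMS learning rate 1 / ||x||^2.
import numpy as np

def logistic(net):
    return 1.0 / (1.0 + np.exp(-net))

def logistic_deriv(net):
    y = logistic(net)
    return y * (1.0 - y)

def train_neuron(X, d, epochs=10, eps=1e-8):
    """X: (N, n) input vectors, d: (N,) targets in (0, 1)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            net = w @ x
            e = target - logistic(net)           # instantaneous output error
            g = logistic_deriv(net)              # phi'(net)
            eta = 1.0 / (g**2 * (x @ x) + eps)   # adaptive rate; eps regularises
            w += eta * e * g * x                 # gradient step on 0.5 * e^2
    return w
```

Normalising by $\Phi'^2\|\mathbf{x}\|^2$ is what yields the NLMS-like, input-scale-invariant behaviour and the stabilising effect mentioned in the abstract.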

Keywords: adaptive learning rate, backpropagation, feedforward neural networks, optimal gradient learning


Paper URL: https://doi.org/10.1023/A:1009686825582