Convergence of a Batch Gradient Algorithm with Adaptive Momentum for Neural Networks

Authors: Hongmei Shao, Dongpo Xu, Gaofeng Zheng

Abstract

In this paper, a batch gradient algorithm with adaptive momentum is considered, and a convergence theorem is presented for its use in training two-layer feedforward neural networks. Simple sufficient conditions are offered to guarantee both weak and strong convergence. Compared with existing general requirements, we do not restrict the error function to be quadratic or uniformly convex. A numerical example is supplied to illustrate the performance of the algorithm and to support our theoretical findings.
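
The abstract does not spell out the momentum adaptation rule, so the following is only a minimal illustrative sketch of batch gradient training for a two-layer (single hidden layer) sigmoid network with a heuristically adapted momentum coefficient. The adaptation rule used here (scaling the momentum by the ratio of the current to the previous gradient norm) and all names and parameters are assumptions for illustration, not the specific scheme analyzed in the paper.

```python
import numpy as np

def train_two_layer(X, y, hidden=8, epochs=500, lr=0.1, beta=0.5, seed=0):
    """Batch gradient training of a two-layer sigmoid network with an
    adaptively scaled momentum term (illustrative rule, not the paper's)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))   # input-to-hidden weights
    W2 = rng.normal(scale=0.5, size=(hidden, 1))   # hidden-to-output weights
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    vel1 = np.zeros_like(W1)
    vel2 = np.zeros_like(W2)
    prev_norm = None
    for _ in range(epochs):
        # Forward pass over the whole batch.
        H = sig(X @ W1)
        out = sig(H @ W2)
        err = out - y                               # quadratic-loss residual

        # Backward pass: batch gradients of 0.5 * ||out - y||^2 / n.
        d_out = err * out * (1 - out)
        g2 = H.T @ d_out / n
        d_hid = (d_out @ W2.T) * H * (1 - H)
        g1 = X.T @ d_hid / n

        # Assumed adaptive momentum: shrink the momentum coefficient when the
        # gradient norm shrinks, so the current gradient dominates the update
        # near a stationary point.
        g_norm = np.sqrt(np.sum(g1 ** 2) + np.sum(g2 ** 2))
        alpha = beta if prev_norm is None else beta * min(1.0, g_norm / prev_norm)
        prev_norm = g_norm

        vel1 = alpha * vel1 - lr * g1
        vel2 = alpha * vel2 - lr * g2
        W1 += vel1
        W2 += vel2
    return W1, W2

# Example usage on XOR-style data (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_two_layer(X, y)
```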

Keywords: Neural network, Gradient algorithm, Adaptive momentum, Convergence

DOI: https://doi.org/10.1007/s11063-011-9193-x