Strong global convergence of an adaptive nonmonotone memory gradient method


Abstract:

In this paper, we develop an adaptive nonmonotone memory gradient method for unconstrained optimization. The novelty of this method is that the stepsize can be adjusted according to the characteristics of the objective function. We establish the strong global convergence of the proposed method without requiring Lipschitz continuity of the gradient. Our numerical experiments indicate that the method is encouraging.
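The abstract does not spell out the algorithm, but the two ingredients it names are standard: a memory gradient direction (the new search direction mixes the current negative gradient with the previous direction) and a nonmonotone line search (the sufficient-decrease test compares against the maximum of recent function values rather than the last one). The sketch below illustrates these two ideas under simple assumptions; the mixing weight `eta` and all parameter values are illustrative heuristics, not the paper's adaptive rule.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, beta=0.4, sigma=1e-4,
                                memory=5, tol=1e-6, max_iter=500):
    """Illustrative sketch: memory gradient direction + GLL-style
    nonmonotone Armijo backtracking (not the paper's exact scheme)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                        # first iteration: steepest descent
    f_hist = [f(x)]               # recent values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        f_max = max(f_hist[-memory:])    # nonmonotone reference value
        t = 1.0
        # accept t once f(x + t d) <= f_max + sigma * t * g.d  (g.d < 0)
        while f(x + t * d) > f_max + sigma * t * g.dot(d):
            t *= beta                    # backtrack
        x = x + t * d
        g_new = grad(x)
        # memory gradient step: blend new steepest descent with old direction;
        # this weight is a simple heuristic, not the adaptive rule of the paper
        eta = min(0.5 * np.linalg.norm(g_new) / (np.linalg.norm(g) + 1e-12), 0.9)
        d = -g_new + eta * d
        if g_new.dot(d) >= 0:            # safeguard: keep a descent direction
            d = -g_new
        g = g_new
        f_hist.append(f(x))
    return x
```

For a smooth convex test function such as `f(x) = ||x||^2`, the iterates converge to the minimizer while the line search is allowed to accept occasional non-decreasing steps, which is the practical appeal of the nonmonotone test.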

Keywords: Unconstrained optimization, Memory gradient method, Adaptive nonmonotone technique, Global convergence

Article history: Available online 1 September 2006.

DOI: https://doi.org/10.1016/j.amc.2006.07.075