A new class of supermemory gradient methods

Authors:

Highlights:

Abstract

In this paper we propose a new class of supermemory gradient methods for unconstrained optimization problems. A trust-region approach is used in the new algorithms to guarantee global convergence. At each iteration, the new algorithms automatically generate a suitable trust-region radius and obtain the next iterate by solving a simple subproblem. Because more iterative information is used at each iteration, these algorithms converge stably, and they reduce to quasi-Newton methods when the iterate is close to the optimal solution. Numerical results show that this new class of supermemory gradient methods is effective in practical computation.
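The core idea described in the abstract, building a search direction from the current negative gradient plus information from several previous iterations, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the memory weights, the descent-direction safeguard, the backtracking line search (used here in place of the paper's trust-region subproblem), and all function names are assumptions for the sake of the example.

```python
import numpy as np

def supermemory_gradient(f, grad, x0, m=3, max_iter=500, tol=1e-6):
    """Sketch of a supermemory gradient method: the search direction
    combines the steepest-descent direction with the last m steps."""
    x = np.asarray(x0, dtype=float)
    memory = []  # previous steps, most recent first
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        for s in memory:
            # hypothetical memory weight, scaled by the gradient norm
            beta = 0.5 * np.linalg.norm(g) / (np.linalg.norm(s) + 1e-12)
            d = d + beta * s
        # safeguard: fall back to steepest descent if d is not a descent direction
        if g @ d >= 0.0:
            d = -g
        # Armijo backtracking line search (stand-in for the trust-region subproblem)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        memory = [t * d] + memory[: m - 1]
    return x

# usage on a strongly convex quadratic with minimizer at the origin
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = supermemory_gradient(f, grad, np.array([2.0, -1.5]))
```

Keeping the last `m` steps is what distinguishes a supermemory method from a plain (one-step) memory gradient method; in the paper the step itself comes from a trust-region subproblem rather than a line search.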

Keywords: Unconstrained optimization, Supermemory gradient method, Global convergence

Article history: Available online 31 July 2006.

DOI: https://doi.org/10.1016/j.amc.2006.05.079