A reduced-space line-search method for unconstrained optimization via random descent directions

Authors:

Highlights:

Abstract:

In this paper, we propose an iterative method based on reduced-space approximations for unconstrained optimization problems. The method works as follows: at each iteration, sample points are drawn around the current solution, for instance from a Normal distribution; gradients are computed (or approximated) at all samples in order to build a reduced space in which a descent direction of the cost function is estimated. Using this direction, the intermediate solution is updated, and the overall process is repeated until a stopping criterion is satisfied. The convergence of the proposed method is proven theoretically under classic line-search assumptions. Experimental tests are performed on well-known benchmark optimization problems and a non-linear data assimilation problem. The results reveal that, as the number of sample points increases, gradient norms go to zero faster; moreover, in the data assimilation context, error norms are decreased by several orders of magnitude relative to prior errors when the assimilation step is performed by means of the proposed formulation.
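The abstract describes the iteration only at a high level; the Python sketch below shows one plausible reading of it. The function name reduced_space_descent, the least-squares construction of the descent direction within the span of the sampled gradients, the sampling radius sigma, and the Armijo backtracking parameters are all illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def reduced_space_descent(f, grad, x0, n_samples=10, sigma=0.1,
                              max_iter=1000, tol=1e-6, rng=None):
        # One possible realization of the described iteration
        # (hypothetical names and parameter choices throughout).
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            # Draw sample points around the current solution from a
            # Normal distribution centered at x.
            samples = x + sigma * rng.standard_normal((n_samples, x.size))
            # Columns of W span the reduced space built from the
            # sampled gradients.
            W = np.stack([grad(s) for s in samples], axis=1)
            # Assumed construction: pick coefficients a so that W @ a
            # approximates the steepest-descent direction -g.
            a, *_ = np.linalg.lstsq(W, -g, rcond=None)
            d = W @ a
            if g @ d >= 0:  # fall back if d is not a descent direction
                d = -g
            # Backtracking line search enforcing the Armijo condition.
            alpha, c1, rho = 1.0, 1e-4, 0.5
            while (f(x + alpha * d) > f(x) + c1 * alpha * (g @ d)
                   and alpha > 1e-12):
                alpha *= rho
            x = x + alpha * d
        return x

    # Usage on a simple quadratic with known minimizer at the origin.
    A = np.diag([1.0, 10.0, 100.0])
    x_min = reduced_space_descent(lambda x: 0.5 * x @ A @ x,
                                  lambda x: A @ x,
                                  np.ones(3))

With more sample points than dimensions, the sampled gradients typically span the full space and the least-squares step reduces to steepest descent; the interesting regime is n_samples smaller than the problem dimension, where the direction is confined to the reduced space.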

Keywords: Reduced-space optimization, Line search, Random descent directions

Article history: Received 26 April 2018, Revised 12 July 2018, Accepted 13 August 2018, Available online 21 September 2018, Version of Record 21 September 2018.

DOI: https://doi.org/10.1016/j.amc.2018.08.020