Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM

Abstract:

This paper is concerned with the composite convex minimization problem in which the cost function is the sum of a smooth convex function with Lipschitz continuous gradient and a closed proper convex function with a non-polyhedral epigraph. For this class of non-polyhedral convex optimization problems, we establish local Lipschitzian-type error bounds that estimate the distance to the solution set from any feasible point near it, as well as global Lipschitzian-type error bounds when the smooth function has a special structure. These bounds are derived with the help of the local weak sharpness of minima or the calmness of appropriate multifunctions, and we provide verifiable regularity conditions that guarantee they hold. Although the derived local error bounds are weaker than those used in Luo and Tseng (1992, 1993) and Wei and Jen (2014), applying them to an inexact feasible descent method (FDM) and an inexact proximal gradient method (PGM) still yields asymptotic Q-linear convergence of the objective value sequence and R-linear convergence of the iterate sequence, respectively. As an illustration of these results, we obtain the linear convergence of the PGM for the trace norm regularized least squares problem under a regularity condition on the linear operator.
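To make the setting concrete, the sketch below shows a minimal proximal gradient iteration for the trace norm regularized least squares problem mentioned in the abstract, min over X of 0.5*||A vec(X) - b||^2 + lam*||X||_*. This is an illustrative NumPy implementation, not the authors' inexact method: the function names (`svt`, `pgm_trace_norm_ls`), the dense-matrix representation of the linear operator, and the fixed step size are all assumptions made for the example; the proximal map of the nuclear norm is singular value soft-thresholding.

```python
import numpy as np

def svt(Y, tau):
    # Proximal operator of tau * ||.||_* : singular value soft-thresholding.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def pgm_trace_norm_ls(A, b, lam, shape, step, iters=500):
    """Proximal gradient method (illustrative sketch) for
       min_X 0.5*||A @ vec(X) - b||^2 + lam * ||X||_*.
    `A` acts on the vectorized matrix X; `step` should be at most
    1 / ||A||^2 (squared spectral norm) for convergence."""
    m, n = shape
    X = np.zeros((m, n))
    for _ in range(iters):
        g = A.T @ (A @ X.ravel() - b)            # gradient of the smooth part
        X = svt(X - step * g.reshape(m, n), step * lam)  # prox step
    return X
```

Under the error bound established in the paper (via a regularity condition on the linear operator), the objective values of such an iteration converge Q-linearly and the iterates R-linearly, rather than at the generic O(1/k) sublinear rate.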

Keywords: Local error bounds, Non-polyhedral convex optimization, Convergence rate, Inexact feasible descent method, Inexact proximal gradient method

Article history: Received 2 December 2018, Revised 30 March 2019, Accepted 15 April 2019, Available online 8 May 2019, Version of Record 8 May 2019.

DOI: https://doi.org/10.1016/j.amc.2019.04.048