Least squares solution with the minimum-norm to general matrix equations via iteration

Authors:

Highlights:

Abstract:

Two iterative algorithms are presented in this paper to find the minimum-norm least squares solution of a general class of linear matrix equations, which includes the well-known Sylvester matrix equation and Lyapunov matrix equation as special cases. The first algorithm is based on the gradient-based search principle, and the second can be viewed as its dual. Necessary and sufficient conditions on the step sizes in these two algorithms are proposed to guarantee convergence for arbitrary initial conditions, and an easily computable sufficient condition is also given. Moreover, two methods are proposed to choose the optimal step sizes so that the convergence speed of each algorithm is maximized. The first method minimizes the spectral radius of the iteration matrix, and an explicit expression for the optimal step size is obtained. The second method minimizes the sum of squared F-norms of the error matrices produced by the algorithm; it is shown that this optimal step size exists uniquely and lies in an interval. Several numerical examples are given to illustrate the efficiency of the proposed approach.
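The abstract does not give the iteration formulas, so the following is only a minimal sketch of the gradient-based search principle it describes, specialized to the Sylvester equation A X + X B = C (one of the special cases mentioned). The function name, the zero initial guess, and the specific step-size bound via the Kronecker-product operator are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def sylvester_ls_gradient(A, B, C, mu=None, tol=1e-10, max_iter=10_000):
    """Hypothetical gradient iteration for min_X ||A X + X B - C||_F.

    Starting from X = 0 keeps all iterates in the row space of the
    vectorized operator, so the limit is the minimum-norm LS solution.
    """
    n, m = C.shape
    if mu is None:
        # vec(A X + X B) = M vec(X) with M = kron(I_m, A) + kron(B.T, I_n);
        # any 0 < mu < 2 / sigma_max(M)^2 is a sufficient step-size bound.
        M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
        mu = 1.0 / np.linalg.norm(M, 2) ** 2
    X = np.zeros((n, m))
    for _ in range(max_iter):
        R = C - A @ X - X @ B      # residual
        G = A.T @ R + R @ B.T      # negative gradient of 0.5 * ||R||_F^2
        X = X + mu * G             # gradient step
        if np.linalg.norm(G, 'fro') < tol:
            break
    return X
```

Forming the Kronecker operator M explicitly is only practical for small problems; the point of the sketch is that the step-size condition is a bound on the squared largest singular value of the vectorized coefficient operator, which is the kind of condition the abstract refers to.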

Keywords: Iterative algorithm, Gradient, General linear matrix equations, Minimal norm least squares, Optimal step size, Convergence rate

Article history: Available online 4 November 2009.

DOI: https://doi.org/10.1016/j.amc.2009.10.052