Novel DCA based algorithms for a special class of nonconvex problems with application in machine learning

Authors:

Highlights:

• We address the problem of minimizing the sum of a nonconvex, differentiable function and composite functions by DC (Difference of Convex functions) programming and DCA (DC Algorithm).

• We first develop a standard DCA scheme tailored to the very specific structure of this problem.

• Furthermore, we extend DCA to obtain the so-called DCA-Like, which is based on a new and efficient way to approximate the DC objective function without knowing a DC decomposition in advance.

• We further improve the DCA-based algorithms by incorporating Nesterov's acceleration technique into them (a minimal iteration sketch combining these ingredients is given after this list).

• Finally, we apply the proposed algorithms to an important problem in machine learning: t-distributed stochastic neighbor embedding (t-SNE).
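For concreteness, the sketch below illustrates, under stated assumptions, the kind of iteration the highlights describe: a DCA step for F(x) = f(x) + r(x) built from the proximal DC decomposition g_rho(x) = (rho/2)||x||^2 + r(x), h_rho(x) = (rho/2)||x||^2 - f(x), an adaptive increase of rho in the spirit of DCA-Like (so no DC decomposition, i.e. no Lipschitz constant, has to be known in advance), and a Nesterov-type extrapolation. The test data (A, b), the smooth nonconvex penalty in f, the l1 term r standing in for the composite part, and the acceptance rule for the extrapolated point are illustrative choices, not the paper's exact algorithms; the paper's actual application, t-SNE and its Kullback-Leibler objective, is not reproduced here.

```python
# Hedged sketch of a DCA iteration with an adaptive proximal parameter and
# Nesterov-style extrapolation for min_x F(x) = f(x) + r(x), f smooth nonconvex,
# r convex.  DC decomposition used: g_rho(x) = (rho/2)||x||^2 + r(x),
# h_rho(x) = (rho/2)||x||^2 - f(x).  This is a simplified reading of the
# DCA-Like / accelerated-DCA ideas, not the algorithms of the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))          # illustrative data, not from the paper
b = rng.standard_normal(40)
alpha, lam = 0.5, 0.1                       # nonconvex-penalty and l1 weights (assumed)

def f(x):                                   # smooth nonconvex part
    return 0.5 * np.sum((A @ x - b) ** 2) + alpha * np.sum(x**2 / (1 + x**2))

def grad_f(x):
    return A.T @ (A @ x - b) + alpha * 2 * x / (1 + x**2) ** 2

def r(x):                                   # convex nonsmooth part
    return lam * np.sum(np.abs(x))

def prox_r(v, t):                           # prox of t*r at v (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def F(x):
    return f(x) + r(x)

def dca_like_step(x, rho, eta=2.0):
    """One DCA step on g_rho - h_rho.  rho is increased until the convexity-type
    inequality h_rho(x_new) >= h_rho(x) + <grad h_rho(x), x_new - x> holds, which
    is equivalent to the quadratic upper-bound test computed below."""
    while True:
        # argmin of the convex surrogate g_rho(u) - <grad h_rho(x), u>
        x_new = prox_r(x - grad_f(x) / rho, 1.0 / rho)
        gap = (f(x_new) - f(x) - grad_f(x) @ (x_new - x)
               - 0.5 * rho * np.sum((x_new - x) ** 2))
        if gap <= 1e-12:
            return x_new, rho
        rho *= eta                           # surrogate not yet valid: tighten it

x_prev = x = np.zeros(100)
rho, t_prev = 1e-3, 1.0
for k in range(200):
    # Nesterov-style extrapolation; the trial point is kept only if it does not
    # increase the objective (a simple safeguard, possibly stricter than needed).
    t = 0.5 * (1 + np.sqrt(1 + 4 * t_prev**2))
    w = x + (t_prev - 1) / t * (x - x_prev)
    z = w if F(w) <= F(x) else x
    x_prev, t_prev = x, t
    x, rho = dca_like_step(z, rho)
    if np.linalg.norm(x - x_prev) <= 1e-8 * max(1.0, np.linalg.norm(x_prev)):
        break

print(f"final objective {F(x):.4f} after {k+1} iterations, rho = {rho:.3g}")
```

With this proximal decomposition each DCA subproblem collapses to a single prox step on r, which is what keeps the iteration cheap; one would expect a similar mechanism, with r replaced by the composite terms of the problem class studied in the paper, though the exact subproblems there are those defined by the authors.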


Keywords: DC programming, DCA, DCA-Like, Accelerated DCA, Accelerated DCA-Like, Sum of nonconvex function and composite functions, Dimensionality reduction, t-distributed Stochastic Neighbor Embedding (t-SNE)

Article history: Received 24 March 2020; Revised 5 November 2020; Accepted 13 December 2020; Available online 28 December 2020; Version of Record 11 July 2021.

DOI: https://doi.org/10.1016/j.amc.2020.125904