Distributed block-diagonal approximation methods for regularized empirical risk minimization

Authors: Ching-pei Lee, Kai-Wei Chang

Abstract

In recent years, there has been a growing need to train machine learning models on huge volumes of data. Designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has therefore become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training that works by solving the dual problem and that provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process, and it is versatile enough to apply to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and that, through a refined analysis, specific choices of the sub-problem achieve much faster convergence than existing approaches. This improved convergence rate is also reflected in the superior empirical performance of our method.
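The abstract describes methods that replace the coupling term in the dual problem with a block-diagonal approximation, so that each machine can solve a local sub-problem (possibly only approximately) and the updates are then combined. The following is a minimal single-process sketch of that general idea for dual ridge regression, not the paper's actual algorithm: the partition into K "machines", the conservative scaling sigma = K (which makes the block-diagonal model an upper bound on the true dual Hessian, guaranteeing descent), and all variable names are illustrative assumptions.

```python
import numpy as np

# Sketch: distributed block-diagonal dual updates for ridge regression.
# Primal: min_w (1/2)||Xw - y||^2 + (lam/2)||w||^2
# Dual:   min_a (1/(2*lam))||X^T a||^2 + (1/2)||a||^2 - y^T a,  w = X^T a / lam
rng = np.random.default_rng(0)
n, d, K = 200, 50, 4            # samples, features, simulated machines
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
lam = 1.0

blocks = np.array_split(np.arange(n), K)  # partition dual variables by machine
alpha = np.zeros(n)
v = X.T @ alpha                           # shared d-dim vector, v = X^T alpha

sigma = K  # conservative scaling: sigma * blockdiag(H_kk) majorizes the Hessian
for _ in range(50):
    grad = (X @ v) / lam + alpha - y      # full dual gradient (needs only v)
    delta = np.zeros(n)
    for idx in blocks:
        # Each machine minimizes a quadratic model built from its own block
        # of the dual Hessian, (1/lam) X_k X_k^T + I, scaled by sigma.
        Xk = X[idx]
        Hk = sigma * ((Xk @ Xk.T) / lam + np.eye(len(idx)))
        delta[idx] = np.linalg.solve(Hk, -grad[idx])  # exact local solve
    alpha += delta
    v += X.T @ delta                      # stands in for an all-reduce step

w = v / lam
w_exact = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print("distance to exact ridge solution:", np.linalg.norm(w - w_exact))
```

Only the d-dimensional vector v is exchanged per round, which is the usual communication pattern for such dual methods; the framework in the paper additionally allows inexact local solves and other sub-problem choices, which this toy example does not model.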

Keywords: Distributed optimization, Large-scale learning, Empirical risk minimization, Dual method, Inexact method

DOI: https://doi.org/10.1007/s10994-019-05859-2