Efficient gradient descent algorithm for sparse models with application in learning-to-rank

Authors:

Highlights:

Abstract:

Recently, learning-to-rank has attracted considerable attention. Although significant research effort has been devoted to learning-to-rank in general, comparatively little work has addressed learning sparse models for ranking. In this paper, we consider the sparse learning-to-rank problem. We formulate it as an optimization problem with ℓ1 regularization, and develop a simple but efficient iterative algorithm to solve it. Experimental results on four benchmark datasets demonstrate that the proposed algorithm achieves (1) superior ranking performance compared with several state-of-the-art learning-to-rank algorithms, and (2) very competitive performance compared with FenchelRank, which also learns a sparse model for ranking.
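The abstract describes an iterative algorithm for ℓ1-regularized ranking but gives no details here. As a purely illustrative sketch (not the paper's actual method), the sketch below applies a standard proximal-gradient (ISTA-style) update with soft-thresholding to a pairwise hinge ranking loss; all names, the learning rate, and the loss choice are assumptions for demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_rank_ista(X, pairs, lam=0.1, lr=0.1, n_iter=100):
    """Illustrative proximal-gradient sketch for l1-regularized pairwise ranking.

    X     : (n_docs, n_features) feature matrix
    pairs : list of (i, j) meaning document i should rank above document j
    lam   : l1 regularization strength (larger -> sparser weight vector)

    This is a generic ISTA-style loop, NOT the algorithm from the paper.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = np.zeros_like(w)
        for i, j in pairs:
            diff = X[i] - X[j]
            if diff @ w < 1.0:        # pairwise hinge loss is active
                grad -= diff
        grad /= max(len(pairs), 1)
        # Gradient step on the ranking loss, then the l1 proximal step.
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

The soft-thresholding step is what drives coordinates of the weight vector exactly to zero, yielding the sparse model the abstract refers to.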

Keywords: Sparse learning-to-rank, Sparse models, Information retrieval

Article history: Received 9 December 2011, Revised 25 May 2013, Accepted 1 June 2013, Available online 9 June 2013.

DOI: https://doi.org/10.1016/j.knosys.2013.06.001