Robust active representation via ℓ2,p-norm constraints

Abstract:

Active learning maximizes the performance of the current learning model by exploiting the unlabeled data pool. In the early stage, when labels are scarce, finding representations that maintain a hypothesis consistent with the entire unlabeled pool is an intelligent paradigm. From a matrix perspective, this paradigm can be cast as a sparse representation of the input matrix under an ℓ2,1-norm constraint. However, when the ℓ2,1-norm constraint is placed on the loss function, it may suffer from the mean accumulation problem, which leads to sub-optimal mean-centering and weak robustness to outliers. In this paper, to address these problems, we generalize the ℓ2,1-norm to an ℓ2,p-norm constraint, where 0 < p
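For reference, the ℓ2,p-norm mentioned above is presumably the standard row-wise matrix norm (the symbols W and w_i below are our own notation, not taken from the paper):

\|W\|_{2,p} = \Big( \sum_{i=1}^{n} \|w_i\|_2^{p} \Big)^{1/p} = \Big( \sum_{i=1}^{n} \Big( \sum_{j=1}^{d} W_{ij}^{2} \Big)^{p/2} \Big)^{1/p},

where w_i denotes the i-th row of W ∈ R^{n×d}. Setting p = 1 recovers the ℓ2,1-norm, while smaller values of p penalize large row residuals less aggressively, which is the usual reason such a constraint is considered more robust to outliers.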

Keywords: Active representation, Early sampling, Optimal mean-centering, ℓ2,p-norm

Article history: Received 19 March 2021, Revised 20 October 2021, Accepted 21 October 2021, Available online 26 October 2021, Version of Record 1 November 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107639