Multi-complementary and unlabeled learning for arbitrary losses and models

Authors:

Highlights:

• We propose the novel MCL framework, which allows unbiased risk estimation from samples with an arbitrary number of complementary labels under arbitrary losses and models (both linear and deep), demonstrating the practicality of the proposed framework (an illustrative sketch of the underlying risk-rewriting idea follows this list).

• We further propose the MCUL framework to exploit easily accessible unlabeled samples, and we experimentally validate the benefits of incorporating them. A rigorous convergence analysis of the statistical error bounds also establishes the reliability of the proposed frameworks.

• The previous complementary-label learning framework and ordinary classification problems are proven to be special cases of the MCUL framework, which demonstrates the comprehensiveness of MCUL as a weakly supervised learning framework.

• We further integrate class-prior information into the risk estimator and experimentally demonstrate its effectiveness.

• We propose an adaptive risk correction scheme to alleviate over-fitting and show its consistency under mild assumptions. Experimental results confirm its effectiveness in improving classification accuracy (a generic risk-correction sketch also follows this list).
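
For orientation, the sketch below illustrates the kind of risk rewriting that underlies unbiased estimation from complementarily labeled data. It is a minimal illustration under simplifying assumptions rather than the paper's MCL estimator: it covers only the single-complementary-label case with k classes under a uniform label-generation assumption and fixes a cross-entropy surrogate, whereas the proposed framework accommodates arbitrary numbers of complementary labels, arbitrary losses, and arbitrary models. The function complementary_risk and its arguments are hypothetical names used for illustration.

# Illustrative sketch (not the paper's MCL estimator): unbiased risk rewriting
# for singly complementarily labeled data under the uniform assumption, using
# the classical identity
#   R(f) = E_{(x, ybar)}[ -(k - 1) * loss(f(x), ybar) + sum_j loss(f(x), j) ].
import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """logits: (n, k) model outputs f(x); comp_labels: (n,) long tensor of
    complementary labels ("this class is NOT the true one")."""
    # Per-class surrogate losses loss(f(x), j); cross-entropy is one choice.
    per_class_loss = -F.log_softmax(logits, dim=1)                    # (n, k)
    loss_on_comp = per_class_loss.gather(1, comp_labels.view(-1, 1)).squeeze(1)
    loss_sum_all = per_class_loss.sum(dim=1)
    # Rewritten loss whose expectation equals the ordinary classification risk.
    rewritten = -(num_classes - 1) * loss_on_comp + loss_sum_all
    return rewritten.mean()

Averaging this quantity over complementarily labeled mini-batches and minimizing it with any optimizer recovers ordinary risk minimization in expectation; the paper's MCL and MCUL estimators generalize the rewrite to complementary-label sets of arbitrary size and to unlabeled samples.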

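The over-fitting that the correction scheme addresses arises because such rewritten empirical risks contain negative terms and can drop below zero, even though the true risk cannot. The snippet below sketches a generic non-negative clamping step in that spirit; it is not the adaptive correction proposed in the paper, and corrected_risk is a hypothetical helper.

# Illustrative sketch (not the paper's adaptive correction): clamp empirical
# partial risks whose population counterparts are known to be non-negative.
import torch

def corrected_risk(per_term_risks):
    """per_term_risks: 1-D tensor of empirical partial risks (e.g. per-class
    terms of a rewritten complementary-label risk)."""
    # Prevent the objective from being driven below its theoretical floor.
    return torch.clamp(per_term_risks, min=0.0).sum()

A common training-time variant performs gradient ascent on the negative terms instead of clamping them to zero; the paper's adaptive scheme and its consistency analysis are developed in the full text.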

Keywords: Multi-complementary, Unlabeled learning, Empirical risk minimization, Unbiased estimator, Classification

Article history: Received 20 July 2020, Revised 2 November 2021, Accepted 22 November 2021, Available online 24 November 2021, Version of Record 6 December 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108447