Robust boosting via self-sampling
Abstract
Boosting is a widely used ensemble meta-algorithm owing to its excellent performance in combining weak learners into a strong learner. However, vanilla boosting methods have been shown to be sensitive to noise because no restriction is imposed on samples that are consistently misclassified during the iterations. In this work, we present a new way to overcome this sensitivity by incorporating a self-sampling learning framework, which selects reliable samples and smooths the training process based on a sample-reliability measure designed for the boosting procedure. Experimental results on synthetic data and several real-world datasets show that the self-sampling regime automatically selects an appropriate training subset under different noise conditions, and that the proposed robust boosting algorithms outperform state-of-the-art methods.
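To make the general idea concrete, below is a minimal, hypothetical sketch of an AdaBoost-style loop with a simple self-sampling step: at each round, only the fraction of samples that currently look most reliable (smallest exponential loss under the ensemble score) is used to fit the weak learner. This is an illustrative assumption about how sample selection could be embedded in boosting, not the authors' algorithm; the names `self_sampling_boost` and `keep_ratio` are invented for this sketch, and labels are assumed to be in {-1, +1}.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def self_sampling_boost(X, y, n_rounds=50, keep_ratio=0.8):
    """Illustrative boosting loop with a self-sampling step (not the paper's method).

    At each round, the keep_ratio fraction of samples with the smallest
    current exponential loss is treated as "reliable" and used to fit the
    weak learner; sample weights are then updated as in AdaBoost.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)      # AdaBoost sample weights
    F = np.zeros(n)              # additive ensemble score
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Reliability proxy: exponential loss of the current ensemble score.
        loss = np.exp(-y * F)
        keep = np.argsort(loss)[: int(keep_ratio * n)]   # most reliable samples
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X[keep], y[keep], sample_weight=w[keep])
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)) / np.sum(w), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        F += alpha * pred
        w *= np.exp(-alpha * y * pred)   # standard AdaBoost weight update
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def predict(learners, alphas, X):
    score = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(score)
```

The key design choice in this sketch is that noisy, repeatedly misclassified points tend to accumulate large exponential loss and are therefore excluded from the weak-learner fit, which is one plausible reading of restricting "always misclassified" samples; the paper's actual reliability measure and selection rule may differ.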
Keywords: Boosting, Loss function, Robustness, Self-sampling
Article history: Received 13 December 2018, Revised 30 July 2019, Accepted 22 December 2019, Available online 24 December 2019, Version of Record 7 March 2020.
DOI: https://doi.org/10.1016/j.knosys.2019.105424