The value of agreement, a new boosting algorithm

Abstract

In the past few years, unlabeled examples and their potential advantages have received a lot of attention. This paper presents a new boosting algorithm in which unlabeled examples are used to enforce agreement between several different learning algorithms. The learning algorithms not only learn from the given training set but must do so while agreeing on the unlabeled examples. Similar ideas have been proposed before (for example, the Co-Training algorithm of Blum and Mitchell), but either without proof or under strong assumptions. In our setting, it is only assumed that all learning algorithms are equally adequate for the task. A new generalization bound is presented in which the use of unlabeled examples yields a better ratio between training-set size and the resulting classifier's quality, thus reducing the number of labeled examples necessary to achieve it. The extent of this improvement depends on the diversity of the learners: a more diverse group of learners yields a larger improvement, whereas using two copies of a single algorithm gives no advantage at all. As a proof of concept, the algorithm, named Agreement Boost, is applied to two test problems. In both cases, using Agreement Boost reduces the number of labeled examples required by up to 40%.
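The paper itself defines the exact cost function and update rules of Agreement Boost; the following Python sketch only illustrates the general idea described above. It trains two boosted ensembles by functional gradient descent on an exponential loss over the labeled data plus a squared-disagreement penalty over the unlabeled data, so each ensemble is pushed both toward the labels and toward agreeing with the other. The loss, the penalty weight `lam`, the tree-based learners, and all names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_agreement_boost(X_lab, y_lab, X_unl, n_rounds=50, lam=1.0, lr=0.1):
    """y_lab must be in {-1, +1}. Returns two lists of fitted base learners."""
    X_all = np.vstack([X_lab, X_unl])
    ensembles = [[], []]
    F_lab = np.zeros((2, len(X_lab)))   # current ensemble scores on labeled set
    F_unl = np.zeros((2, len(X_unl)))   # current ensemble scores on unlabeled set
    depths = (1, 3)                     # two different base-learner families:
                                        # per the abstract, diversity is what helps
    for _ in range(n_rounds):
        for j in (0, 1):
            # Gradient of the exponential loss exp(-y * F_j) on labeled points.
            g_lab = -y_lab * np.exp(-y_lab * F_lab[j])
            # Gradient (up to a constant factor) of the disagreement penalty
            # sum_x (F_j(x) - mean_k F_k(x))^2 on unlabeled points.
            g_unl = lam * (F_unl[j] - F_unl.mean(axis=0))
            # Functional gradient step: fit a small tree to the negative gradient.
            h = DecisionTreeRegressor(max_depth=depths[j])
            h.fit(X_all, -np.concatenate([g_lab, g_unl]))
            ensembles[j].append(h)
            F_lab[j] += lr * h.predict(X_lab)
            F_unl[j] += lr * h.predict(X_unl)
    return ensembles

def predict(ensembles, X, lr=0.1):
    # Combined prediction: sign of the summed scores of both ensembles.
    score = sum(lr * h.predict(X) for ens in ensembles for h in ens)
    return np.sign(score)
```

Note how the sketch reflects the abstract's diversity claim: with identical base learners (equal depths), both ensembles follow the same gradients, the disagreement term vanishes, and the unlabeled data contributes nothing.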

Keywords: Machine learning, Boosting, Co-training, Semi-supervised learning, Unlabeled data

Article history: Received 1 November 2004, Revised 1 May 2005, Available online 12 June 2007.

DOI: https://doi.org/10.1016/j.jcss.2007.06.005