A Real generalization of discrete AdaBoost

Abstract

Scaling discrete AdaBoost to handle real-valued weak hypotheses has often been done under the auspices of convex optimization, but little is generally known from the standpoint of the original boosting model. We introduce a novel generalization of discrete AdaBoost that departs from this mainstream of algorithms. From the theoretical standpoint, it formally displays the original boosting property, as it brings fast improvements of the accuracy of a weak learner up to arbitrarily high levels; furthermore, it brings interesting computational and numerical improvements that make it significantly easier to handle “as is”. Conceptually speaking, it provides a new and appealing scaling to the reals of some well-known facts about discrete (Ada)boosting. Perhaps the best known is the iterative weight modification mechanism, according to which examples have their weights decreased iff they receive the right class from the current discrete weak hypothesis. In our generalization, this property no longer holds: with real-valued weak hypotheses, examples that receive the right class can still be reweighted higher. From the experimental standpoint, our generalization displays the ability to produce low-error formulas with particular cumulative margin distribution graphs, and it handles well the noisy domains that are the Achilles' heel of common Adaptive Boosting algorithms.
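
For context, the iterative weight modification mechanism the abstract refers to is the standard discrete AdaBoost update, sketched below in Python as a minimal, self-contained example. This is the classical rule the paper contrasts against, not the paper's real-valued generalization; the function and variable names are illustrative only.

import numpy as np

def discrete_adaboost_reweight(w, y, h, eps=1e-12):
    # One round of discrete AdaBoost's reweighting.
    #   w : current example weights (a distribution summing to 1)
    #   y : true labels in {-1, +1}
    #   h : predictions of the current discrete weak hypothesis, in {-1, +1}
    # Returns the leveraging coefficient alpha and the updated weights.
    err = np.sum(w[h != y])                 # weighted training error of h
    alpha = 0.5 * np.log((1.0 - err + eps) / (err + eps))
    w_new = w * np.exp(-alpha * y * h)      # shrinks a weight iff h(x) == y (when alpha > 0)
    return alpha, w_new / w_new.sum()       # renormalize back to a distribution

# Toy usage: four examples, the weak hypothesis gets three right and one wrong;
# only the misclassified example has its weight increased.
y = np.array([+1, +1, -1, -1])
h = np.array([+1, +1, -1, +1])
w = np.full(4, 0.25)
alpha, w = discrete_adaboost_reweight(w, y, h)
print(alpha, w)

Note that once h takes real values in [-1, +1] and the weights are renormalized, a correctly classified example with a small confidence |h(x)| can end up with a higher weight than before, which is consistent with the phenomenon the abstract describes.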

Keywords: AdaBoost, Boosting, Ensemble learning

Article history: Received 1 June 2006, Revised 16 October 2006, Accepted 16 October 2006, Available online 21 November 2006.

DOI: https://doi.org/10.1016/j.artint.2006.10.014