On the Boosting Ability of Top–Down Decision Tree Learning Algorithms

Authors: Michael Kearns, Yishay Mansour

Abstract

We analyze the performance of top–down algorithms for decision tree learning, such as those employed by the widely used C4.5 and CART software packages. Our main result is a proof that such algorithms are boosting algorithms. By this we mean that if the functions that label the internal nodes of the decision tree can weakly approximate the unknown target function, then the top–down algorithms we study will amplify this weak advantage to build a tree achieving any desired level of accuracy. The bounds we obtain for this amplification show an interesting dependence on the splitting criterion used by the top–down algorithm. More precisely, if the functions used to label the internal nodes have error 1/2 − γ as approximations to the target function, then for the splitting criteria used by CART and C4.5, trees of size (1/ε)^{O(1/γ²ε²)} and (1/ε)^{O(log(1/ε)/γ²)} (respectively) suffice to drive the error below ε. Thus (for example), a small constant advantage over random guessing is amplified to any larger constant advantage with trees of constant size. For a new splitting criterion suggested by our analysis, the much stronger bound of (1/ε)^{O(1/γ²)} (which is polynomial in 1/ε) is obtained, which is provably optimal for decision tree algorithms. The differing bounds have a natural explanation in terms of concavity properties of the splitting criterion. The primary contribution of this work is in proving that some popular and empirically successful heuristics that are based on first principles meet the criteria of an independently motivated theoretical model.
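
To make the role of the splitting criterion concrete, here is a minimal Python sketch (not from the paper) of the three concave criteria at issue: the Gini index used by CART, the binary entropy used by C4.5, and the square-root criterion 2√(q(1−q)) that the paper's analysis suggests, together with the weighted gain a top–down learner greedily maximizes when choosing a split. The function names, the normalization making each criterion equal 1 at q = 1/2, and the toy split numbers are illustrative choices, not the authors' notation.

import math

def gini(q):
    # CART-style criterion: Gini index, scaled so that gini(0.5) == 1.
    return 4.0 * q * (1.0 - q)

def entropy(q):
    # C4.5-style criterion: binary entropy H(q), in bits.
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

def sqrt_criterion(q):
    # The new criterion suggested by the paper's analysis: 2 * sqrt(q(1 - q)).
    return 2.0 * math.sqrt(q * (1.0 - q))

def split_gain(criterion, q, tau, p, r):
    # Decrease of the criterion when a node whose examples are a q-fraction
    # positive is split: a tau-fraction of its examples reach a child that is
    # a p-fraction positive, the rest a child that is an r-fraction positive
    # (consistency requires q == tau * p + (1 - tau) * r).
    return criterion(q) - tau * criterion(p) - (1.0 - tau) * criterion(r)

# A weak split at a balanced node: each child's positive fraction moves
# only gamma = 0.1 away from 1/2.
for name, crit in [("Gini", gini), ("entropy", entropy), ("sqrt", sqrt_criterion)]:
    print(name, split_gain(crit, q=0.5, tau=0.5, p=0.6, r=0.4))

A top–down learner in the sense of the abstract repeatedly picks the leaf and the weakly approximating test with the largest such gain; the differing tree-size bounds reflect how much gain each concave criterion guarantees from a split whose test has error 1/2 − γ.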

History: Received 30 May 1997; available online 25 May 2002.

Paper URL: https://doi.org/10.1006/jcss.1997.1543