Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets

Abstract

We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers are built. The induced classifiers are then combined by voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. We show that AB gives consistently better results than bagging, in both accuracy and stability. We also test and discuss the performance of ensemble voting in bagging and in AB as a function of the attribute subset size and the number of voters, for both weighted and unweighted voting. Finally, we demonstrate that ranking the attribute subsets by their classification accuracy and voting with only the best subsets further improves the performance of the ensemble.
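The sketch below illustrates the scheme described above: train each ensemble member on a random feature subset of a fixed size, optionally rank the subsets by their accuracy on a held-out split and keep only the best ones, and classify new samples by weighted or unweighted voting. This is a minimal illustration assuming scikit-learn-style estimators; the class name `AttributeBagging` and parameters such as `n_subsets`, `subset_size`, and `keep_best` are illustrative choices, not the authors' implementation.

```python
import numpy as np
from collections import Counter
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


class AttributeBagging:
    def __init__(self, base_estimator=None, n_subsets=25, subset_size=8,
                 keep_best=None, weighted=False, random_state=0):
        self.base_estimator = base_estimator or DecisionTreeClassifier()
        self.n_subsets = n_subsets        # number of random feature subsets
        self.subset_size = subset_size    # attributes per subset
        self.keep_best = keep_best        # if set, vote with only the top-k subsets
        self.weighted = weighted          # weight votes by validation accuracy
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        # Hold out part of the training data to estimate each subset's accuracy.
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=0.25, random_state=self.random_state)
        members = []
        for _ in range(self.n_subsets):
            cols = rng.choice(X.shape[1], size=self.subset_size, replace=False)
            clf = clone(self.base_estimator).fit(X_tr[:, cols], y_tr)
            acc = accuracy_score(y_val, clf.predict(X_val[:, cols]))
            members.append((acc, cols, clf))
        # Rank subsets by validation accuracy; optionally keep only the best.
        members.sort(key=lambda m: m[0], reverse=True)
        self.members_ = members[:self.keep_best] if self.keep_best else members
        return self

    def predict(self, X):
        preds = []
        for acc, cols, clf in self.members_:
            weight = acc if self.weighted else 1.0
            preds.append((weight, clf.predict(X[:, cols])))
        # Plurality vote over the ensemble members, weighted or unweighted.
        out = []
        for i in range(X.shape[0]):
            votes = Counter()
            for weight, p in preds:
                votes[p[i]] += weight
            out.append(votes.most_common(1)[0][0])
        return np.array(out)
```

In practice the appropriate `subset_size` is found empirically, e.g. by sweeping it and comparing ensemble accuracy, which is how the paper determines the attribute subset size before building the final ensemble.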

Keywords: Ensemble learning, Classifier ensembles, Voting, Feature subset selection, Bagging, Attribute bagging, Hand-pose recognition

Article history: Received 2 November 2001, Accepted 12 June 2002, Available online 14 December 2002.

DOI: https://doi.org/10.1016/S0031-3203(02)00121-8