Constructing support vector machine ensemble

Authors:

Highlights:

Abstract

Although the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations rely on approximate algorithms to cope with the high time and space complexity. To improve the limited classification performance of the real SVM, we propose to use an SVM ensemble with bagging (bootstrap aggregating) or boosting. In bagging, each individual SVM is trained independently on training samples chosen at random via a bootstrap technique. In boosting, each individual SVM is trained on training samples drawn according to a probability distribution over the samples that is updated in proportion to each sample's error. In both bagging and boosting, the trained individual SVMs are aggregated into a collective decision in several ways, such as majority voting, least-squares estimation-based weighting, and double-layer hierarchical combining. Various simulation results for IRIS data classification, hand-written digit recognition, and fraud detection show that the proposed SVM ensemble with bagging or boosting greatly outperforms a single SVM in terms of classification accuracy.
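As a rough illustration of the bagging variant described above, the following is a minimal sketch (not the authors' implementation) of an SVM ensemble with bootstrap resampling and majority voting on the Iris data; the use of scikit-learn's SVC, the RBF kernel, and the number of ensemble members are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: bagged SVM ensemble with majority voting (assumptions noted above).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

n_estimators = 10  # number of individual SVMs (illustrative choice)
members = []
for _ in range(n_estimators):
    # Bagging: draw a bootstrap sample (with replacement) for each individual SVM.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    members.append(SVC(kernel="rbf", C=1.0).fit(X_train[idx], y_train[idx]))

# Majority voting: each trained SVM casts a vote; the most frequent label wins.
votes = np.stack([m.predict(X_test) for m in members])  # shape: (n_estimators, n_test)
ensemble_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

print("ensemble accuracy:", np.mean(ensemble_pred == y_test))
```

The boosting variant would instead reweight the sampling distribution after each member is trained, and the aggregation step could be replaced by least-squares weighting or a second-layer combiner, as the abstract describes.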

Keywords: SVM, SVM ensemble, Bagging, Boosting, Iris and hand-written digit recognition, Fraud detection

Article history: Received 13 June 2002, Revised 29 April 2003, Accepted 29 April 2003, Available online 11 July 2003.

DOI: https://doi.org/10.1016/S0031-3203(03)00175-4