A priori synthetic over-sampling methods for increasing classification sensitivity in imbalanced data sets

Authors:

Highlights:

• OUPS and Safe Level OUPS are compared against popular SMOTE generalizations.

• Safe Level OUPS resulted in the highest sensitivity and g-mean.

• The OUPS modification performed moderately well with neural networks.

• Safe Level OUPS improves prediction of noisy minority members when using a linear SVM.
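
The highlights above compare SMOTE-style synthetic over-sampling methods and report sensitivity and g-mean. The sketch below is only an illustration of that general setting, not the paper's OUPS or Safe Level OUPS procedures: a basic SMOTE-like interpolation between minority neighbours and a g-mean helper. The function names (`smote_like_oversample`, `g_mean`) and all parameters are assumptions introduced here for illustration.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours (the
    basic SMOTE idea; OUPS and Safe Level OUPS refine how parent points
    and placement are chosen). Illustrative sketch, not the paper's method."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    k = min(k, n - 1)  # cannot have more neighbours than other minority points
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point as its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                # pick a random minority sample
        j = rng.choice(neighbours[i])      # and one of its k nearest neighbours
        gap = rng.random()                 # interpolation weight in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

def g_mean(y_true, y_pred):
    """Geometric mean of sensitivity (minority/positive recall) and
    specificity, the evaluation metrics mentioned in the highlights."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return np.sqrt(sensitivity * specificity)

if __name__ == "__main__":
    # Toy demonstration on random minority-class points (illustrative only).
    X_min = np.random.default_rng(0).normal(size=(20, 2))
    X_new = smote_like_oversample(X_min, n_new=30, k=5, rng=0)
    print(X_new.shape)                     # (30, 2)
```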


Keywords: SMOTE, OUPS, Class imbalance, Classification

Article history: Received 17 May 2016, Revised 4 August 2016, Accepted 6 September 2016, Available online 9 September 2016, Version of Record 14 September 2016.

DOI: https://doi.org/10.1016/j.eswa.2016.09.010