Exploiting probabilistic topic models to improve text categorization under class imbalance

Authors:

Highlights:

Abstract

In text categorization, the numbers of documents in different categories often differ substantially, i.e., the class distribution is imbalanced. We propose a unique approach to improve text categorization under class imbalance by exploiting the semantic context in text documents. Specifically, we generate new samples of rare classes (categories with relatively small amounts of training data) by using global semantic information of classes represented by probabilistic topic models. In this way, the numbers of samples in different categories become more balanced, and the performance of text categorization can be improved on this transformed data set. Indeed, the proposed method differs from traditional re-sampling methods, which try to balance the number of documents in different classes by re-sampling the documents in rare classes; such re-sampling methods can cause overfitting. Another benefit of our approach is the effective handling of noisy samples: since all the new samples are generated by topic models, the impact of noisy samples is dramatically reduced. Finally, as demonstrated by the experimental results, the proposed method achieves better performance under class imbalance and is more tolerant to noisy samples.

Keywords: Class imbalance, Rare class analysis, Text categorization, Probabilistic topic model, Noisy data

Article history: Received 6 October 2009, Revised 25 July 2010, Accepted 29 July 2010, Available online 1 September 2010.

DOI: https://doi.org/10.1016/j.ipm.2010.07.003