Using micro-documents for feature selection: The case of ordinal text classification

Authors:

Highlights:

Abstract

Most popular feature selection methods for text classification, such as information gain (also known as “mutual information”), chi-square, and odds ratio, are based on binary information indicating the presence/absence of the feature (or “term”) in each training document. As such, these methods do not exploit a rich source of information, namely, how frequently the feature occurs in the training document (term frequency). In order to overcome this drawback, when doing feature selection we logically break down each training document of length k into k training “micro-documents”, each consisting of a single word occurrence and carrying the same class information as the original training document. This move has the double effect of (a) allowing all the original feature selection methods based on binary information to remain straightforwardly applicable, and (b) making them sensitive to term frequency information. We study the impact of this strategy in the case of ordinal text classification, a type of text classification dealing with classes lying on an ordinal scale, and recently made popular by applications in customer relationship management, market research, and Web 2.0 mining. We run experiments using four recently introduced feature selection functions, two learning methods of the support vector machines family, and two large datasets of product reviews. The experiments show that the use of this strategy substantially improves the accuracy of ordinal text classification.
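As a rough illustration (not the authors' implementation), the sketch below applies a standard binary feature selection function, information gain, to micro-documents rather than to whole documents: every word occurrence becomes a single-token "micro-document" that inherits the class label of its parent document, so the usual presence/absence contingency counts become sensitive to term frequency. The binary positive/negative class setup and all function names are assumptions made for illustration; the paper itself uses feature selection functions designed for ordinal classes.

```python
from collections import Counter
from math import log2


def information_gain(tp, fp, fn, tn):
    """Binary information gain from a 2x2 term/class contingency table."""
    n = tp + fp + fn + tn

    def entropy(counts):
        total = sum(counts)
        if total == 0:
            return 0.0
        return -sum((c / total) * log2(c / total) for c in counts if c > 0)

    prior = entropy([tp + fn, fp + tn])                         # H(class)
    conditional = ((tp + fp) / n) * entropy([tp, fp]) \
                + ((fn + tn) / n) * entropy([fn, tn])           # H(class | term)
    return prior - conditional


def micro_document_ig(docs, labels, positive_class):
    """Score each term by information gain computed over micro-documents.

    Every word occurrence in a training document becomes one "micro-document"
    carrying the class label of the parent document, so a term occurring three
    times in a document contributes three presence counts instead of one.
    """
    term_pos, term_neg = Counter(), Counter()
    total_pos = total_neg = 0
    for tokens, label in zip(docs, labels):
        for term in tokens:                       # one micro-document per token
            if label == positive_class:
                term_pos[term] += 1
                total_pos += 1
            else:
                term_neg[term] += 1
                total_neg += 1

    scores = {}
    for term in set(term_pos) | set(term_neg):
        tp, fp = term_pos[term], term_neg[term]   # micro-docs containing the term
        fn, tn = total_pos - tp, total_neg - fp   # micro-docs not containing it
        scores[term] = information_gain(tp, fp, fn, tn)
    return scores


if __name__ == "__main__":
    docs = [["great", "great", "screen"], ["bad", "battery", "bad"]]
    labels = ["positive", "negative"]
    print(micro_document_ig(docs, labels, positive_class="positive"))
```

Because the feature selection function itself is unchanged and only the unit of counting differs, any presence/absence-based function (chi-square, odds ratio, or the ordinal functions studied in the paper) could be substituted for `information_gain` in this sketch without further modification.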

Keywords: Text classification, Supervised learning, Ordinal regression, Feature selection

Article history: Available online 27 February 2013.

Article link: https://doi.org/10.1016/j.eswa.2013.02.010