ML-KNN: A lazy learning approach to multi-label learning

Authors: Min-Ling Zhang, Zhi-Hua Zhou


Abstract

Multi-label learning originated from the investigation of the text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances by analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are first identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, the maximum a posteriori (MAP) principle is utilized to determine the label set of the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.
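The following is a minimal sketch of the idea described in the abstract: for a query instance, count how many of its K nearest neighbors carry each label, and decide each label by a MAP comparison between Laplace-smoothed frequency estimates gathered from the training set. The class name, variable names, and the smoothing constant `s` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors


class MLKNNSketch:
    """Hypothetical, simplified ML-KNN-style classifier for illustration."""

    def __init__(self, k=10, s=1.0):
        self.k = k      # number of neighbors
        self.s = s      # Laplace smoothing constant (assumed value)

    def fit(self, X, Y):
        # X: (m, d) feature matrix; Y: (m, q) binary label matrix
        m, q = Y.shape
        self.Y_ = Y
        self.nn_ = NearestNeighbors(n_neighbors=self.k + 1).fit(X)

        # Smoothed prior probability that each label is relevant
        self.prior_ = (self.s + Y.sum(axis=0)) / (2 * self.s + m)

        # For each label, count how often exactly j of the K neighbors carry it,
        # separately for instances that do / do not carry the label themselves.
        c_pos = np.zeros((q, self.k + 1))
        c_neg = np.zeros((q, self.k + 1))
        # Query k+1 neighbors and drop the first column (the instance itself).
        neigh = self.nn_.kneighbors(X, return_distance=False)[:, 1:]
        for i in range(m):
            counts = Y[neigh[i]].sum(axis=0)   # neighbors per label
            for l in range(q):
                if Y[i, l] == 1:
                    c_pos[l, counts[l]] += 1
                else:
                    c_neg[l, counts[l]] += 1
        denom_pos = self.s * (self.k + 1) + c_pos.sum(axis=1, keepdims=True)
        denom_neg = self.s * (self.k + 1) + c_neg.sum(axis=1, keepdims=True)
        self.like_pos_ = (self.s + c_pos) / denom_pos
        self.like_neg_ = (self.s + c_neg) / denom_neg
        return self

    def predict(self, X):
        neigh = self.nn_.kneighbors(X, n_neighbors=self.k,
                                    return_distance=False)
        q = self.Y_.shape[1]
        preds = np.zeros((X.shape[0], q), dtype=int)
        for i in range(X.shape[0]):
            counts = self.Y_[neigh[i]].sum(axis=0)
            for l in range(q):
                p_yes = self.prior_[l] * self.like_pos_[l, counts[l]]
                p_no = (1 - self.prior_[l]) * self.like_neg_[l, counts[l]]
                preds[i, l] = int(p_yes > p_no)   # per-label MAP decision
        return preds
```

The sketch keeps the lazy-learning character of the approach: no explicit model is built per label beyond the smoothed counts, and all decisions are deferred to neighborhood statistics computed at query time.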

Keywords: Machine learning, Multi-label learning, Lazy learning, K-nearest neighbor, Functional genomics, Natural scene classification, Text categorization

Abbreviations: KNN, K-nearest neighbor; ML-KNN, multi-label K-nearest neighbor; MAP, maximum a posteriori; PMM, parametric mixture model

Article history: Received 29 June 2005, Revised 11 May 2006, Accepted 15 December 2006, Available online 13 January 2007.

DOI: https://doi.org/10.1016/j.patcog.2006.12.019