A new general nearest neighbor classification based on the mutual neighborhood information

Authors:

Highlights:

Abstract

The nearest neighbor (NN) rule is effective for many applications in pattern classification, such as the well-known k-nearest neighbor (kNN) classifier. However, NN-based classifiers perform a one-sided classification: the nearest neighbors are selected solely according to the neighborhood of the testing sample. In this paper, we propose a new, two-sided method for selecting nearest neighbors, called the general nearest neighbor (GNN) rule. It considers the mutual neighborhood information of both the testing sample and each training sample, and uses the overlap of these neighborhoods to determine the general nearest neighbors of the testing sample. To verify the effectiveness of the GNN rule in pattern classification, a k-general nearest neighbor (kGNN) classifier is proposed, which applies the k-neighborhood information of each sample to find the general nearest neighbors. Extensive experiments on twenty real-world datasets from the UCI and KEEL repositories and two artificial Gaussian datasets (I-I and Ness) show that the kGNN classifier outperforms the kNN classifier and seven other state-of-the-art NN-based classifiers, particularly when the training sample size is small.
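The two-sided selection described above can be sketched as follows. This is a minimal illustration, assuming the "general nearest neighbors" of a test sample are those training samples that lie in the test sample's k-neighborhood while the test sample also lies in their k-neighborhood; the paper's exact definition of the neighborhood overlap may differ, and the function names here are illustrative, not from the paper.

```python
import numpy as np


def k_neighborhood(distances, k):
    """Indices of the k smallest entries of a 1-D distance array."""
    return set(np.argsort(distances)[:k])


def general_nearest_neighbors(X_train, x_test, k):
    """Sketch of a mutual (two-sided) neighbor rule: keep only those
    training samples near the test sample that also count the test
    sample among their own k nearest neighbors."""
    # One-sided step: the test sample's k-neighborhood in the training set.
    d_test = np.linalg.norm(X_train - x_test, axis=1)
    test_nbhd = k_neighborhood(d_test, k)

    gnn = []
    for i in sorted(test_nbhd):
        # Candidate pool for training sample i: all other training
        # samples plus the test sample (placed at the last index).
        pool = np.vstack([np.delete(X_train, i, axis=0), x_test])
        d_i = np.linalg.norm(pool - X_train[i], axis=1)
        test_index = len(pool) - 1
        # Two-sided check: is the test sample in i's k-neighborhood?
        if test_index in k_neighborhood(d_i, k):
            gnn.append(int(i))
    return gnn
```

A point far from its own class cluster can thus be a one-sided neighbor of the test sample yet fail the mutual check, which is the situation the two-sided rule is meant to filter out.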

Keywords: Nearest neighbor classification, K-nearest neighbor rule, Small training sample size, Neighborhood selection method, Mutual neighborhood information

Article history: Received 22 June 2016, Revised 28 December 2016, Accepted 16 January 2017, Available online 17 January 2017, Version of Record 21 February 2017.

DOI: https://doi.org/10.1016/j.knosys.2017.01.021