Task decomposition and modular single-hidden-layer perceptron classifiers for multi-class learning problems

Abstract:

One of the keys for multilayer perceptrons (MLPs) to solve multi-class learning problems is achieving good convergence and generalization while learning only small-scale subsets, i.e., a small part of the original larger-scale data set. This paper first decomposes an n-class problem into n two-class problems, and then uses n class-modular MLPs to solve them one by one. A class-modular MLP is responsible for forming the decision boundaries of its represented class, and thus can be trained only on samples from the represented class and some neighboring classes. When solving a two-class problem, an MLP has to cope with unfavorable situations such as unbalanced training data, locally sparse and weak distribution regions, and open decision boundaries. One solution is to virtually reinforce the samples from minority classes or from thin regions by suitable enlargement factors. Next, the effective range of an MLP is localized by a correction coefficient related to the distribution of its represented class. In brief, this paper focuses on the formation of economical learning subsets, the virtual balancing of imbalanced training sets, and the localization of the generalization regions of MLPs. Results for letter recognition and extended handwritten digit recognition show that the proposed methods are effective.
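The decomposition and virtual-balance ideas from the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: `decompose_one_vs_rest` and its integer replication of minority-class samples are hypothetical simplifications of the enlargement-factor scheme, and each resulting two-class subset would be handed to one class-modular MLP.

```python
def decompose_one_vs_rest(samples, labels, classes, enlargement=None):
    """Split an n-class data set into n two-class subsets, one per
    class-modular MLP. Samples of a minority (or thinly distributed)
    class may be virtually reinforced by an integer enlargement
    factor, implemented here as simple replication."""
    enlargement = enlargement or {}
    subsets = {}
    for c in classes:
        pos, neg = [], []
        for x, y in zip(samples, labels):
            if y == c:
                # virtual balance: replicate minority-class samples
                pos.extend([x] * enlargement.get(c, 1))
            else:
                neg.append(x)
        subsets[c] = (pos, neg)
    return subsets

# toy 3-class data in which class 'b' is a minority class
X = [0, 1, 2, 3, 4, 5]
y = ['a', 'a', 'b', 'c', 'c', 'a']
subsets = decompose_one_vs_rest(X, y, ['a', 'b', 'c'], enlargement={'b': 3})
print(len(subsets['b'][0]), len(subsets['b'][1]))  # → 3 5
```

In the paper's setting, the negative side of each subset would further be restricted to neighboring classes only, keeping each learning subset small relative to the full data set.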

Keywords: Task decomposition, Multi-class learning data sets, Modular multilayer perceptrons, Unbalanced classes, Weak distribution regions, Output amendment

Article history: Received 4 January 2006, Revised 24 November 2006, Accepted 2 January 2007, Available online 26 January 2007.

DOI: https://doi.org/10.1016/j.patcog.2007.01.002