Growing methods for constructing recursive deterministic perceptron neural networks and knowledge extraction

Authors:

Abstract

The Recursive Deterministic Perceptron (RDP) feedforward multilayer neural network is a generalization of the single layer perceptron topology (SLPT). This new model is capable of solving any two-class classification problem, as opposed to the single layer perceptron, which can only solve classification problems dealing with linearly separable (LS) sets (two subsets X and Y of R^d are said to be linearly separable if there exists a hyperplane such that the elements of X and those of Y lie on opposite sides of it). For any two-class classification problem, the construction of an RDP is performed automatically, and thus convergence to a solution is always guaranteed. We propose three growing methods for constructing an RDP neural network. These methods perform, respectively, batch, incremental, and modular learning. We also show how the knowledge embedded in an RDP neural network model can always be expressed, transparently, as a finite union of open polytopes. The combination of the decision regions of RDP models by means of Boolean operations is also discussed.
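The distinction the abstract draws between LS and non-LS problems can be made concrete with a short, purely illustrative sketch (this is not the paper's RDP construction): standard perceptron learning finds a separating hyperplane for an LS set such as AND, but can never converge on XOR, which is exactly the limitation the RDP's recursive construction removes.

```python
# Illustrative sketch only: perceptron learning on labeled points with
# labels in {-1, +1}. Converges iff the two classes are linearly separable.

def train_perceptron(samples, max_epochs=100):
    """Return (weights, bias, converged) for the given (x, label) pairs."""
    d = len(samples[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, y in samples:
            # Misclassified (or on the hyperplane): apply the perceptron update.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:
            return w, b, True   # found a separating hyperplane
    return w, b, False          # no hyperplane found within the epoch budget

AND = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
XOR = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]

print(train_perceptron(AND)[2])  # True: AND is linearly separable
print(train_perceptron(XOR)[2])  # False: XOR is not
```

The RDP resolves the XOR-type failure by recursively selecting a linearly separable subset of the data, adding an intermediate neuron that separates it, and augmenting the input space until the remaining problem becomes LS; the sketch above only demonstrates the base case that this recursion relies on.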

Keywords: Batch learning, Boolean operations, Growing methods, Incremental learning, Knowledge extraction, Linear separability, Modular learning, Recursive Deterministic Perceptron

Article history: Received 6 February 1998, Available online 3 December 1998.

DOI: https://doi.org/10.1016/S0004-3702(98)00057-5