A comparison of neural net classifiers and linear tree classifiers: Their similarities and differences


Abstract:

Both neural net classifiers built on the multilayer perceptron and linear tree classifiers composed of hierarchically structured linear discriminant functions can form arbitrarily complex decision boundaries in the feature space, and their decision-making processes are similar. The structure of a linear tree classifier maps readily onto a neural net with two hidden layers by using the hyperplanes produced by the tree. A new method for mapping the linear tree classifier onto a neural net with one hidden layer is presented, with a theoretical basis: the convex decision regions produced by the tree are mapped to the neurons of the net. This mapping is useful for choosing an appropriately sized neural net with one or two hidden layers, as well as for initializing the connection weights so as to speed up learning and avoid the local trapping of the backpropagation algorithm. It also describes well the internal operations of the hidden-layer neurons in three-layer feedforward nets. Experimental results on both synthetic and real data suggest that the mapping is effective, and that there is no significant difference in classification accuracy between neural net classifiers with one and with two hidden layers. No substantial advantage of the neural net classifiers over the linear tree classifier was found in the experiments, while the latter is much faster in both the training and classification stages.
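As an illustration of the two-hidden-layer mapping the abstract mentions, here is a minimal sketch (an assumption-laden reconstruction, not the paper's exact construction): the first hidden layer holds one sigmoid unit per hyperplane of the linear tree, the second holds one unit per leaf (an AND over the half-space conditions on that leaf's path, i.e. a convex region), and the output layer ORs together the leaves belonging to each class. The steepness constant `k` and the toy tree are illustrative choices.

```python
import numpy as np

def tree_to_mlp(hyperplanes, leaves, n_classes, k=20.0):
    """Hypothetical mapping of a linear tree onto a two-hidden-layer net.
    hyperplanes: list of (w, b), each internal node testing w.x + b >= 0.
    leaves: list of (signs, cls); signs[j] is +1/-1 if the leaf lies on the
    positive/negative side of hyperplane j, 0 if that split is not on its path."""
    W1 = k * np.stack([w for w, _ in hyperplanes])   # hidden layer 1: hyperplanes
    b1 = k * np.array([b for _, b in hyperplanes])
    W2 = np.zeros((len(leaves), len(hyperplanes)))   # hidden layer 2: convex regions
    b2 = np.zeros(len(leaves))
    W3 = np.zeros((n_classes, len(leaves)))          # output: OR over same-class leaves
    for i, (signs, cls) in enumerate(leaves):
        m = 0                                        # conditions active on this path
        for j, s in enumerate(signs):
            if s != 0:
                W2[i, j] = k * s
                if s < 0:
                    b2[i] += k                       # counts 1 - h_j for negative sides
                m += 1
        b2[i] -= k * (m - 0.5)                       # unit fires only if all m conditions hold
        W3[cls, i] = 1.0
    return W1, b1, W2, b2, W3

def predict(x, W1, b1, W2, b2, W3):
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h1 = sig(W1 @ x + b1)                            # which side of each hyperplane
    h2 = sig(W2 @ h1 + b2)                           # which convex region contains x
    return int(np.argmax(W3 @ h2))                   # class owning that region

# Toy tree: split on x0 >= 0.5 (class 1 if true), else split on x1 >= 0.5.
planes = [(np.array([1.0, 0.0]), -0.5), (np.array([0.0, 1.0]), -0.5)]
leaves = [((+1, 0), 1), ((-1, +1), 1), ((-1, -1), 0)]
net = tree_to_mlp(planes, leaves, n_classes=2)
```

Away from the decision boundaries the mapped net reproduces the tree's decisions exactly, so weights of this form could serve as an initialization for backpropagation, as the abstract suggests.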

Keywords: Neural net classifier, Linear tree classifier, Mapping, Multilayer perceptron, Hyperplane, Backpropagation

Article history: Received 28 July 1993; Accepted 20 April 1994; Available online 19 May 2003.

DOI: https://doi.org/10.1016/0031-3203(94)90127-9