Analysis of the dimensionality of neural networks for pattern recognition

Authors:

Highlights:

Abstract:

Dimensionality is a key issue in designing a pattern recognition system. This paper presents an analysis and an empirical study of the dimensionality of artificial neural networks. The learning behavior and performance of neural networks of various dimensions were studied under different assumptions about the dependence among the features used for classification. The assumptions cover the case of statistically independent features, the case of features forming a first-order Markov chain, and the case of features with arbitrary dependence. The analysis of the degrees of freedom for classification is based on Bayes decision theory. The study shows that the performance of a neural network as a pattern classifier can be improved by using statistically independent features. It also shows that the number of independent probabilistic factors underlying classification may provide only a limited hint of the appropriate dimensions of the neural network that achieves optimum performance. Furthermore, the study suggests that the dimensionality of a neural network is determined by both the number of its connections and the number of its input units. The results are discussed from the perspectives of pattern recognition and machine learning.
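
The abstract's contrast between independent, first-order Markov, and arbitrarily dependent features can be made concrete with standard parameter counts from Bayes decision theory. The sketch below is illustrative only and not taken from the paper: assuming d binary features and a two-class problem, it counts the free parameters (the "independent probabilistic factors") of the class-conditional distribution under each dependence assumption and applies the Bayes decision rule in the independent-feature case.

```python
# Illustrative sketch (not from the paper): free-parameter counts for the
# class-conditional distribution of d binary features under the three
# dependence assumptions named in the abstract, plus the Bayes decision
# rule under class-conditional independence (naive Bayes).
import numpy as np

def free_parameters(d: int) -> dict:
    """Free parameters per class for d binary features."""
    return {
        "independent": d,                   # one marginal P(x_i = 1 | class) per feature
        "first_order_markov": 2 * d - 1,    # P(x_1 | class) plus P(x_i | x_{i-1}, class) for i >= 2
        "arbitrary": 2 ** d - 1,            # full joint distribution over 2^d outcomes
    }

def bayes_decide_independent(x, p1, p0, prior1=0.5) -> int:
    """Bayes decision rule with class-conditionally independent binary features.

    x: binary feature vector; p1, p0: per-feature P(x_i = 1 | class 1 / class 0).
    Returns 1 if the posterior of class 1 is at least that of class 0.
    """
    x = np.asarray(x)
    log_l1 = np.sum(x * np.log(p1) + (1 - x) * np.log(1 - p1)) + np.log(prior1)
    log_l0 = np.sum(x * np.log(p0) + (1 - x) * np.log(1 - p0)) + np.log(1 - prior1)
    return int(log_l1 >= log_l0)

if __name__ == "__main__":
    print(free_parameters(8))  # {'independent': 8, 'first_order_markov': 15, 'arbitrary': 255}
    p1 = np.array([0.8, 0.7, 0.6, 0.9])
    p0 = np.array([0.2, 0.4, 0.5, 0.1])
    print(bayes_decide_independent([1, 1, 0, 1], p1, p0))  # -> 1
```

The counts grow from d (independent) through 2d - 1 (first-order Markov) to 2^d - 1 (arbitrary dependence), which is the usual sense in which independence assumptions reduce the degrees of freedom a classifier, neural or otherwise, has to learn.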

Keywords: Neural network, Dimensionality, Pattern recognition, Classification, Bayes decision theory

Article history: Received 6 June 1989, Revised 29 November 1989, Available online 19 May 2003.

Official URL: https://doi.org/10.1016/0031-3203(90)90008-9