Computing with discrete multi-valued neurons

Abstract:

Analog computers are inherently inaccurate due to imperfections in fabrication and fluctuations in operating temperature. The classical solution to this problem uses extra hardware to enforce discrete behaviour. However, the brain appears to compute reliably with inaccurate components without necessarily resorting to discrete techniques. The continuous neural network is a computational model based upon certain observed features of the brain. Experimental evidence has shown continuous neural networks to be extremely fault-tolerant; in particular, their performance does not appear to be significantly impaired when precision is limited. Continuous neurons with limited precision essentially compute k-ary weighted multilinear threshold functions, which divide R^n into k regions with k−1 hyperplanes. The behaviour of k-ary neural networks is investigated. There is no canonical set of threshold values for k > 3, although canonical sets exist for binary and ternary neural networks. The weights can be made integers of only O((z + k) log(z + k)) bits, where z is the number of processors, without increasing hardware or running time. The weights can be made ±1 while increasing running time by a constant multiple and hardware by a small polynomial in z and k. Binary neurons can be used if the running time is allowed to increase by a larger constant multiple and the hardware by a slightly larger polynomial in z and k. Any symmetric k-ary function can be computed in constant depth and size O(n^(k−1)/(k−2)!), and any k-ary function can be computed in constant depth and size O(nk^n). The alternating neural networks of Olafsson and Abu-Mostafa and the quantized neural networks of Fleisher are closely related to this model.
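
To make the central object concrete, the sketch below (illustrative Python, not from the paper) evaluates a k-ary weighted multilinear threshold function under the standard reading of the abstract's definition: a single weight vector w and k−1 increasing thresholds, so the weighted sum w·x selects one of the k regions into which the k−1 (parallel) hyperplanes divide R^n. All names and the boundary convention are assumptions for illustration.

```python
import numpy as np

def k_ary_threshold(x, w, thresholds):
    """Evaluate a k-ary weighted multilinear threshold function (sketch).

    The weighted sum w . x is compared against k-1 sorted thresholds
    t_1 < ... < t_{k-1}; the output is the index of the region the sum
    falls in, a value in {0, ..., k-1}. A sum equal to a threshold is
    placed in the higher region (one possible boundary convention).
    """
    s = np.dot(w, x)
    # searchsorted with side="right" counts how many thresholds s meets
    # or exceeds, which is exactly the region index.
    return int(np.searchsorted(thresholds, s, side="right"))

# Example: a ternary (k = 3) neuron on R^2 with thresholds -1 and 1.
w = np.array([1.0, -0.5])
thresholds = np.array([-1.0, 1.0])  # must be sorted ascending

print(k_ary_threshold(np.array([2.0, 1.0]), w, thresholds))   # sum  1.5 -> region 2
print(k_ary_threshold(np.array([0.0, 0.0]), w, thresholds))   # sum  0.0 -> region 1
print(k_ary_threshold(np.array([-3.0, 0.0]), w, thresholds))  # sum -3.0 -> region 0
```

With k = 2 this reduces to the ordinary binary threshold neuron (one hyperplane, two regions), which is the sense in which the k-ary model generalizes it.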

Article history: Received 1 July 1990; available online 2 December 2003.

DOI: https://doi.org/10.1016/0022-0000(92)90035-H