Representing Probabilistic Rules with Networks of Gaussian Basis Functions

Authors: Volker Tresp, Jürgen Hollatz, Subutai Ahmad

Abstract

There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction has focused on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for reducing network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing either a Bayesian approach, a weighted combination of knowledge bases, or the generation of artificial training data representing the prior knowledge. We validate our approach on a standard statistical data set.
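The correspondence the abstract describes treats each probabilistic rule as one Gaussian component with a prior weight; normalizing the components' responses at an input point yields the posterior probability of each rule, which is the output of a normalized Gaussian basis function network. A minimal sketch of this mixture-posterior view (the rule parameters below are hypothetical illustrations, not taken from the paper):

```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Isotropic Gaussian density N(x; mu, sigma^2 * I)."""
    d = np.asarray(x) - np.asarray(mu)
    dim = d.size
    norm = (np.sqrt(2.0 * np.pi) * sigma) ** dim
    return np.exp(-0.5 * np.dot(d, d) / sigma ** 2) / norm

def rule_posteriors(x, rules):
    """Posterior P(rule_i | x) for rules given as (prior, mu, sigma) triples.

    This is the normalized-basis-function computation: each rule's prior-weighted
    Gaussian response, divided by the sum over all rules (Bayes' theorem).
    """
    responses = np.array([p * gaussian_density(x, mu, s) for p, mu, s in rules])
    return responses / responses.sum()

# Two illustrative "rules" as Gaussian components in 2-D input space.
rules = [
    (0.5, [0.0, 0.0], 1.0),  # rule 1: centered at the origin
    (0.5, [3.0, 3.0], 1.0),  # rule 2: centered at (3, 3)
]
post = rule_posteriors([0.2, -0.1], rules)  # point near rule 1's center
```

For a point near a rule's center, that rule's posterior dominates, which is what makes the extracted rules interpretable: each basis function claims a local region of the input space.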

Keywords: Neural networks, theory refinement, knowledge-based neural networks, probability density estimation, knowledge extraction, mixture densities, combining knowledge bases, Bayesian learning

Paper URL: https://doi.org/10.1023/A:1007381408604