Extracting finite structure from infinite language

Abstract:

This paper presents a novel connectionist memory-rule-based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map with laterally interconnected neurons. A derivation of functional-equivalence theory allows the model to exploit similarities between the future context of previously memorized sequences and that of the current input sequence. This bottom-up learning algorithm binds functionally related neurons together to form states. Results show that the model learns the Reber grammar perfectly from a randomly generated training set and generalizes to sequences longer than those found in the training set.
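The Reber grammar mentioned in the abstract is a standard benchmark finite-state grammar over the symbols B, T, P, S, X, V, E. As a concrete illustration only (not code from the paper), the following sketch generates positive examples by taking random walks over the grammar's well-known transition diagram; the table layout and function name are this example's own.

```python
import random

# Transition table for the standard Reber grammar:
# state -> list of (emitted symbol, next state).
# State 6 is the accepting state, exited via 'E'.
REBER = {
    0: [('B', 1)],
    1: [('T', 2), ('P', 3)],
    2: [('S', 2), ('X', 4)],
    3: [('T', 3), ('V', 5)],
    4: [('X', 3), ('S', 6)],
    5: [('P', 4), ('V', 6)],
    6: [('E', None)],
}

def generate_reber_string(rng=random):
    """Walk the grammar from the start state, choosing among the
    available transitions uniformly at random, until the accepting
    state is left via 'E'."""
    state, symbols = 0, []
    while state is not None:
        symbol, state = rng.choice(REBER[state])
        symbols.append(symbol)
    return ''.join(symbols)

if __name__ == '__main__':
    # A training set of positive examples, as described in the abstract,
    # would be a collection of such strings, e.g. BPVVE or BTSSXXTVVE.
    for _ in range(5):
        print(generate_reber_string())
```

Because the S, T, and X self-loops can repeat arbitrarily often, such a generator naturally yields strings of varying length, which is what makes the reported generalization to sequences longer than any in the training set a meaningful test.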

Keywords: Artificial neural networks, Grammar induction, Natural language processing, Self-organizing map, STORM

Article history: Received 26 October 2004; Accepted 30 October 2004; Available online 31 May 2005.

DOI: https://doi.org/10.1016/j.knosys.2004.10.010