Processing and Transmission of Confidence in Recurrent Neural Hierarchies

Author: Alexander Gepperth

Abstract

This article addresses the construction of hierarchies from dynamic attractor networks. We claim that such networks, e.g., dynamic neural fields (DNFs), contain a data model which is encoded in their lateral connections and which describes typical properties of afferent inputs. This makes it possible to infer the most likely interpretation of inputs, robustly expressed through the position of the attractor state. The principal problem is that the positions of attractor states alone do not reflect the quality of the match between input and data model, termed decision confidence. In hierarchies, this inevitably leads to final decisions that are not Bayes-optimal when inputs exhibit different degrees of ambiguity or conflict, since the resulting differences in confidence will be ignored by downstream layers. We demonstrate a solution to this problem by showing that a correctly parametrized DNF layer can encode decision confidence into the latency of the attractor state in a well-defined way. Conversely, we show that input stimuli gain competitive advantages with respect to one another as a function of their relative latency, thus allowing downstream layers to decode attractor latency in an equally well-defined way. Putting these encoding and decoding mechanisms together, we construct a three-stage hierarchy of DNF layers and show that the top-level layer can make Bayes-optimal decisions when the decisions at the lowest hierarchy levels have varying degrees of confidence. In the discussion, we generalize these findings, suggesting a novel way to represent and manipulate probabilistic information in recurrent networks without any need for log-encoding, using only the biologically well-founded effect of response latency as an additional coding dimension.
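The latency coding described in the abstract can be illustrated with a standard Amari-type field, τ u̇(x,t) = −u(x,t) + h + S(x,t) + ∫ w(x−x′) f(u(x′,t)) dx′, in which a localized attractor ("bump") forms faster the stronger and less ambiguous the afferent input. The following Python sketch is purely illustrative and not the paper's actual parametrization: the time constant, resting level, kernel shape, input profile, and the threshold-crossing definition of latency are all assumptions chosen for demonstration.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type) sketch. All names and
# parameter values are illustrative assumptions, not taken from the paper:
# tau, h, the difference-of-Gaussians kernel, and the Gaussian input shape
# are guesses chosen so that an attractor forms for sufficiently strong input.

def dnf_latency(input_gain, steps=2000, dt=1.0, tau=20.0, h=-2.0,
                n=100, threshold=0.5):
    """Simulate a 1-D DNF driven by a Gaussian input of given gain and
    return the step at which the field first crosses `threshold`
    (a simple proxy for attractor latency)."""
    x = np.arange(n)
    u = np.full(n, h, dtype=float)                                   # field activation
    s = input_gain * np.exp(-((x - n // 2) ** 2) / (2 * 5.0 ** 2))   # afferent input

    # Difference-of-Gaussians lateral kernel on a ring:
    # local excitation, broader inhibition.
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, n - d)
    w = 1.0 * np.exp(-d ** 2 / (2 * 3.0 ** 2)) \
        - 0.5 * np.exp(-d ** 2 / (2 * 10.0 ** 2))

    for t in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                # sigmoid transfer function
        u += (dt / tau) * (-u + h + s + w @ f)      # Euler step of the field dynamics
        if u.max() > threshold:
            return t                                # latency in simulation steps
    return None                                     # no attractor formed

# Weaker (less confident) input leads to a longer latency, in line with
# the encoding mechanism the abstract describes.
for gain in (4.0, 3.0, 2.5):
    print(gain, dnf_latency(gain))
```

Running the sketch shows the qualitative effect only: the monotonic mapping from input strength to bump-formation latency is what a downstream layer could exploit, since earlier-arriving bumps gain a competitive advantage over later ones.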

Keywords: Recurrent neural networks, Neural coding, Bayesian inference, Response latency

Paper URL: https://doi.org/10.1007/s11063-013-9311-z