Modeling dopamine activity by Reinforcement Learning methods: implications from two recent models

Authors: Patrick Horgan, Fred Cummins

Abstract

We compare and contrast two recent computational models of dopamine activity in the human central nervous system at the level of single cells. Both models implement reinforcement learning using the method of temporal differences (TD). To address drawbacks of earlier approaches, both employ internal models. The principal difference between the internal models lies in the degree to which they capture the properties of the environment: one employs a partially observable semi-Markov environment, while the other applies a form of transition matrix iteratively to generate the sum of future predictions. We show that the internal models rest on fundamentally different assumptions, and that these assumptions are problematic in each case. Both models are, to different degrees, underspecified with respect to their biological implementation. In addition, the model employing the partially observable semi-Markov environment appears to contain redundant features, whereas the alternate model appears to lack generalizability.
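As a point of reference for the TD method that both models build on, the sketch below shows a minimal tabular TD(0) value update, in which the prediction error corresponds to the quantity commonly identified with phasic dopamine activity. The state space, rewards, and parameter values are illustrative assumptions and do not reproduce either model's internal-model machinery.

```python
import numpy as np

# Minimal tabular TD(0) sketch of the prediction-error signal that TD accounts
# associate with phasic dopamine. All states, rewards, and parameters here are
# illustrative assumptions, not taken from either model discussed in the paper.

def td0_value_estimation(episodes, n_states, alpha=0.1, gamma=0.95):
    """Estimate state values V(s) from sampled (state, reward, next_state) steps.

    The TD error delta = r + gamma * V(s') - V(s) is the quantity commonly
    interpreted as the dopaminergic prediction-error signal.
    """
    V = np.zeros(n_states)
    for episode in episodes:
        for state, reward, next_state in episode:
            # Terminal transitions (next_state is None) bootstrap from zero.
            target = reward + (gamma * V[next_state] if next_state is not None else 0.0)
            delta = target - V[state]      # TD (prediction) error
            V[state] += alpha * delta      # move the estimate toward the target
    return V

# Hypothetical usage: a three-state chain 0 -> 1 -> 2 with a reward on reaching state 2.
episodes = [[(0, 0.0, 1), (1, 1.0, 2), (2, 0.0, None)] for _ in range(200)]
print(td0_value_estimation(episodes, n_states=3))
```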

Keywords: Computational, Dopamine, Learning, Model, Reinforcement

Paper URL: https://doi.org/10.1007/s10462-007-9036-3