Training parsers by inverse reinforcement learning

Authors: Gergely Neu, Csaba Szepesvári

Abstract

One major idea in structured prediction is to assume that the predictor computes its output by maximizing a score function. Training such a predictor can then be cast as the problem of finding the weights of the score function so that the predictor's outputs on the training inputs match the corresponding structured labels. A similar problem is studied in inverse reinforcement learning (IRL), where one is given an environment and a set of trajectories, and the problem is to find a reward function such that an agent acting optimally with respect to that reward function would follow trajectories matching those in the training set. In this paper we show how IRL algorithms can be applied to structured prediction, in particular to parser training. We present a number of recent incremental IRL algorithms in a unified framework and map them to parser training algorithms. This allows us to recover some existing parser training algorithms, as well as to obtain a new one. The resulting algorithms are compared in terms of their sensitivity to the choice of various parameters and their generalization ability on the Penn Treebank WSJ corpus.
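To make the correspondence concrete, here is a minimal sketch of the standard linear score-maximization setup the abstract refers to; the notation (w, φ, x, y) is our own illustration and is not taken from the paper:

```latex
% Structured prediction as score maximization: the predictor returns the
% highest-scoring structure y (e.g., a parse tree) for an input x (e.g., a
% sentence), using a linear score over joint features phi(x, y):
\[
  \hat{y}(x; w) \;=\; \operatorname*{arg\,max}_{y \in \mathcal{Y}(x)} \; w^{\top} \phi(x, y)
\]
% Training seeks weights w under which the predictor reproduces the labels:
\[
  \hat{y}(x_i; w) \;=\; y_i \qquad \text{for all training pairs } (x_i, y_i).
\]
% The IRL analogue: phi(x, y) accumulates features along a trajectory,
% w^T phi is the total reward of that trajectory, and the "label" is the
% demonstrated trajectory that the optimally acting agent should match.
```

Under this reading, a parse tree plays the role of a trajectory through the parser's decision space, which is what allows IRL algorithms to be mapped onto parser training.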

Keywords: Reinforcement learning, Inverse reinforcement learning, Parsing, PCFG, Discriminative parser training, Parser training, Parsing as behavior
Paper link: https://doi.org/10.1007/s10994-009-5110-1