Verification and repair of control policies for safe reinforcement learning

Authors: Shashank Pathak, Luca Pulina, Armando Tacchella

Abstract

Reinforcement Learning is a well-known AI paradigm whereby control policies of autonomous agents can be synthesized incrementally with little or no knowledge about the properties of the environment. We are concerned with the safety of agents whose policies are learned by reinforcement, i.e., we wish to bound the risk that, once learning is over, an agent damages either the environment or itself. We propose a general-purpose automated methodology to verify policies, i.e., establish risk bounds, and to repair them, i.e., modify them to comply with stated risk bounds. Our approach is based on probabilistic model checking algorithms and tools, which provide theoretical and practical means to verify risk bounds and repair policies. Considering a taxonomy of potential repair approaches tested on an artificially-generated parametric domain, we show that our methodology is also more effective than comparable ones.
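The core verification step the abstract describes can be illustrated concretely: once a policy is fixed, the agent's behavior induces a Markov chain, and the "risk" is the probability of eventually reaching an unsafe state, which a probabilistic model checker computes as a reachability probability. The sketch below is a minimal, hypothetical illustration of that idea (the paper itself relies on dedicated model-checking tools, not this code); the toy transition matrix, state labels, and risk bound are all invented for the example.

```python
# Illustrative sketch: verify a risk bound for a fixed policy by computing
# the probability of eventually reaching an unsafe state in the
# policy-induced Markov chain via fixpoint iteration.
# (Hypothetical toy model; not the authors' implementation.)
import numpy as np

def reach_probability(P, unsafe, tol=1e-12, max_iter=100000):
    """Probability of eventually reaching any state in `unsafe`,
    from each state, for row-stochastic transition matrix P."""
    n = P.shape[0]
    bad = list(unsafe)
    x = np.zeros(n)
    x[bad] = 1.0
    for _ in range(max_iter):
        x_new = P @ x
        x_new[bad] = 1.0  # unsafe states count as already reached
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

# Toy policy-induced chain: state 0 = start, 1 = intermediate,
# 2 = unsafe (absorbing), 3 = safe goal (absorbing).
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.4, 0.1, 0.3],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

risk = reach_probability(P, unsafe={2})
BOUND = 0.4  # stated risk bound (invented for the example)
print(f"risk from start state: {risk[0]:.4f}, bound satisfied: {risk[0] <= BOUND}")
```

Here the risk from the start state works out to 0.375, which satisfies the (invented) bound of 0.4; a repair step would kick in only when the computed probability exceeds the bound, e.g., by shifting probability mass away from actions that lead toward state 2.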

Keywords: Robust AI, Reinforcement learning, Probabilistic model checking

Paper link: https://doi.org/10.1007/s10489-017-0999-8