Build complementary models on human feedback for simulation to the real world

Authors:

Highlights:

Abstract

Using simulators is a cost-effective way to meet human needs. Nevertheless, inevitable errors arising from the gap between simulation and the real world can cause great losses and must be taken seriously. This paper focuses on one cause of the gap, incomplete state representation in simulation, and proposes a supervised learning approach that uses human feedback to correct human-unacceptable policies computed by simulators. The approach first detects the relevant blind spots with classifiers trained on aggregated noisy human feedback. It then corrects the human-unacceptable policies through a complementary model built on linear function approximation (LFA) and FRU-SADPP, a policy iteration algorithm that uses radial basis functions (RBFs). We evaluate our approach on two simulated domains and show that it produces more accurate policies than two baselines, across three typical kinds of human suboptimality and human error and three types of human feedback. Experiments also demonstrate the scalability of our approach.
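The abstract names linear function approximation (LFA) over radial basis functions (RBFs) as the building blocks of the complementary model, but does not spell out FRU-SADPP itself. Below is a minimal, hypothetical sketch of those general building blocks only: fitting an RBF-based linear value approximator to noisy targets. The function names, the 1-D toy states, and the least-squares fit are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: linear function approximation (LFA) over radial basis
# function (RBF) features for a state-value function. This is NOT the paper's
# FRU-SADPP algorithm; all names and parameters here are illustrative.
import numpy as np

def rbf_features(states, centers, width):
    """Map 1-D states to RBF features: phi_i(s) = exp(-(s - c_i)^2 / (2 * width^2))."""
    diff = states[:, None] - centers[None, :]
    return np.exp(-0.5 * (diff / width) ** 2)

def fit_value_function(states, targets, centers, width, reg=1e-6):
    """Ridge-regularized least-squares fit of weights w so that phi(s) @ w ~ targets."""
    phi = rbf_features(states, centers, width)
    a = phi.T @ phi + reg * np.eye(phi.shape[1])
    b = phi.T @ targets
    return np.linalg.solve(a, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    states = rng.uniform(0.0, 1.0, size=200)
    true_values = np.sin(2 * np.pi * states)            # stand-in for true returns
    targets = true_values + 0.1 * rng.normal(size=200)  # noisy feedback-style targets
    centers = np.linspace(0.0, 1.0, 10)
    w = fit_value_function(states, targets, centers, width=0.1)
    approx = rbf_features(states, centers, 0.1) @ w
    print("mean absolute error:", np.mean(np.abs(approx - true_values)))
```

In the paper's setting, the targets would come from aggregated human feedback on blind-spot states rather than from a synthetic function as above.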

Keywords: Safe reinforcement learning, Human-in-the-loop reinforcement learning, Markov decision processes, Supervised learning

Article history: Received 25 November 2020, Revised 2 February 2021, Accepted 5 February 2021, Available online 9 February 2021, Version of Record 15 February 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106854