Counterfactual state explanations for reinforcement learning agents via generative deep learning

Highlights:

• Deep learning models can generate counterfactual states for game-playing agents.

• Generated counterfactual states are judged sufficiently realistic by human participants.

• Counterfactual states can be used for identifying a flawed agent.

• Generated counterfactual states outperform nearest neighbors in flaw detection.

Abstract

Counterfactual explanations, which deal with "why not?" scenarios, can provide insightful explanations of an AI agent's behavior [Miller [38]]. In this work, we focus on generating counterfactual explanations for deep reinforcement learning (RL) agents that operate in visual input environments such as Atari. We introduce counterfactual state explanations, a novel example-based approach to counterfactual explanations based on generative deep learning. Specifically, a counterfactual state illustrates the minimal change needed to an Atari game image for the agent to choose a different action. We also evaluate the effectiveness of counterfactual states with human participants who are not machine learning experts. Our first user study investigates whether humans can discern counterfactual states produced by the actual game from those produced by a generative deep learning approach. Our second user study investigates whether counterfactual state explanations can help non-expert participants identify a flawed agent; we compare against a baseline nearest neighbor explanation that uses images from the actual game. Our results indicate that counterfactual state explanations have sufficient fidelity to the actual game images to enable non-experts to identify a flawed RL agent more effectively than with the nearest neighbor baseline or with no explanation at all.
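To make the idea concrete, the sketch below illustrates one common way a counterfactual state can be searched for: optimize a latent code of a generative model so that the decoded image stays close to the original game frame while a trained policy network assigns it a different action. This is a simplified, hedged illustration only; the placeholder networks (TinyPolicy, TinyDecoder), sizes, and loss weights are hypothetical and are not the architecture or training procedure described in the paper.

```python
# Minimal sketch of latent-space counterfactual search, assuming a pretrained
# policy (image -> action logits) and a pretrained generative decoder
# (latent -> image). The tiny networks here are untrained placeholders that
# only make the example runnable; they are NOT the paper's models.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS, LATENT_DIM = 6, 32  # hypothetical sizes


class TinyPolicy(nn.Module):
    """Stand-in for a trained Atari policy: 84x84 grayscale frame -> logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=8, stride=4), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(N_ACTIONS))

    def forward(self, x):
        return self.net(x)


class TinyDecoder(nn.Module):
    """Stand-in for a trained generative decoder: latent code -> frame."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 84 * 84)

    def forward(self, z):
        return torch.sigmoid(self.fc(z)).view(-1, 1, 84, 84)


def counterfactual_state(policy, decoder, z_init, x, target_action,
                         dist_weight=1.0, steps=200, lr=0.05):
    """Search for a latent code whose decoded image (i) stays close to the
    original state x and (ii) makes the policy choose target_action."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_action])
    for _ in range(steps):
        x_cf = decoder(z)
        action_loss = F.cross_entropy(policy(x_cf), target)  # flip the action
        dist_loss = F.mse_loss(x_cf, x)                       # minimal change
        loss = action_loss + dist_weight * dist_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()


if __name__ == "__main__":
    policy, decoder = TinyPolicy(), TinyDecoder()
    x = torch.rand(1, 1, 84, 84)         # placeholder game frame
    z0 = torch.zeros(1, LATENT_DIM)      # placeholder latent code for x
    x_cf = counterfactual_state(policy, decoder, z0, x, target_action=3)
    print(x_cf.shape)                    # torch.Size([1, 1, 84, 84])
```

The trade-off between the two loss terms is what makes the result a counterfactual rather than an arbitrary image: the action term pushes the decoded frame across the policy's decision boundary, while the distance term keeps it visually close to the original state.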

Keywords: Deep learning, Reinforcement learning, Explainable AI, Interpretable AI

Article history: Received 20 March 2020, Revised 27 October 2020, Accepted 20 January 2021, Available online 27 January 2021, Version of Record 1 February 2021.

DOI: https://doi.org/10.1016/j.artint.2021.103455