“That's (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems

Authors:

Abstract

Research in the social sciences has shown that expectations are an important factor in explanations as used between humans: rather than explaining the cause of an event per se, the explainer will often address another event that did not occur but that the explainee might have expected. For AI-powered systems, this finding suggests that explanation-generating systems may need to identify such end user expectations. In general, this is a challenging task, not least because users often keep their expectations implicit; there is thus a need to investigate how important such an ability actually is.

Keywords: Expectations, Explanations, Factual, Counterfactual, Contrastive, Explainable AI, Mental models, Machine behaviour, Human-AI interaction

Article history: Received 30 April 2020, Revised 14 April 2021, Accepted 15 April 2021, Available online 20 April 2021, Version of Record 28 April 2021.

DOI: https://doi.org/10.1016/j.artint.2021.103507