Explanation in AI and law: Past, present and future

Authors:

Abstract

Explanation has been a central feature of AI systems for legal reasoning since their inception. Recently, the topic of explanation of decisions has taken on a new urgency throughout AI in general, with the increasing deployment of AI tools and the need for lay users to be able to place trust in the decisions that these support tools recommend. This paper provides a comprehensive review of the variety of techniques for explanation that have been developed in AI and Law. We summarise the early contributions and how these have since developed. We describe a number of notable current methods for automated explanation of legal reasoning, and we also highlight gaps that must be addressed by future systems to ensure that accurate, trustworthy, unbiased decision support can be provided to legal professionals. We believe that insights from AI and Law, where explanation has long been a concern, may provide useful pointers for the future development of explainable AI.

Keywords: Explainable AI, AI and law, Computational models of argument, Case-based reasoning

Article history: Received 28 February 2020, Revised 3 September 2020, Accepted 12 September 2020, Available online 16 September 2020, Version of Record 23 September 2020.

DOI: https://doi.org/10.1016/j.artint.2020.103387