Explaining Deep Learning using examples: Optimal feature weighting methods for twin systems using post-hoc, explanation-by-example in XAI

Authors:

Highlights:

Abstract:

In this paper, the twin-systems approach is reviewed, implemented, and competitively tested as a post-hoc explanation-by-example solution to the eXplainable Artificial Intelligence (XAI) problem. In twin-systems, an opaque artificial neural network (ANN) is explained by “twinning” it with a more interpretable case-based reasoning (CBR) system, by mapping the feature weights from the former to the latter. Extensive comparative tests are performed, over four experiments, to determine the optimal feature-weighting method for such twin-systems. Twin-systems for traditional multilayer perceptron (MLP) networks (MLP–CBR twins), convolutional neural networks (CNNs; CNN–CBR twins), and transformers for NLP (BERT–CBR twins) are examined. In addition, Feature Activation Maps (FAMs) are explored to enhance explainability by providing an additional layer of explanatory insight. The wider implications of this research for XAI are discussed, and a code library is provided to ease replication.
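For concreteness, the following is a minimal sketch of the twin-systems pipeline in Python, assuming a simple finite-difference sensitivity measure as the feature-weighting step. The paper compares several weighting methods; the functions `feature_weights` and `explain_by_example` and the choice of weighting here are illustrative placeholders, not the paper's specific method.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

# Train an opaque MLP (the "black box" to be explained).
X, y = load_iris(return_X_y=True)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def feature_weights(model, x, eps=1e-4):
    """Illustrative per-query feature weights: finite-difference
    sensitivity of the predicted-class probability to each input
    feature (a stand-in for the paper's feature-weighting methods)."""
    pred = model.predict([x])[0]
    base = model.predict_proba([x])[0, pred]
    w = np.zeros(len(x))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        w[i] = (model.predict_proba([xp])[0, pred] - base) / eps
    return np.abs(w)  # magnitude of influence

def explain_by_example(model, X_train, x_query, k=3):
    """Twin CBR step: weighted k-NN retrieval of the nearest training
    cases, using the network-derived weights as the distance metric."""
    w = feature_weights(model, x_query)
    d = np.sqrt(((X_train - x_query) ** 2 * w).sum(axis=1))
    return np.argsort(d)[:k]  # indices of explanatory cases

idx = explain_by_example(mlp, X, X[0])
print("Prediction:", mlp.predict([X[0]])[0], "explained by cases:", idx)
```

Because the query above is drawn from the training set, its nearest retrieved case is itself; in practice the query would be an unseen test instance, and the retrieved training cases serve as the post-hoc explanation-by-example.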

Keywords: Explainable AI, Explanation-by-example, Artificial neural networks, Case-based reasoning, Deep Learning, Computer vision, Natural Language Processing

Article history: Received 3 June 2021, Revised 19 September 2021, Accepted 20 September 2021, Available online 24 September 2021, Version of Record 6 October 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107530