An explainable ensemble feedforward method with Gaussian convolutional filter

Authors:

Highlights:

Abstract

Emerging deep learning technologies are driving a new wave of artificial intelligence, but in some critical applications, such as medical image processing, deep learning is inapplicable because it lacks the interpretability that such applications require. This work develops an explainable feedforward model with Gaussian kernels, in which a Gaussian mixture model is leveraged to extract representative features. To keep the error within an allowable range, we derive a lower bound on the number of samples via the Chebyshev inequality. In the training process, we discuss both deterministic and stochastic feature representations, and investigate their performance as well as that of the ensemble model. Additionally, we use Shapley additive explanations to analyze the experimental results. The proposed method is interpretable, so it can replace a deep neural network by working with shallow machine learning techniques, such as the Support Vector Machine and Random Forest. We compare our method with baseline methods on the Brain Tumor and Mitosis datasets. The experimental results show that our method outperforms RAM (Recurrent Attention Model), VGG19 (Visual Geometry Group 19), LeNet-5, and the Explainable Prediction Framework while offering strong interpretability.
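The pipeline the abstract describes — extract representative features with a Gaussian mixture model, then feed them to a shallow, interpretable classifier such as an SVM — can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, the choice of 4 mixture components, and the use of posterior responsibilities (`predict_proba`) as the feature representation are all illustrative assumptions, shown here only to make the GMM-features-plus-shallow-classifier idea concrete.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

# Synthetic stand-in for flattened image-patch vectors (NOT the paper's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary labels

# Step 1: fit a Gaussian mixture model on the raw features.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

# Step 2: use the soft component responsibilities as a compact,
# interpretable feature representation (an assumption of this sketch).
feats = gmm.predict_proba(X)  # shape (200, 4), rows sum to 1

# Step 3: train a shallow classifier (SVM) on the GMM features.
clf = SVC().fit(feats, y)
print("train accuracy:", clf.score(feats, y))
```

Because the downstream model is shallow and the features are mixture-component responsibilities, each prediction can be attributed to a small number of named components, which is what makes attribution tools such as Shapley additive explanations practical here.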

Keywords: Explainable artificial intelligence, Medical image processing, Shapley additive explanation

Article history: Received 6 December 2020, Revised 19 March 2021, Accepted 28 April 2021, Available online 30 April 2021, Version of Record 11 May 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107103