Does explainable machine learning uncover the black box in vision applications?

Authors:

Highlights:

Abstract

Machine learning (ML) in general, and deep learning (DL) in particular, has become an extremely popular tool in several vision applications (such as object detection, super-resolution, segmentation, and object tracking). Almost in parallel, the issue of explainability in ML (i.e., the ability to explain how a trained ML model arrived at its decision) in vision has also received fairly significant attention from various quarters. However, we argue that the current philosophy behind explainable ML suffers from certain limitations, and the resulting explanations may not meaningfully uncover black box ML models. To elaborate on our assertion, we first raise a few fundamental questions that have not been adequately discussed in the corresponding literature. We also provide perspectives on how explainability in ML can benefit from relying on more rigorous principles in the related areas.
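To make concrete what "explaining how a trained ML model arrived at its decision" typically means in vision, the sketch below implements occlusion sensitivity, a common post-hoc explanation technique (not the paper's own method): patches of the input are masked one at a time, and the drop in the model's score indicates how much each region contributed. The `toy_score` model here is a hypothetical stand-in for a real black-box classifier.

```python
import numpy as np

def toy_score(image):
    # Hypothetical black-box "model": scores the brightness of the
    # central region of the image (stand-in for a real classifier).
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()

def occlusion_map(image, model, patch=4):
    # Occlusion sensitivity: mask one patch at a time and record how
    # much the model's score drops; larger drops mean the patch
    # mattered more to the decision.
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask this patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

image = np.ones((16, 16))
heat = occlusion_map(image, toy_score, patch=4)
print(heat.round(3))  # central patches show the largest score drops
```

Such heatmaps are exactly the kind of "explanation" the abstract questions: they localize influential input regions, but whether that meaningfully uncovers the model's internal reasoning is a separate matter.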

Keywords: Explainable machine learning, Deep learning, Vision, Signal processing

Article history: Received 6 December 2021, Accepted 7 December 2021, Available online 13 December 2021, Version of Record 18 December 2021.

DOI: https://doi.org/10.1016/j.imavis.2021.104353