Pruning by explaining: A novel criterion for deep neural network pruning
Authors:
Highlights:
• A novel criterion for efficiently pruning convolutional neural networks, inspired by explaining nonlinear classification decisions in terms of input variables, is introduced.
• The method is inspired by neural network interpretability: Layer-wise Relevance Propagation.
• This is the first report to link the two previously disconnected lines of research on interpretability and model compression.
• The method is tested on two popular convolutional neural network families and a broad range of benchmark datasets under two different scenarios.
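The highlights describe a criterion that scores network units by the relevance they receive from Layer-wise Relevance Propagation and prunes the least relevant ones. A minimal numpy sketch of that idea, on a hypothetical two-layer ReLU toy network (not the paper's implementation; the LRP epsilon-rule and the single-example scoring are illustrative assumptions):

```python
import numpy as np

np.random.seed(0)
# Toy two-layer ReLU network: x -> h = relu(W1 @ x) -> y = W2 @ h
W1 = np.random.randn(4, 3)   # 4 hidden units, 3 inputs
W2 = np.random.randn(1, 4)   # 1 output

x = np.array([1.0, -0.5, 2.0])
h = np.maximum(0.0, W1 @ x)
y = W2 @ h

# LRP epsilon-rule: redistribute the output relevance to hidden units in
# proportion to their contribution z_j = W2_j * h_j (eps stabilizes division).
eps = 1e-6
z = W2 * h                                                  # shape (1, 4)
R_hidden = (z / (z.sum(axis=1, keepdims=True) + eps)) * y   # relevance per unit

# Pruning criterion: remove the unit with the least accumulated (absolute)
# relevance -- here accumulated over a single example for simplicity.
scores = np.abs(R_hidden).sum(axis=0)
prune_idx = int(np.argmin(scores))
```

By conservation, the hidden relevances sum (up to eps) to the output score, so low-relevance units are those contributing least to the decision; in practice scores would be accumulated over a reference dataset before pruning.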
Keywords: Pruning, Layer-wise relevance propagation (LRP), Convolutional neural network (CNN), Interpretation of models, Explainable AI (XAI)
Article history: Received 18 December 2019, Revised 28 January 2021, Accepted 8 February 2021, Available online 22 February 2021, Version of Record 3 March 2021.
DOI: https://doi.org/10.1016/j.patcog.2021.107899