Interpretable Relative Squeezing bottleneck design for compact convolutional neural network models

Authors:

Highlights:

Abstract

Convolutional neural networks (CNNs) are widely used for image recognition tasks. However, many large models are infeasible on mobile devices because of limited computing and memory resources. In this paper, the feature maps of DenseNet and CondenseNet are visualized, revealing that some feature channels remain in a locked state while others share similar distributions, indicating that the networks can be compressed further. Motivated by this observation, a novel architecture, RSNet, is introduced to improve the computational efficiency of CNNs. This paper proposes the Relative-Squeezing (RS) bottleneck design, in which the output is a weighted percentage of the input channels. In addition, RSNet incorporates multiple compression layers and learned group convolutions (LGCs). By eliminating superfluous feature maps, the relative-squeezing and compression layers transmit only the most significant features to the next layer, requiring fewer parameters and saving considerable computation. The proposed model is evaluated on three benchmark datasets: CIFAR-10, CIFAR-100 and ImageNet. Experimental results show that RSNet achieves better performance with fewer parameters and FLOPs than state-of-the-art baselines, including CondenseNet, MobileNet and ShuffleNet.
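The abstract's core idea, an output channel formed as a "weighted percentage of input channels," can be sketched as a channel-mixing step whose mixing weights are normalized to sum to one per output channel. The sketch below is illustrative only, assuming a softmax normalization and a NumPy 1x1-convolution-style mixing; the names `relative_squeeze` and the exact normalization are not taken from the paper.

```python
import numpy as np

def relative_squeeze(x, weights):
    """Mix C_in input channels into C_out output channels, where each
    output channel is a convex (percentage-like) combination of inputs.

    x       : feature maps of shape (C_in, H, W)
    weights : unnormalized mixing logits of shape (C_out, C_in)
    """
    # Softmax over the input-channel axis so each output channel's
    # weights are nonnegative and sum to 1 (a "weighted percentage").
    w = np.exp(weights - weights.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    # Contract the input-channel axis: (C_out, C_in) x (C_in, H, W)
    # -> (C_out, H, W), i.e. a 1x1-convolution-style channel squeeze.
    return np.tensordot(w, x, axes=([1], [0]))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))   # 8 input channels, 4x4 spatial
logits = rng.standard_normal((4, 8)) # squeeze 8 channels down to 4
y = relative_squeeze(x, logits)
print(y.shape)  # (4, 4, 4)
```

In a trained network the logits would be learned parameters; here they are random, only to show the shape reduction from 8 channels to 4.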

Keywords: Image recognition, Compact CNN, Relative-Squeezing bottleneck, Learned group convolutions

Article history: Received 16 June 2019, Accepted 24 June 2019, Available online 7 July 2019, Version of Record 23 August 2019.

DOI: https://doi.org/10.1016/j.imavis.2019.06.006