Macro unit-based convolutional neural network for very light-weight deep learning

Authors:

Highlights:

Abstract

When a deep neural network is implemented in an embedded system or an SoC for mobile devices, its large parameter size can place a significant burden on the internal memory design. In this paper, we propose a new deep neural network that reduces both computation and the number of model parameters while maintaining reasonable performance. The proposed network is configured as follows: First, we present a macro unit (MU) that reduces heavy computation while still learning sufficient feature maps. Second, we employ the asymmetric convolution of the well-known Inception network to manipulate feature maps within the MU even more efficiently. Third, all the feature maps produced by the MUs of each layer are concatenated, and this grouped feature map is then distributed to all the MUs of the next layer to transfer richer information. Experimental results show that the proposed network achieves about 10% higher performance than DenseNet-BC at an extremely small parameter size on CIFAR-100. The proposed network also has very few learnable parameters and requires fewer floating-point operations (FLOPs) than other networks optimized for mobile devices, such as MobileNet V2.
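The parameter savings from the Inception-style asymmetric convolution mentioned in the abstract can be illustrated with simple counting: a k×k convolution is factorized into a 1×k convolution followed by a k×1 convolution. The sketch below is only illustrative arithmetic under assumed channel sizes, not the authors' exact MU configuration (the intermediate channel count is assumed equal to the input channel count here):

```python
def conv_params(k_h, k_w, c_in, c_out):
    # Parameter count of a single k_h x k_w convolution (bias terms ignored).
    return k_h * k_w * c_in * c_out

def asymmetric_params(k, c_in, c_out):
    # Factorize a k x k convolution into 1 x k followed by k x 1.
    # Assumption: the intermediate feature map keeps c_in channels.
    return conv_params(1, k, c_in, c_in) + conv_params(k, 1, c_in, c_out)

# Example: a 3x3 convolution with 64 input and 64 output channels.
full = conv_params(3, 3, 64, 64)         # 36864 parameters
factored = asymmetric_params(3, 64, 64)  # 24576 parameters
print(full, factored, factored / full)
```

With equal input, intermediate, and output channel counts, the factored form costs 2/k of the full convolution's parameters, so the savings grow with kernel size.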

Keywords: Deep neural networks, Light-weight deep learning, Macro-unit

Article history: Received 9 February 2019, Accepted 27 February 2019, Available online 15 March 2019, Version of Record 20 May 2019.

DOI: https://doi.org/10.1016/j.imavis.2019.02.008