Lightweight multi-scale residual networks with attention for image super-resolution

Authors:

Highlights:

Abstract

In recent years, deep convolutional neural networks (CNNs) have made significant progress on single-image super-resolution (SISR). Despite their high performance, many of these CNNs are of limited practical use owing to their heavy computational requirements. This paper proposes a lightweight network for SISR, termed the attention-based multi-scale residual network (AMSRN). Specifically, a residual atrous spatial pyramid pooling (ASPP) block and a spatial and channel-wise attention residual (SCAR) block are stacked alternately to form the backbone of the network. The residual ASPP block applies parallel dilated convolutions with different dilation rates to capture multi-scale features. The SCAR block augments a double-layer convolutional residual block with channel attention (CA) and spatial attention (SA) mechanisms. In addition, group convolution is introduced in the SCAR block to further reduce the number of parameters while preventing over-fitting. Moreover, a multi-scale feature attention module is designed to provide instructive multi-scale attention information for shallow features. In particular, we propose a novel upscale module that adopts dual paths to upscale the features, jointly using sub-pixel convolution and nearest-interpolation layers instead of a deconvolution or sub-pixel convolution layer alone. Experimental results demonstrate that our method achieves performance comparable to state-of-the-art methods, both quantitatively and qualitatively.
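The abstract describes the residual ASPP block as parallel dilated convolutions with different dilation rates whose responses are fused and added back to the input. The paper itself does not include code; the following is an illustrative 1-D NumPy sketch of that idea (fixed hand-written kernels and simple summation as the fusion step are assumptions for clarity, not the authors' implementation, which uses learned 2-D convolutions):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """1-D dilated (atrous) convolution with zero padding ('same' output length).

    A dilation rate r samples the input with gaps of r-1 positions, so the
    same small kernel covers a receptive field of rate * (k - 1) + 1.
    """
    k = len(kernel)
    pad = rate * (k - 1) // 2          # symmetric for odd-sized kernels
    xp = np.pad(x, pad)
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * rate]
    return out

# Parallel branches at different dilation rates, fused by summation,
# followed by a residual connection -- the overall shape of a residual
# ASPP-style block.
x = np.arange(8, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])     # hypothetical fixed kernel
branches = [dilated_conv1d(x, kernel, r) for r in (1, 2, 4)]
fused = sum(branches)                  # multi-scale fusion (sketch)
out = x + fused                        # residual connection
```

Each branch sees a different context size with the same parameter count, which is why ASPP-style blocks are attractive for lightweight multi-scale designs.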
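The SCAR block combines channel attention (CA) and spatial attention (SA) with a convolutional residual block. As a minimal sketch of the two gating mechanisms alone, assuming sigmoid gates driven directly by pooled statistics (the learned excitation layers of the actual block are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Scale each channel of x (C, H, W) by a sigmoid of its global average.

    Sketch of CA: squeeze to a per-channel statistic, gate, and rescale.
    The learned fully connected excitation layers are omitted here.
    """
    s = x.mean(axis=(1, 2))            # squeeze: (C,)
    return x * sigmoid(s)[:, None, None]

def spatial_attention(x):
    """Scale each spatial position of x (C, H, W) by a sigmoid of the
    channel-wise mean at that position (learned convolution omitted)."""
    s = x.mean(axis=0)                 # (H, W)
    return x * sigmoid(s)[None, :, :]

feat = np.ones((2, 3, 3))
gated = spatial_attention(channel_attention(feat))
```

Both gates are cheap elementwise rescalings, which is consistent with the paper's lightweight design goal.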
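The proposed upscale module runs two parallel paths, one based on sub-pixel convolution (pixel shuffle) and one on nearest-neighbour interpolation, rather than a single deconvolution or sub-pixel layer. A NumPy sketch of the two rearrangement operations and their fusion by summation (the convolutions that precede each path in the actual module, and the exact fusion rule, are assumptions here):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement: (C*r*r, H, W) -> (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)     # interleave the r*r sub-grids
    return x.reshape(c, h * r, w * r)

def nearest_upscale(x, r):
    """Nearest-neighbour interpolation: (C, H, W) -> (C, H*r, W*r)."""
    return x.repeat(r, axis=1).repeat(r, axis=2)

r = 2
feat_subpixel = np.ones((4, 2, 2))     # C*r*r channels for the sub-pixel path
feat_nearest = np.ones((1, 2, 2))      # C channels for the interpolation path
upscaled = pixel_shuffle(feat_subpixel, r) + nearest_upscale(feat_nearest, r)
```

The interpolation path carries a smooth low-frequency estimate at no parameter cost, while the sub-pixel path supplies learned high-frequency detail, which plausibly motivates the dual-path design.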

Keywords: Super-resolution, Deep convolutional neural networks, Dilated convolutions, Attention mechanism, Residual networks

Article history: Received 26 November 2019; Revised 7 May 2020; Accepted 1 June 2020; Available online 10 June 2020; Version of Record 15 June 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.106103