LCRCA: image super-resolution using lightweight concatenated residual channel attention networks

Authors: Changmeng Peng, Pei Shu, Xiaoyang Huang, Zhizhong Fu, Xiaofeng Li

Abstract

Deep neural network-based super-resolution methods can generate images closer to the original high-resolution images than non-learning-based ones, but their large and sometimes redundant network structures and parameter counts make them impractical in many settings. To obtain high-quality super-resolution results in computation-resource-limited scenarios, we propose a lightweight skip-concatenated residual channel attention network, LCRCA, for image super-resolution. Specifically, we design a light but efficient deep residual block (DRB) that generates more precise residual information by using more convolution layers under the same computation budget. To enhance the feature maps of the DRB, we propose an improved channel attention mechanism, statistical channel attention (SCA), which incorporates channel statistics. In addition, in place of the commonly used skip connections, we propose skip concatenation (SC) to build information flows between feature maps of different layers. Finally, DRB, SCA, and SC are efficiently combined to form the proposed network, LCRCA. Experiments on four test sets show that our method gains up to 3.2 dB and 0.12 dB over bicubic interpolation and the representative lightweight method FERN, respectively, and recovers image details more accurately than the compared algorithms. Code is available at https://github.com/pengcm/LCRCA.
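To make the two attention/connection ideas in the abstract concrete, here is a minimal NumPy sketch. It is an illustration only, not the paper's implementation: the choice of per-channel mean and standard deviation as the "channel statistics", the two-layer bottleneck with sigmoid gating, and all weight names (`w1`, `b1`, `w2`, `b2`) are assumptions; the actual SCA and SC modules in LCRCA may differ.

```python
import numpy as np

def statistical_channel_attention(x, w1, b1, w2, b2):
    """Illustrative channel attention driven by channel statistics.

    x: feature maps of shape (C, H, W).
    Hypothetical design: per-channel mean and std are combined,
    squeezed through a small ReLU bottleneck, and turned into
    per-channel sigmoid gates that rescale the feature maps.
    """
    mean = x.mean(axis=(1, 2))                 # per-channel mean, shape (C,)
    std = x.std(axis=(1, 2))                   # per-channel std, shape (C,)
    stats = mean + std                         # combined channel statistics
    hidden = np.maximum(0.0, w1 @ stats + b1)  # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))  # sigmoid, shape (C,)
    return x * gates[:, None, None]            # rescale each channel

def skip_concatenation(features):
    """Skip concatenation: stack feature maps from different layers
    along the channel axis, instead of element-wise addition as in
    an ordinary skip connection."""
    return np.concatenate(features, axis=0)
```

Note the structural difference: an additive skip connection requires matching channel counts and merges information, whereas skip concatenation preserves the feature maps of each layer and lets a later convolution learn how to fuse them.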

Keywords: Super-resolution, Deep learning, Residual block, Statistical channel attention, Skip concatenation

Paper link: https://doi.org/10.1007/s10489-021-02891-5