LKASR: Large kernel attention for lightweight image super-resolution


Abstract

Image super-resolution (SR) aims to recover a high-resolution image from a given low-resolution image. While most state-of-the-art methods only consider fixed small convolution kernels (e.g., 1 × 1, 3 × 3) to extract image features, few efforts have been devoted to large convolution kernels for SR. In this paper, we propose LKASR, a novel lightweight baseline model based on large kernel attention (LKA). LKASR consists of three parts: shallow feature extraction, deep feature extraction, and high-quality image reconstruction. In particular, the deep feature extraction module consists of multiple cascaded visual attention modules (VAM), each of which comprises a 1 × 1 convolution, a large kernel attention block (acting as a Transformer) and a feature refinement module (FRM, acting as a CNN). Specifically, VAM adopts a lightweight architecture similar to the Swin Transformer to iteratively extract global and local image features, which greatly improves the efficiency of the SR method (0.049 s on the Urban100 dataset). For different scales (×2, ×3, ×4), extensive experiments on benchmark datasets demonstrate that LKASR outperforms most lightweight SR methods by up to dB, while the parameter count and FLOPs remain lightweight.
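As a minimal sketch of the core mechanism: the decomposition below follows the standard LKA design from the Visual Attention Network line of work that LKASR builds on (depth-wise conv + depth-wise dilated conv + point-wise conv, used as a multiplicative attention map). The specific kernel sizes, the channel width, and the plain 3 × 3 conv + GELU stand-in for the FRM are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Large kernel attention: a large receptive field decomposed into a
    depth-wise conv, a depth-wise dilated conv, and a point-wise conv;
    the result gates the input by element-wise multiplication."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)   # local context
        self.dwd = nn.Conv2d(dim, dim, 7, padding=9, dilation=3,
                             groups=dim)                          # long-range context
        self.pw = nn.Conv2d(dim, dim, 1)                          # channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dwd(self.dw(x)))
        return x * attn  # attention map modulates the identity branch

class VAM(nn.Module):
    """Visual attention module sketch: 1x1 conv -> LKA (global,
    Transformer-like) -> feature refinement (local, CNN-like).
    The FRM here is a hypothetical 3x3 conv + GELU stand-in."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, 1)
        self.lka = LKA(dim)
        self.frm = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.frm(self.lka(self.proj(x)))  # residual over the block

x = torch.randn(1, 48, 64, 64)      # assumed feature map: 48 channels
assert VAM(48)(x).shape == x.shape  # spatial size and channels preserved
```

Stacking several such VAMs between a shallow feature extractor and an upsampling head (e.g., a sub-pixel/pixel-shuffle layer, a common choice in lightweight SR) would complete the three-part pipeline the abstract describes.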

Keywords: Image super-resolution, Large kernel attention, Feature refinement

Article history: Received 13 April 2022, Revised 17 June 2022, Accepted 2 July 2022, Available online 8 July 2022, Version of Record 13 July 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109376