Attentive frequency learning network for super-resolution

Authors: Fenghai Li, Qiang Cai, Haisheng Li, Yifan Chen, Jian Cao, Shanshan Li

Abstract

Benefiting from their strong capability to capture long-range dependencies, a series of self-attention based single image super-resolution (SISR) methods have achieved promising performance. However, existing self-attention mechanisms generally incur high computational costs in both training and inference. In this study, we propose an attentive frequency learning network (AFLN) for single image super-resolution. Our AFLN greatly reduces the computational cost of the self-attention mechanism while still capturing long-range dependencies in SISR tasks. Specifically, AFLN consists of a series of attentive frequency learning blocks (AFLBs). In each AFLB, we first integrate hierarchical features through residual dense connections and decompose them into low- and high-frequency sub-bands, each with half the height and width of the original features, via the discrete wavelet transform (DWT). We then apply self-attention to explore long-range dependencies in the low- and high-frequency feature domains separately. In this way, self-attention is computed on feature maps a quarter the size of the original input, greatly reducing computational costs. In addition, applying attention separately to the low- and high-frequency domains effectively preserves detailed information. Finally, we apply the inverse discrete wavelet transform (IDWT) to reconstruct the attentive features. Extensive experiments on publicly available datasets demonstrate the efficiency and effectiveness of our AFLN against state-of-the-art methods.
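
To make the DWT-attention-IDWT pipeline concrete, below is a minimal PyTorch sketch of one such block under stated assumptions: a Haar DWT/IDWT implemented by pixel slicing, a SAGAN-style spatial self-attention standing in for the paper's attention module, and a single convolution standing in for the residual dense fusion. All names here (dwt_haar, SpatialSelfAttention, AFLB, etc.) are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dwt_haar(x):
    """One-level Haar DWT: (B, C, H, W) -> four sub-bands of shape (B, C, H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]  # even rows, even cols
    b = x[:, :, 0::2, 1::2]  # even rows, odd cols
    c = x[:, :, 1::2, 0::2]  # odd rows, even cols
    d = x[:, :, 1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2
    hl = (-a + b - c + d) / 2
    lh = (-a - b + c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, hl, lh, hh

def idwt_haar(ll, hl, lh, hh):
    """Inverse of dwt_haar: four (B, C, H/2, W/2) sub-bands -> (B, C, H, W)."""
    bsz, ch, h, w = ll.shape
    x = ll.new_zeros(bsz, ch, h * 2, w * 2)
    x[:, :, 0::2, 0::2] = (ll - hl - lh + hh) / 2
    x[:, :, 0::2, 1::2] = (ll + hl - lh - hh) / 2
    x[:, :, 1::2, 0::2] = (ll - hl + lh - hh) / 2
    x[:, :, 1::2, 1::2] = (ll + hl + lh + hh) / 2
    return x

class SpatialSelfAttention(nn.Module):
    """Self-attention over spatial positions (SAGAN-style), quadratic in H*W."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C/8)
        k = self.k(x).flatten(2)                   # (B, C/8, HW)
        v = self.v(x).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)        # (B, HW, HW) affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class AFLB(nn.Module):
    """Sketch of one block: fuse -> DWT -> per-band attention -> IDWT -> residual."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)  # stand-in for residual dense fusion
        self.low_attn = SpatialSelfAttention(channels)           # attention on the LL band
        self.high_attn = SpatialSelfAttention(3 * channels)      # joint attention on HL/LH/HH bands

    def forward(self, x):  # assumes even H and W
        feat = F.relu(self.fuse(x))
        ll, hl, lh, hh = dwt_haar(feat)
        low = self.low_attn(ll)
        high = self.high_attn(torch.cat([hl, lh, hh], dim=1))
        hl, lh, hh = high.chunk(3, dim=1)
        return idwt_haar(low, hl, lh, hh) + x

x = torch.randn(1, 64, 48, 48)
y = AFLB(64)(x)  # y.shape == x.shape
```

Because each sub-band has a quarter of the original pixels, the attention map above is 16x smaller than one computed on the full-resolution features, which illustrates the cost saving the abstract describes.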

Keywords: Super-resolution, Self-attention, Wavelet transform, Frequency domain

Paper link: https://doi.org/10.1007/s10489-021-02703-w