Two-stream deep sparse network for accurate and efficient image restoration

Authors:

Highlights:

Abstract

Deep convolutional neural networks (CNNs) have achieved great success in image restoration. However, previous methods ignore the complementarity between low-level and high-level features, leading to limited reconstruction quality. In this paper, we propose a two-stream sparse network (TSSN) that explicitly learns shallow and deep features to enforce their respective contributions to image restoration. The shallow stream learns shallow features (e.g., textures and edges), while the deep stream learns deep features (e.g., salient semantics). In each stream, a sparse residual block (SRB) is proposed to efficiently aggregate hierarchical features by constructing sparse connections among the layers within a local block. Spatial-wise and channel-wise attention are used to fuse the shallow and deep streams, recalibrating features through weight assignment in both the spatial and channel dimensions. A novel loss function, the Softmax-L1 loss, is proposed to increase the penalty on pixels with large L1 loss (i.e., hard pixels). TSSN is evaluated on three representative image restoration (IR) applications: single-image super-resolution, image denoising, and JPEG deblocking. Extensive experiments demonstrate that TSSN outperforms most state-of-the-art methods on benchmark datasets in both quantitative metrics and visual quality.
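The abstract only gives the intuition behind the Softmax-L1 loss (hard pixels with large L1 error should receive larger penalties). Below is a minimal PyTorch sketch of one way such a loss could be realized, assuming the per-pixel L1 errors are re-weighted by a softmax over their magnitudes; the `temperature` hyperparameter, the per-sample normalization scope, and the choice to detach the weights are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def softmax_l1_loss(pred: torch.Tensor, target: torch.Tensor,
                    temperature: float = 1.0) -> torch.Tensor:
    """Sketch of a Softmax-L1-style loss (assumed formulation).

    Per-pixel L1 errors are weighted by a softmax over their magnitudes,
    so pixels with larger errors ("hard pixels") contribute more to the loss.
    `temperature` is an assumed hyperparameter controlling how strongly the
    weighting concentrates on the hardest pixels.
    """
    # Per-pixel absolute error, shape (N, C, H, W)
    l1 = torch.abs(pred - target)

    # Softmax over all pixels of each sample; detached so the weights act as
    # fixed importance coefficients rather than contributing extra gradients.
    weights = F.softmax(l1.detach().flatten(1) / temperature, dim=1)
    weights = weights.view_as(l1)

    # Weighted sum of L1 errors per sample, averaged over the batch.
    return (weights * l1).flatten(1).sum(dim=1).mean()
```

With a low temperature the weighting approaches a max over pixel errors, while a high temperature recovers an ordinary (uniformly weighted) L1 loss; the paper's actual formulation may differ in these details.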

Keywords:

Review history: Received 31 December 2019, Revised 18 June 2020, Accepted 23 June 2020, Available online 29 June 2020, Version of Record 6 July 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.103029