Bi-branch network for dynamic scene deblurring

Authors:

Highlights:

Abstract

We present a bi-branch network for efficient dynamic scene deblurring. The challenge is to simultaneously reduce the computational cost and enhance the restoration accuracy. The proposed network conducts heterogeneous transformations on motion and RGB content in an encoder–decoder structure with skip connections. Computational efficiency is achieved by explicitly decomposing the intertwined mapping of spatiotemporal and cross-channel correlations into a motion branch, which processes grayscale frames with our proposed pseudo depth-wise separable 3D convolution, and a color branch, which applies depth-wise separable 2D convolution to RGB content. We refine the features captured by the two branches with a lightweight nonlocal fusion layer that adapts the double attention operation to aggregate the heterogeneous transformations and to generate, for each location in the feature space, an output based on its correlation with the entire video clip. Our nonlocal fusion maintains a low computational cost on high-resolution frames and operates in a patch-based manner during inference. The proposed architecture strikes the right balance between complexity and accuracy for dynamic scene deblurring. In comparison with state-of-the-art methods, the proposed network is compact and shows competitive restoration accuracy with a significant reduction in computational cost.
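The efficiency claim rests on factoring a dense convolution into a per-channel (depth-wise) spatial or spatiotemporal kernel followed by a 1×1 point-wise channel mixer. The sketch below is a hypothetical parameter-count comparison, not the authors' implementation (the exact form of their "pseudo" 3D decomposition is not specified here); it only illustrates why such a factorization shrinks the model.

```python
def conv2d_params(c_in, c_out, k):
    # standard 2D conv: one k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k

def dw_separable_2d_params(c_in, c_out, k):
    # depth-wise: one k x k kernel per input channel
    # point-wise: 1 x 1 conv that mixes channels
    return c_in * k * k + c_in * c_out

def conv3d_params(c_in, c_out, k):
    # standard 3D conv couples time, space, and channels jointly
    return c_in * c_out * k * k * k

def dw_separable_3d_params(c_in, c_out, k):
    # illustrative depth-wise separable 3D variant:
    # per-channel k x k x k spatiotemporal kernel + 1 x 1 x 1 channel mixer
    return c_in * k * k * k + c_in * c_out

c_in, c_out, k = 64, 64, 3
print(conv2d_params(c_in, c_out, k))        # 36864
print(dw_separable_2d_params(c_in, c_out, k))  # 4672 (~7.9x fewer)
print(conv3d_params(c_in, c_out, k))        # 110592
print(dw_separable_3d_params(c_in, c_out, k))  # 5824 (~19x fewer)
```

The gap widens in 3D, which is consistent with routing the spatiotemporal (motion) processing through a separable 3D branch while the color branch stays 2D.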

Keywords:

Review history: Received 18 December 2019, Revised 1 September 2020, Accepted 3 September 2020, Available online 5 September 2020, Version of Record 7 September 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.103100