Spatio-Temporal Learning for Video Deblurring based on Two-Stream Generative Adversarial Network

Authors: Liyao Song, Quan Wang, Haiwei Li, Jiancun Fan, Bingliang Hu

Abstract

Video deblurring has achieved excellent results with deep learning approaches, and capturing the dynamic spatio-temporal information in videos is crucial for deblurring. In this paper, we propose a two-stream DeblurGAN that combines a 3D stream with a 2D stream for deblurring. The 3D convolutions provide spatial and temporal invariance to restore the foreground of frames, while 2D convolutions suffice for the spatial features of the relatively consistent background. Our model thus exploits the representational power of the 3D stream for the foreground, which usually contains the most dynamic motion blur, and the simplicity of the 2D stream for the mostly static background, taking full advantage of both types of convolution. We then use the two-stream model as the generator and train it with adversarial learning. We evaluate our model on the VideoDeblurring and GOPRO datasets and compare it with existing methods. Our method outperforms the others in Peak Signal-to-Noise Ratio (PSNR) and performs particularly well on foreground regions with pronounced motion blur.
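The abstract outlines the two-stream design: a 3D-convolutional stream captures spatio-temporal motion cues for the dynamic foreground, a 2D-convolutional stream handles the spatial features of the near-static background, and the fused result serves as the generator in an adversarial setup. Below is a minimal PyTorch sketch of such a generator; the layer widths, kernel sizes, temporal pooling, and fusion by channel concatenation are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of a two-stream generator, assuming PyTorch.
# All hyperparameters (features=64, num_frames=5, fusion by concatenation)
# are hypothetical and chosen only to make the sketch runnable.
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Combines a 3D-conv stream (spatio-temporal, for the dynamic foreground)
    with a 2D-conv stream (spatial only, for the near-static background)."""

    def __init__(self, in_channels=3, features=64):
        super().__init__()
        # 3D stream: convolves across (T, H, W) to capture motion cues.
        self.stream3d = nn.Sequential(
            nn.Conv3d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Collapse the temporal axis so the two streams can be fused.
        self.temporal_pool = nn.AdaptiveAvgPool3d((1, None, None))
        # 2D stream: operates on the center frame only.
        self.stream2d = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fuse the two feature maps and predict a residual for the center frame.
        self.fuse = nn.Conv2d(2 * features, in_channels, kernel_size=3, padding=1)

    def forward(self, clip):
        # clip: (B, C, T, H, W) stack of consecutive blurry frames.
        center = clip[:, :, clip.size(2) // 2]           # (B, C, H, W)
        feat3d = self.temporal_pool(self.stream3d(clip)).squeeze(2)
        feat2d = self.stream2d(center)
        residual = self.fuse(torch.cat([feat3d, feat2d], dim=1))
        return center + residual                          # residual restoration

if __name__ == "__main__":
    gen = TwoStreamGenerator()
    blurry_clip = torch.randn(1, 3, 5, 128, 128)          # 5-frame clip
    print(gen(blurry_clip).shape)                         # torch.Size([1, 3, 128, 128])
```

In the paper's setting, a generator of this kind would then be paired with a discriminator and trained with an adversarial loss, in the spirit of DeblurGAN.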

Keywords: Two-stream, video deblurring, spatio-temporal, generative adversarial network


DOI: https://doi.org/10.1007/s11063-021-10520-y