A physics based generative adversarial network for single image defogging

Authors:

Highlights:

Abstract

In the field of single image defogging, there are two main approaches. One is image restoration based on atmospheric scattering theory, which recovers image texture details well. The other is image enhancement based on Retinex theory, which improves image contrast well. In practice, however, the former easily produces low-contrast images, while the latter is prone to losing texture details. How to effectively combine the advantages of both is therefore a key issue in the field. In this paper, we develop a physics-based generative adversarial network (PBGAN) to exploit the advantages of these two methods in parallel. To our knowledge, it is the first learning-based defogging framework that incorporates both methods and enables them to work together and complement each other. Our method has two generative adversarial modules: the Contrast Enhancement (CE) module and the Texture Restoration (TR) module. To improve contrast, the CE module trains its generator with a novel inversion-adversarial loss and a novel inversion-cycle consistency loss. To improve texture, the TR module uses two convolutional neural networks to learn the atmospheric light coefficient and the transmission map, respectively. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed approach outperforms several state-of-the-art methods both quantitatively and qualitatively.
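The restoration branch described above rests on the standard atmospheric scattering model, in which a foggy image I is formed from the clear scene radiance J, the transmission map t, and the global atmospheric light A as I = J·t + A·(1 − t). The sketch below is not from the paper; it is a minimal NumPy illustration of this widely used model and its inversion, with function names and the lower bound t0 chosen here for illustration.

```python
import numpy as np

def synthesize_fog(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t).
    J: clear image, t: transmission map in [0, 1], A: atmospheric light."""
    return J * t + A * (1.0 - t)

def restore(I, t, A, t0=0.1):
    """Invert the model to recover scene radiance: J = (I - A)/max(t, t0) + A.
    The floor t0 avoids division blow-up where transmission is near zero."""
    return (I - A) / np.maximum(t, t0) + A

# Round-trip sanity check on a toy image
J = np.full((2, 2, 3), 0.5)   # clear scene radiance
t = np.full((2, 2, 1), 0.6)   # per-pixel transmission
A = 0.9                        # scalar atmospheric light
I = synthesize_fog(J, t, A)
J_hat = restore(I, t, A)
```

In PBGAN's TR module, t and A are not assumed known as they are here; each is predicted by a dedicated convolutional network before this inversion is applied.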

Keywords: Single image defogging, Image restoration, Image enhancement, CycleGAN, Subjective evaluation

Article history: Received 3 October 2019, Accepted 6 October 2019, Available online 28 October 2019, Version of Record 7 November 2019.

Article URL: https://doi.org/10.1016/j.imavis.2019.10.001