CUFD: An encoder–decoder network for visible and infrared image fusion based on common and unique feature decomposition

Authors:

Highlights:

Abstract

In this paper, we propose a novel method for visible and infrared image fusion based on decomposing feature information, termed CUFD. It adopts two pairs of encoder–decoder networks to perform feature map extraction and feature decomposition, respectively. On the one hand, the shallow features of an image contain abundant detail information, while the deep features focus more on the thermal targets. Thus, we use one encoder–decoder network to extract both shallow and deep features. Unlike existing methods, both the shallow and the deep features are used for fusion and reconstruction, with different emphases. On the other hand, the infrared and visible features at the same layer exhibit both similarities and differences. Therefore, we train a second encoder–decoder network to decompose the feature maps into common and unique information according to these similarities and differences. We then apply different fusion rules to the common and unique parts as required. This operation better retains the significant feature information in the fusion results. Qualitative and quantitative experiments on the publicly available TNO and RoadScene datasets demonstrate the superiority of our CUFD over the state-of-the-art.
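In CUFD the decomposition into common and unique information is learned by the second encoder–decoder network. As a rough hand-crafted analogue for intuition only (not the authors' learned decomposition; the minimum/residual split and the per-part fusion rules below are illustrative assumptions), one can separate two feature maps into a shared component and per-modality residuals and fuse each part with a different rule:

```python
import numpy as np

def decompose_and_fuse(feat_ir, feat_vis):
    """Toy stand-in for CUFD's learned common/unique decomposition.

    The elementwise minimum plays the role of the "common" component;
    each map's residual above it plays the role of its "unique" component.
    These rules are illustrative assumptions, not the paper's network.
    """
    common = np.minimum(feat_ir, feat_vis)      # information shared by both modalities
    unique_ir = feat_ir - common                # infrared-only response (e.g. thermal target)
    unique_vis = feat_vis - common              # visible-only response (e.g. texture detail)
    # Different fusion rules per part: keep the shared base,
    # and take the stronger of the two unique responses per pixel.
    return common + np.maximum(unique_ir, unique_vis)

ir = np.array([[0.9, 0.2], [0.8, 0.1]])    # hypothetical infrared feature map
vis = np.array([[0.3, 0.7], [0.8, 0.4]])   # hypothetical visible feature map
fused = decompose_and_fuse(ir, vis)
```

With this particular choice of rules, the fused map equals the elementwise maximum of the inputs; the point of the decomposition is that, as in CUFD, the common and unique parts can be assigned different fusion rules independently.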

Keywords:

Review history: Received 30 July 2021, Revised 5 January 2022, Accepted 5 March 2022, Available online 16 March 2022, Version of Record 25 March 2022.

DOI: https://doi.org/10.1016/j.cviu.2022.103407