Feature separation and double causal comparison loss for visible and infrared person re-identification

Authors:

Highlights:

Abstract

Visible-infrared cross-modality person re-identification (VI-ReID) is the task of matching person images across the visible and infrared modalities. Most previous VI-ReID algorithms focused only on learning common representations of the different modalities. In contrast, we extract identity-related features from each modality and filter out identity-independent interference, so that the network learns domain-invariant features as a more effective representation. In this paper, a novel end-to-end feature separation and double causal comparison loss framework (FSDCC) is proposed to address the cross-modality ReID task. We first separate the features with a feature separation module (FSM) to obtain strongly identity-related and weakly identity-related features. A double causal comparison loss then guides model training, effectively reducing the influence of identity-irrelevant information such as occlusion and background and enhancing the expression of identity-relevant features. Simultaneously, we combine identity loss and the weighted-regularization TriHard loss in a progressive joint training manner. Additionally, to enhance the CNN's ability to extract global semantic information and to better model the relationship between two distant pixels in an image, we propose a CNS non-local neural network (CNS non-local), which further improves VI-ReID accuracy. Extensive experiments on two cross-modality datasets demonstrate that the proposed method outperforms the current state of the art by a large margin, achieving rank-1/mAP accuracy of 87.18%/79.10% on the RegDB dataset and 68.79%/65.72% on the SYSU-MM01 dataset.
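The abstract names the CNS non-local module but does not describe its internal design, so the sketch below shows only the generic non-local block (in the style of Wang et al.'s non-local neural networks) that such a module builds on, illustrating how pairwise affinities let a CNN relate two spatially distant pixels. The class name NonLocalBlock, the channel-reduction factor, and the embedded-Gaussian formulation are assumptions for illustration, not the paper's CNS-specific design.

```python
# Minimal sketch of a generic non-local block for 2-D feature maps (PyTorch).
# This is NOT the paper's CNS non-local module; it only illustrates the
# long-range dependency mechanism the abstract refers to.
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block with a residual connection."""

    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inter_channels = max(in_channels // reduction, 1)
        # 1x1 convolutions produce query/key/value embeddings.
        self.theta = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.g = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        # Project aggregated features back to the input channel dimension.
        self.out = nn.Conv2d(inter_channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        # Affinity between every pair of spatial positions, regardless of distance.
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        # Residual connection keeps the original local CNN features.
        return x + self.out(y)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 24, 12)   # hypothetical mid-level ReID feature maps
    block = NonLocalBlock(256)
    print(block(feats).shape)             # torch.Size([2, 256, 24, 12])
```

In a ReID backbone, such a block is typically inserted after a mid-level convolutional stage so that the attention map relates body parts that are far apart in the image; where exactly the CNS non-local module is placed in FSDCC is not stated in the abstract.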

Keywords: Visible-infrared retrieval, Cross-modality, Feature separation, Double causal comparison

Article history: Received 8 August 2021, Revised 28 October 2021, Accepted 22 December 2021, Available online 31 December 2021, Version of Record 11 January 2022.

DOI: https://doi.org/10.1016/j.knosys.2021.108042