Dual-modality hard mining triplet-center loss for visible infrared person re-identification

Authors:

Highlights:

• We propose a dual-modality hard mining triplet-center loss (DTCL) to minimize the intra-class distance and enlarge the inter-class distance synchronously (a hedged sketch of such a loss follows these highlights).

• We design a dual-path part-based feature learning network (DPFLN) framework to extract local features from two different modalities for visible infrared person re-identification (an illustrative sketch of this dual-path design appears after the abstract).

• Experimental results demonstrate that our DPFLN with DTCL performs competitively against state-of-the-art methods while being more efficient.
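
The DTCL highlight above describes the loss objective: pull each feature toward the center of its own identity while pushing it away from the centers of other identities. The snippet below is a minimal sketch of a generic triplet-center-style loss with hard negative-center mining in PyTorch; the margin value, the Euclidean distance, the centers shared across modalities, and the toy class and batch sizes are illustrative assumptions, not the paper's exact DTCL formulation.

```python
# Hedged sketch: a generic triplet-center-style loss with hard negative-center
# mining. This is NOT the paper's exact DTCL; the margin, Euclidean distance,
# and the shared-center layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TripletCenterStyleLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 0.5):
        super().__init__()
        # One learnable center per identity, shared by both modalities (assumption).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # feats: (B, D) features from either modality; labels: (B,) identity ids.
        dist = torch.cdist(feats, self.centers)              # (B, C) distances to all centers
        pos = dist.gather(1, labels.view(-1, 1)).squeeze(1)  # distance to own center
        # Hard mining over centers: pick the closest wrong-identity center.
        mask = F.one_hot(labels, self.centers.size(0)).bool()
        neg = dist.masked_fill(mask, float("inf")).min(dim=1).values
        # Pull samples toward their own center, push the hardest other center away.
        return F.relu(pos - neg + self.margin).mean()


if __name__ == "__main__":
    loss_fn = TripletCenterStyleLoss(num_classes=395, feat_dim=256)
    vis = torch.randn(8, 256)   # toy visible-modality features
    ir = torch.randn(8, 256)    # toy infrared-modality features
    ids = torch.randint(0, 395, (8,))
    # Sharing identity centers across modalities is one way to pull cross-modality
    # features of the same person together (assumption, not the paper's recipe).
    loss = loss_fn(vis, ids) + loss_fn(ir, ids)
    print(loss.item())
```

Applying one shared set of identity centers to features from both branches is a simple way to encourage visible and infrared features of the same person to cluster together; whether DTCL follows exactly this scheme is not stated in this record.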

Abstract:

• We propose a dual-modality hard mining triplet-center loss (DTCL) to minimize the intra-class distance and enlarge the inter-class distance synchronously.

• We design a dual-path part-based feature learning network (DPFLN) framework to extract local features from two different modalities for visible infrared person re-identification.

• Experimental results demonstrate that our DPFLN with DTCL performs competitively against state-of-the-art methods while being more efficient.
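
The abstract above also summarizes the DPFLN: a dual-path network that processes visible and infrared images in separate branches and learns local, part-level features. Below is a minimal sketch of such a dual-path part-based extractor; the ResNet-50 backbone, the six horizontal parts, the 256-dimensional part embeddings, and the 384x128 input resolution are assumptions for illustration, not the paper's exact DPFLN architecture.

```python
# Hedged sketch: a dual-path, part-based feature extractor. Backbone choice,
# number of parts, and which layers are shared are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class DualPathPartFeatures(nn.Module):
    def __init__(self, num_parts: int = 6, feat_dim: int = 256):
        super().__init__()
        # One backbone per modality so each path can specialize (assumption).
        self.visible_path = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        self.infrared_path = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
        # Split the final feature map into horizontal stripes and pool each stripe.
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        # Shared 1x1 conv projects every part descriptor down to feat_dim.
        self.embed = nn.Conv2d(2048, feat_dim, kernel_size=1)

    def _extract(self, x: torch.Tensor, path: nn.Module) -> torch.Tensor:
        fmap = path(x)                              # (B, 2048, H/32, W/32)
        parts = self.part_pool(fmap)                # (B, 2048, num_parts, 1)
        parts = self.embed(parts)                   # (B, feat_dim, num_parts, 1)
        return parts.squeeze(-1).permute(0, 2, 1)   # (B, num_parts, feat_dim)

    def forward(self, visible: torch.Tensor, infrared: torch.Tensor):
        return self._extract(visible, self.visible_path), self._extract(infrared, self.infrared_path)


if __name__ == "__main__":
    net = DualPathPartFeatures()
    v = torch.randn(2, 3, 384, 128)   # toy visible images
    i = torch.randn(2, 3, 384, 128)   # toy infrared images (3-channel replicated, assumption)
    pv, pi = net(v, i)
    print(pv.shape, pi.shape)         # torch.Size([2, 6, 256]) each
```

In this sketch each modality keeps its own backbone so the two paths can adapt to different imaging characteristics, while the part pooling and embedding layer are shared so that the two modalities produce comparable part features; these sharing choices are assumptions made here for illustration.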

Keywords: Deep learning, Visible infrared person re-identification, Dual-modality hard mining triplet-center loss, Local feature

Article history: Received 24 July 2020, Revised 7 November 2020, Accepted 9 January 2021, Available online 12 January 2021, Version of Record 15 January 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106772