Fine-Grained Person Re-identification

Authors: Jiahang Yin, Ancong Wu, Wei-Shi Zheng

Abstract

Person re-identification (re-id) plays a critical role in tracking people through surveillance systems by matching people across non-overlapping camera views at different locations. Most re-id methods depend largely on a person's appearance features and implicitly assume that the appearance information (particularly color) is distinguishable. However, relying only on appearance cues is ineffective for distinguishing people who dress in very similar clothes (especially the same type of clothes, e.g. uniforms). We call this the fine-grained person re-identification (FG re-id) problem. To solve it, rather than relying on clothing color, we propose to exploit two types of local dynamic pose features: a motion-attentive local dynamic pose feature and a joint-specific local dynamic pose feature. The two are complementary and describe identity-specific pose characteristics, which we find to be more distinctive and discriminative than appearance when people look similar. A deep neural network is designed to learn these local dynamic pose features and to jointly quantify motion and global visual cues. Because no suitable benchmark dataset exists for evaluating the FG re-id problem, we also contribute a fine-grained person re-identification (FGPR) dataset containing 358 identities. Extensive evaluations on the FGPR dataset show that our proposed model achieves the best performance compared with related person re-id and fine-grained recognition methods. In addition, we verify that our method remains effective for conventional video-based person re-id.
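To make the abstract's two-branch idea concrete, below is a minimal PyTorch sketch of a network that fuses dynamic pose features with global appearance features. It is an illustration only: the paper's actual architecture, layer sizes, and feature extractors are not given in this abstract, so every module here (`pose_rnn`, `app_cnn`, the GRU choice, all dimensions) is a hypothetical stand-in.

```python
import torch
import torch.nn as nn

class FGReIDSketch(nn.Module):
    """Illustrative two-branch model: pose-dynamics branch + appearance branch.

    All module names and layer sizes are hypothetical assumptions; the
    abstract does not specify the authors' actual architecture.
    """

    def __init__(self, num_joints=17, pose_dim=128, app_dim=256, num_ids=358):
        super().__init__()
        # Pose-dynamics branch: encodes per-joint 2-D coordinate
        # trajectories over a tracklet with a GRU (hypothetical choice).
        self.pose_rnn = nn.GRU(input_size=num_joints * 2,
                               hidden_size=pose_dim, batch_first=True)
        # Global appearance branch: a tiny CNN standing in for a backbone.
        self.app_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, app_dim), nn.ReLU(),
        )
        # Joint use of motion and visual cues via simple concatenation.
        self.classifier = nn.Linear(pose_dim + app_dim, num_ids)

    def forward(self, joints_seq, frames):
        # joints_seq: (B, T, num_joints * 2) joint coordinates per frame.
        # frames:     (B, 3, H, W) one representative frame per tracklet.
        _, h = self.pose_rnn(joints_seq)   # h: (1, B, pose_dim)
        pose_feat = h.squeeze(0)
        app_feat = self.app_cnn(frames)
        fused = torch.cat([pose_feat, app_feat], dim=1)
        return self.classifier(fused)


# Toy usage: 4 tracklets, 16 frames each, 17 joints, 128x64 crops.
model = FGReIDSketch()
logits = model(torch.randn(4, 16, 34), torch.randn(4, 3, 128, 64))
print(logits.shape)  # torch.Size([4, 358])
```

The design point the sketch illustrates is that pose dynamics (motion over time) and global appearance are computed by separate branches and only combined at the feature level, so the pose cue can still discriminate identities when clothing color is uninformative.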

Keywords: Person re-identification, Fine-grained cross-view matching, Visual surveillance

Paper URL: https://doi.org/10.1007/s11263-019-01259-0