SPARE: Self-supervised part erasing for ultra-fine-grained visual categorization

Authors:

Highlights:

• We propose SPARE, a novel framework that segments parts using only image-level category labels and produces discriminative part feature representations for ultra-fine-grained visual categorization.

• A new self-supervised module generates more diversified, semantically meaningful part segments and enhances discriminability by predicting the contextual position of the erased parts (a generic sketch of this erase-and-predict idea follows the highlights).

• Encouraging experimental results demonstrate the effectiveness of SPARE and its potential to advance research on ultra-fine-grained visual categorization.

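The highlight above describes erasing parts and predicting their contextual position as a self-supervised signal. The sketch below illustrates that generic erase-and-predict idea only; it is not the authors' implementation, and the 3x3 grid, the toy backbone, and all names (erase_random_cell, PositionPredictionHead) are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code): erase one region of an image and
# train a small head to predict which grid cell was erased, as an auxiliary
# self-supervised signal. Grid size, backbone, and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def erase_random_cell(images, grid=3):
    """Zero out one random cell of a grid x grid partition per image.

    Returns the erased images and the index (0 .. grid*grid - 1) of the
    erased cell, which serves as the self-supervised target.
    """
    b, _, h, w = images.shape
    ch, cw = h // grid, w // grid
    targets = torch.randint(0, grid * grid, (b,), device=images.device)
    erased = images.clone()
    for i, t in enumerate(targets):
        r, c = divmod(int(t), grid)
        erased[i, :, r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = 0.0
    return erased, targets


class PositionPredictionHead(nn.Module):
    """Predicts which grid cell was erased from pooled backbone features."""

    def __init__(self, feat_dim, grid=3):
        super().__init__()
        self.fc = nn.Linear(feat_dim, grid * grid)

    def forward(self, feats):
        pooled = F.adaptive_avg_pool2d(feats, 1).flatten(1)
        return self.fc(pooled)


if __name__ == "__main__":
    # Toy usage: a small conv stem stands in for a real backbone.
    backbone = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
    head = PositionPredictionHead(feat_dim=32, grid=3)

    images = torch.randn(4, 3, 96, 96)
    erased, targets = erase_random_cell(images, grid=3)
    logits = head(backbone(erased))
    loss = F.cross_entropy(logits, targets)  # auxiliary self-supervised loss
    print(loss.item())
```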

Keywords: Self-supervised part erasing, Ultra-fine-grained visual categorization, Fine-grained visual categorization, Random part erasing, Weakly-supervised part segmentation

Article history: Received 18 May 2021; Revised 23 February 2022; Accepted 3 April 2022; Available online 10 April 2022; Version of Record 15 April 2022.

DOI: https://doi.org/10.1016/j.patcog.2022.108691