Multi-level similarity learning for image-text retrieval

Abstract

Image-text retrieval has been a popular research topic and has attracted growing interest because it bridges the computer vision and natural language processing communities and involves two different modalities. Although many methods have made great progress on this task, it remains challenging because of the difficulty of learning the correspondence between two heterogeneous modalities. In this paper, we propose a multi-level representation learning method for image-text retrieval, which utilizes semantic-level, structural-level, and contextual-level information to improve the quality of visual and textual representations. To exploit semantic-level information, we first extract high-frequency nouns, adjectives, and numerals as semantic labels and adopt a multi-label convolutional neural network framework to encode them. To explore the structural-level information of an image-text pair, we first construct two graphs that encode the visual and textual information of the corresponding modalities, and then apply graph matching with a triplet loss to reduce the cross-modal discrepancy. To further improve the retrieval results, we utilize contextual-level information from both modalities to refine the rank list and enhance retrieval quality. Extensive experiments on Flickr30k and MSCOCO, two commonly used datasets for image-text retrieval, demonstrate the superiority of our proposed method.
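The abstract does not give the exact form of the triplet objective used in the graph-matching stage. Below is a minimal sketch, assuming the standard bidirectional hinge-based triplet ranking loss commonly used in image-text retrieval, applied to a square similarity matrix whose diagonal holds the matched image-text pairs; the function name, margin value, and PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
import torch


def bidirectional_triplet_loss(sim, margin=0.2):
    """Hinge-based triplet ranking loss over an image-text similarity matrix.

    sim: (N, N) tensor; sim[i, j] is the similarity between image i and text j,
         with matched pairs on the diagonal (assumed layout, not from the paper).
    """
    n = sim.size(0)
    pos = sim.diag().view(n, 1)                      # similarity of matched pairs
    # image-to-text direction: every caption j != i acts as a negative
    cost_i2t = (margin + sim - pos).clamp(min=0)
    # text-to-image direction: every image i != j acts as a negative
    cost_t2i = (margin + sim - pos.t()).clamp(min=0)
    # do not penalize the positive pair itself
    mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    cost_i2t = cost_i2t.masked_fill(mask, 0)
    cost_t2i = cost_t2i.masked_fill(mask, 0)
    return cost_i2t.sum() + cost_t2i.sum()


# usage: sim could be the similarity scores produced by the graph-matching stage
sim = torch.randn(32, 32)
loss = bidirectional_triplet_loss(sim)
```

Summing hinge costs in both retrieval directions is one conventional way to pull matched image-text pairs together while pushing mismatched pairs apart by at least the margin.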

Keywords: Cross-modal retrieval, Semantic extraction, Graph matching

Article history: Received 31 July 2020, Revised 14 October 2020, Accepted 29 October 2020, Available online 23 November 2020, Version of Record 23 November 2020.

DOI: https://doi.org/10.1016/j.ipm.2020.102432