Siamese graph convolutional network for content based remote sensing image retrieval

Authors:

Highlights:

Abstract

This paper addresses the problem of content-based image retrieval (CBIR) of very high resolution (VHR) remote sensing (RS) images using a novel Siamese graph convolutional network (SGCN). The GCN model has recently gained popularity for learning representations of irregular-domain data such as graphs. In the same line, we argue for the effectiveness of region adjacency graph (RAG) based representations of VHR RS scenes in terms of localized regions. Such representations capture important scene information which can further aid better image-to-image correspondence. However, standard GCN features generally lack discriminative power for fine-grained classes, and may not be optimal for the task of CBIR in many cases with coherent local characteristics. As a remedy, we propose the SGCN architecture, which assesses the similarity between a pair of graphs and can be trained with the contrastive loss function. Given the RAG representations, the aim is to learn an embedding space that pulls semantically coherent images closer while pushing dissimilar samples far apart. In order to ensure a quick response while performing retrieval with a given similarity measure, the embedding space is kept constrained. We evaluate the proposed embeddings for the task of CBIR on RS data on the popular UC-Merced dataset and the PatternNet dataset, where improved performance is observed.
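The training objective described above — pulling similar pairs together and pushing dissimilar pairs at least a margin apart — is the standard contrastive loss. The sketch below illustrates it on plain NumPy vectors; the function name, the margin value, and the toy embeddings are illustrative assumptions, not taken from the paper, which applies the loss to graph-level embeddings produced by the Siamese GCN branches.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, label, margin=1.0):
    """Contrastive loss for a pair of embeddings (illustrative sketch).

    label = 1 for a semantically similar (positive) pair,
    label = 0 for a dissimilar (negative) pair.
    Positive pairs are pulled together (loss = d^2); negative pairs
    are pushed at least `margin` apart (loss = max(0, margin - d)^2).
    """
    d = np.linalg.norm(emb_a - emb_b)  # Euclidean distance in embedding space
    return label * d ** 2 + (1 - label) * max(0.0, margin - d) ** 2

# Toy usage: two embeddings at distance 0.5
a = np.array([0.0, 0.0])
b = np.array([0.3, 0.4])
loss_pos = contrastive_loss(a, b, label=1)  # penalizes the remaining gap
loss_neg = contrastive_loss(a, b, label=0)  # penalizes being inside the margin
```

At retrieval time, no loss is computed: the query image's embedding is compared to the database embeddings with the same distance measure, which is why keeping the embedding space constrained makes the search fast.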

Keywords:

Article history: Received 3 October 2018, Revised 6 March 2019, Accepted 12 April 2019, Available online 25 April 2019, Version of Record 28 May 2019.

Paper URL: https://doi.org/10.1016/j.cviu.2019.04.004