Task-adaptive Asymmetric Deep Cross-modal Hashing

Authors:

Highlights:

Abstract

Supervised cross-modal hashing aims to embed the semantic correlations of heterogeneous modality data into binary hash codes under the guidance of discriminative semantic labels. It supports efficient large-scale cross-modal retrieval thanks to its fast retrieval speed and low storage cost. However, existing methods handle the different cross-modal retrieval tasks equally, simply learning the same pair of hash functions in a symmetric way. Under such circumstances, the characteristics of the individual retrieval tasks are ignored, which may lead to sub-optimal performance. Motivated by this, we present a Task-adaptive Asymmetric Deep Cross-modal Hashing (TA-ADCMH) method in this paper. It learns task-adaptive hash functions for two sub-retrieval tasks via simultaneous modality representation and asymmetric hash learning. Different from previous cross-modal hashing methods, our learning framework jointly optimizes semantic preservation, from the multi-modal features to the hash codes, and semantic regression, from the query-modality representation to the explicit labels. With our model, the learned hash codes effectively preserve the multi-modal semantic correlations and, at the same time, adaptively capture the query semantics. Moreover, we design an efficient discrete optimization strategy that learns the binary hash codes directly, alleviating the quantization errors introduced by relaxation. Extensive experiments demonstrate the state-of-the-art performance of the proposed TA-ADCMH from various aspects.
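To make the two ingredients of the abstract concrete, the following is a minimal toy sketch of an asymmetric hashing objective with a label-regression term. It is an illustration of the general idea only, not the authors' exact TA-ADCMH formulation: all variable names, shapes, and the trade-off weight `gamma` are assumptions introduced here for exposition.

```python
import numpy as np

# Toy sketch (not the paper's exact model): asymmetric term pairs continuous
# query-modality features with binary database codes; a regression term maps
# query representations onto explicit semantic labels.
rng = np.random.default_rng(0)

n, k, c = 6, 8, 3                              # samples, hash bits, classes
F = rng.normal(size=(n, k))                    # continuous query representations
B = np.sign(rng.normal(size=(n, k)))           # binary database codes in {-1, +1}
S = (rng.random((n, n)) > 0.5).astype(float)   # pairwise semantic similarity
L = np.eye(c)[rng.integers(0, c, size=n)]      # one-hot semantic labels
W = rng.normal(size=(k, c))                    # label-regression projection (assumed)

def toy_objective(F, B, S, L, W, gamma=1.0):
    # Asymmetric semantic preservation: inner products between continuous
    # features and binary codes should approximate the scaled similarities.
    asym = np.sum((F @ B.T - k * S) ** 2)
    # Semantic regression: query representations should predict the labels.
    reg = gamma * np.sum((F @ W - L) ** 2)
    return asym + reg

print(f"toy objective value: {toy_objective(F, B, S, L, W):.2f}")
```

Because the preservation term is asymmetric (continuous features on the query side, discrete codes on the database side), each direction of retrieval can, in principle, be given its own query-side hash function, which is the task-adaptive idea the abstract describes.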

Keywords: Cross-modal similarity retrieval, Task-adaptive, Asymmetric deep hashing learning

Article history: Received 4 July 2020, Revised 9 January 2021, Accepted 2 February 2021, Available online 15 February 2021, Version of Record 1 March 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106851