Semantic consistency generative adversarial network for cross-modality domain adaptation in ultrasound thyroid nodule classification

Authors: Jun Zhao, Xiaosong Zhou, Guohua Shi, Ning Xiao, Kai Song, Juanjuan Zhao, Rui Hao, Keqin Li

Abstract

Deep convolutional networks have been widely used for various medical image processing tasks. However, the performance of existing learning-based networks is still limited by the lack of large training datasets. When a general deep model is directly deployed to a new dataset with heterogeneous features, the effect of domain shift is usually ignored, leading to performance degradation. In this work, we propose a new multimodal domain adaptation method for medical image diagnosis by designing a semantic consistency generative adversarial network (SCGAN). SCGAN performs cross-domain collaborative alignment of ultrasound images and domain knowledge. Specifically, we employ a self-attention mechanism for adversarial learning between the two domains to overcome visual differences across modalities and preserve the domain invariance of the extracted semantic features. In particular, we embed nested metric learning in the semantic information space, thereby enhancing the semantic consistency of cross-modal features. Furthermore, the adversarial learning of our network is guided by a discrepancy loss, which encourages the learning of semantic-level content, and by a regularization term, which enhances generalization. We evaluate our method on a thyroid ultrasound image dataset for benign and malignant diagnosis of nodules. In a comprehensive experimental study, SCGAN reaches an accuracy of 94.30% and an AUC of 97.02% for thyroid nodule classification, significantly outperforming state-of-the-art methods.
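The abstract describes a training objective with four ingredients: an adversarial term over the two domains, a semantic-consistency metric term, a discrepancy loss, and a regularization term. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch of how such a combined objective could be assembled; all function names, weighting coefficients (`lam_sem`, `lam_disc`, `lam_reg`), and the specific choice of each term (binary cross-entropy for the adversarial part, squared Euclidean distance for semantic consistency, mean absolute classifier disagreement for the discrepancy, and L2 for regularization) are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_ce(probs, labels, eps=1e-12):
    """Binary cross-entropy averaged over a batch."""
    return -np.mean(labels * np.log(probs + eps)
                    + (1.0 - labels) * np.log(1.0 - probs + eps))

def combined_objective(d_img, d_know, z_img, z_know, p1, p2, params,
                       lam_sem=0.5, lam_disc=0.5, lam_reg=1e-4):
    """Hypothetical SCGAN-style combined objective (illustrative only).

    d_img, d_know : discriminator logits for image / knowledge features
    z_img, z_know : paired cross-modal embeddings, shape (batch, dim)
    p1, p2        : outputs of two classifier heads, shape (batch, classes)
    params        : flat vector of network weights, for L2 regularization
    """
    # Adversarial term: the discriminator tries to tell modalities apart;
    # the feature extractor is trained against it for domain invariance.
    adv = (binary_ce(sigmoid(d_img), np.ones(len(d_img)))
           + binary_ce(sigmoid(d_know), np.zeros(len(d_know))))
    # Semantic-consistency metric term: pull paired embeddings together.
    sem = np.mean(np.sum((z_img - z_know) ** 2, axis=1))
    # Discrepancy term: disagreement between the two classifier heads,
    # encouraging semantic-level rather than style-level content.
    disc = np.mean(np.abs(p1 - p2))
    # L2 regularization term for generalization.
    reg = np.sum(params ** 2)
    return adv + lam_sem * sem + lam_disc * disc + lam_reg * reg
```

Under this sketch, perfectly aligned cross-modal embeddings (`z_img == z_know`) zero out the semantic term and strictly lower the objective, which is the behavior the semantic-consistency component is meant to reward.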

Keywords: Cross-modality domain adaptation, Semantic consistency, Domain knowledge, Self-attention mechanism, Thyroid nodule classification

DOI: https://doi.org/10.1007/s10489-021-03025-7