A spatial–spectral semisupervised deep learning framework using siamese networks and angular loss

Abstract:

Deep learning has gained popularity in recent years for feature extraction, object identification, object tracking, change detection, image classification, spatio-temporal data analysis, and hyperspectral imaging. Most supervised deep learning tasks require a large number of labeled samples, without which the model tends to overfit and does not generalize well to the test data. Semi-supervised learning is therefore very beneficial for hyperspectral images, which contain abundant unlabeled samples compared to labeled ones. Furthermore, for datasets whose samples are related in all three dimensions, such as videos, three-dimensional biological images, and hyperspectral images, the use of spatial–spectral/spatial–temporal deep learning strategies, which exploit the relationship between pixels in all three dimensions, has also risen in the past few years. Moreover, to date, deep feature extraction and classification have been performed using Euclidean distance based metrics; little foray has been made into angular feature extraction and classification, which are known to work better when samples are affected by resolution or illumination differences. We propose a novel spatial–spectral semisupervised deep learning approach based on angular distances, obtained by projecting the deep features onto the surface of an l2-normalized unit hypersphere.
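The core idea of the abstract — comparing features by angle after l2-normalization, so that magnitude changes caused by illumination or resolution differences do not affect the distance — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names are ours:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Project feature vectors onto the surface of the unit hypersphere.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def angular_distance(a, b):
    # Angle (in radians) between features; on the unit hypersphere the
    # cosine similarity reduces to a dot product.
    a_n, b_n = l2_normalize(a), l2_normalize(b)
    cos_sim = np.clip(np.sum(a_n * b_n, axis=-1), -1.0, 1.0)
    return np.arccos(cos_sim)

# A global rescaling (e.g. a brightness change) alters the Euclidean
# distance between features but leaves their angle unchanged.
f1 = np.array([1.0, 2.0, 3.0])
f2 = 5.0 * f1                      # same direction, different magnitude
print(angular_distance(f1, f2))    # → 0.0
print(np.linalg.norm(f1 - f2))     # large Euclidean distance
```

This scale invariance is why angular metrics are attractive when samples of the same class differ mainly in illumination or resolution.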

Review timeline: Received 18 September 2018, Revised 5 December 2019, Accepted 18 February 2020, Available online 4 March 2020, Version of Record 11 March 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.102943