Fully used reliable data and attention consistency for semi-supervised learning

Abstract:

Large labeled datasets are costly in human labor, so semi-supervised learning leverages large amounts of unlabeled data to improve training when labels are limited. Many semi-supervised methods apply diverse data augmentations so that the model learns classification rules that are invariant to these perturbations, but adapting to the perturbations requires considerable training time. Another frequently discussed issue is reducing the noise introduced by training on unlabeled data, so that the influence of erroneous predictions is limited. A common approach defines samples whose predicted probability exceeds a threshold as confident and trains only on those high-confidence unlabeled samples, shielding the model from the error introduced by wrong predictions on unlabeled data. However, this also means that much of the unlabeled data cannot be used effectively. This study therefore proposes a semi-supervised framework comprising Attention Consistency (AC) and One Supervised (OS) algorithms, which improves the efficiency and performance of model learning by guiding the model to attend to class-discriminative features and by judging whether the model can still be trained effectively on the existing reliable data. In this way, the model makes full use of the unlabeled data for training. Experimental results and comparisons show that results similar to those of other methods can be reached with a shorter training process. The paper also analyzes the distribution of learned features and proposes a new measurement for characterizing that distribution.
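For context on the thresholding scheme the abstract critiques, the following is a minimal PyTorch sketch of confidence-based pseudo-label filtering; the function name, tensor shapes, and the 0.95 threshold are illustrative assumptions, not the paper's AC/OS algorithms.

    import torch
    import torch.nn.functional as F

    def select_confident(logits: torch.Tensor, threshold: float = 0.95):
        # Hypothetical helper: keep only unlabeled samples whose top-1
        # predicted probability exceeds `threshold`. This mirrors the
        # generic high-confidence filtering described in the abstract,
        # not the authors' proposed method.
        probs = F.softmax(logits, dim=1)              # class probabilities
        confidence, pseudo_labels = probs.max(dim=1)  # top-1 prob and class
        mask = confidence > threshold                 # "confident" samples
        return pseudo_labels[mask], mask

    # Usage: logits from a model forward pass on an unlabeled batch.
    logits = torch.randn(8, 10)  # dummy batch of 8 samples, 10 classes
    labels, mask = select_confident(logits)
    print(f"kept {mask.sum().item()} of {mask.numel()} unlabeled samples")

Samples failing the mask are discarded from the pseudo-labeled loss, which is precisely why, as the abstract notes, much of the unlabeled data goes unused under such a scheme.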

Keywords: Deep learning, Semi-supervised learning, Attention consistency, Reliable data

Article history: Received 5 January 2022, Revised 24 March 2022, Accepted 14 April 2022, Available online 25 April 2022, Version of Record 17 May 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108837