Multi-instance semantic similarity transferring for knowledge distillation

Abstract:

Knowledge distillation is a popular paradigm for learning portable neural networks by transferring the knowledge of a large model into a smaller one. Most existing approaches enhance the student model by utilizing the instance-level inter-category similarity information provided by the teacher model. However, these works ignore the similarity correlation between different instances, which plays an important role in confidence prediction. To tackle this issue, we propose a novel method in this paper, called multi-instance semantic similarity transferring for knowledge distillation (STKD), which aims to fully utilize the similarities between the categories of multiple samples. Furthermore, we propose to better capture the similarity correlation between different instances with the mixup technique, which creates virtual samples by weighted linear interpolation. Notably, our distillation loss fully exploits the similarities among the incorrect classes via the mixed labels. The proposed approach improves the performance of the student model, as a virtual sample created from multiple images is encouraged to produce similar probability distributions in the teacher and student networks. Experiments and ablation studies on several public classification datasets, including CIFAR-10, CIFAR-100, CINIC-10, and Tiny-ImageNet, verify that this lightweight method effectively boosts the performance of the compact student model. STKD substantially outperforms vanilla knowledge distillation and achieves accuracy superior to that of state-of-the-art knowledge distillation methods.
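To make the idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: virtual samples are built by mixup (weighted linear interpolation of instance pairs), the teacher and student are matched on these mixed inputs via a temperature-scaled KL term, and the supervised term uses the mixed labels. The function names (mixup, stkd_loss) and all hyperparameters (the Beta concentration alpha, temperature T, loss weight beta) are illustrative assumptions for this sketch, not values or an implementation specified by the paper.

```python
# Hedged sketch of mixup-based multi-instance distillation, as read from the
# abstract. Names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn.functional as F


def mixup(x, y, alpha=0.2):
    """Create virtual samples by weighted linear interpolation of a batch
    with a random permutation of itself (standard mixup)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    return x_mix, y, y[perm], lam


def stkd_loss(student, teacher, x, y, T=4.0, beta=0.9, alpha=0.2):
    """Distillation loss on a mixed (multi-instance) virtual sample.

    The KL term transfers the teacher's similarity structure over incorrect
    classes; the cross-entropy term supervises with the mixed labels.
    """
    x_mix, y_a, y_b, lam = mixup(x, y, alpha)
    s_logits = student(x_mix)
    with torch.no_grad():
        t_logits = teacher(x_mix)
    # Match teacher and student soft predictions on the virtual sample.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Supervised term with the interpolated (mixed) labels.
    ce = (lam * F.cross_entropy(s_logits, y_a)
          + (1.0 - lam) * F.cross_entropy(s_logits, y_b))
    return beta * kd + (1.0 - beta) * ce
```

In this reading, the gain comes from the teacher's probability distribution over a mixed sample encoding similarities across the categories of multiple instances at once, which a per-instance soft target cannot capture.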

Keywords: Deep neural networks, Image classification, Model compression, Knowledge distillation

Article history: Received 24 April 2022, Revised 13 August 2022, Accepted 29 August 2022, Available online 3 September 2022, Version of Record 16 September 2022.

Paper URL: https://doi.org/10.1016/j.knosys.2022.109832