SNIP-FSL: Finding task-specific lottery jackpots for few-shot learning

Abstract:

Foresight pruning is an effective approach for deriving a compact sub-network in resource-constrained scenarios. However, most existing methods neglect the fact that scarce resources usually imply insufficient training data, which considerably degrades pruning performance. In this paper, we propose a novel single-shot pruning method, named SNIP-FSL, for few-shot tasks, achieving a trade-off between resource constraints and data scale. Treating the pretrained weights as a special initialization state, we argue that task-sensitive, high-performance sparse subnetworks exist in the few-shot learning process, which we term "task-specific lottery jackpots." By designing an effective parameter-significance criterion, we obtain these jackpots without additional time-consuming searches or iterations. Furthermore, SNIP-FSL is designed to identify task-specific lottery jackpots based on historical experience, so it can be easily integrated into most transfer-based and meta-based methods. For example, with vanilla ProtoNet we obtain a winning submodel that retains only 30% of the parameters without compromising accuracy. Extensive experimental results show that SNIP-FSL outperforms several state-of-the-art foresight pruning methods under both transfer- and meta-learning paradigms.
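To make the single-shot idea concrete, the sketch below shows a SNIP-style foresight pruning step applied to pretrained weights: score every weight from one forward/backward pass on few-shot data, then keep the top fraction globally, with no iterative search or retraining loop. The saliency `|w * dL/dw|` (SNIP's connection sensitivity) and the `keep_ratio=0.3` matching the abstract's 30% figure are assumptions for illustration; the paper's exact significance criterion may differ.

```python
# Minimal sketch of SNIP-style single-shot pruning on a pretrained model.
# Assumed criterion: saliency = |w * dL/dw|, computed from one pass over
# a small few-shot support set; not necessarily the paper's exact rule.
import torch
import torch.nn as nn


def snip_style_masks(model: nn.Module, loss_fn, inputs, targets,
                     keep_ratio: float = 0.3):
    """Return one binary pruning mask per weight tensor, derived from a
    single forward/backward pass (no search, no iterative pruning)."""
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    with torch.no_grad():
        # Saliency of each pretrained weight (assumed: |w * grad|).
        saliencies = {
            name: (p * p.grad).abs()
            for name, p in model.named_parameters()
            if p.grad is not None and p.dim() > 1  # weight matrices only
        }
        # Global threshold keeping the top `keep_ratio` fraction of weights.
        all_scores = torch.cat([s.flatten() for s in saliencies.values()])
        k = max(1, int(keep_ratio * all_scores.numel()))
        threshold = torch.topk(all_scores, k).values.min()
        return {name: (s >= threshold).float()
                for name, s in saliencies.items()}
```

Applying the resulting masks elementwise to the corresponding weight tensors yields the sparse subnetwork ("jackpot") that is then used for the downstream few-shot task.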

Keywords: Network pruning, Meta learning, Few-shot learning, Fine-tuning, Transferable parameters

Article history: Received 19 October 2021, Revised 7 February 2022, Accepted 8 February 2022, Available online 16 February 2022, Version of Record 29 April 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108427