Improved prior selection using semantics in maximum a posteriori for few-shot learning

Authors:

Highlights:

Abstract

Few-shot learning aims to recognize novel concepts from only a few labeled samples. Recently, significant progress has been made in addressing the overfitting caused by data scarcity, especially by modeling the distribution of novel categories given a single data point. However, these methods often rely heavily on prior knowledge from the base set, which is generally hard to define, and whose selection can easily bias learning. A popular pipeline is to pretrain a feature extractor on the base set and generate statistics from it as prior information. Yet a pretrained feature extractor cannot produce accurate representations for categories it has never seen, and with only 1 or 5 support images per novel category it is hard to acquire accurate priors, especially when the support samples are far from the class center. To address these issues, we base our network on maximum a posteriori (MAP) estimation and propose a strategy for better prior selection from the base set. In particular, we introduce semantic information, which is learned from unsupervised text corpora and is easily available, to alleviate the bias caused by unrepresentative support samples. Our intuition is that when the support provided by visual information is biased, semantics can supply strong prior knowledge to assist learning. Experimental results on four few-shot benchmarks show that our method outperforms state-of-the-art methods by a large margin, improving on the best previous result in each dataset by about 2.08%–12.68% on both 1- and 5-shot tasks.
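To make the MAP idea in the abstract concrete, the following is a minimal sketch (not the paper's actual method) of how a prior mean can regularize a class-mean estimate from few support samples. It assumes a Gaussian likelihood with known isotropic noise variance and a Gaussian prior on the class mean; the function name `map_class_mean` and the role of `prior_mean` as a semantics-derived prior are illustrative assumptions.

```python
import numpy as np

def map_class_mean(support, prior_mean, prior_var=1.0, noise_var=1.0):
    """MAP estimate of a class mean from few support features.

    Assumes a Gaussian likelihood N(mu, noise_var * I) over support
    features and a Gaussian prior N(prior_mean, prior_var * I) on mu
    (e.g. a prior derived from semantic embeddings). With this
    conjugate pair, the posterior mean is a precision-weighted
    combination of the prior mean and the sample mean.
    """
    support = np.asarray(support, dtype=float)
    n = support.shape[0]
    sample_mean = support.mean(axis=0)
    # Weight on the prior shrinks as more support samples arrive.
    w_prior = noise_var / (noise_var + n * prior_var)
    return w_prior * np.asarray(prior_mean, dtype=float) + (1.0 - w_prior) * sample_mean

# A single biased support sample is pulled toward the prior mean;
# with 5 support samples the estimate trusts the data more.
one_shot = map_class_mean([[4.0, 0.0]], prior_mean=[0.0, 0.0])
five_shot = map_class_mean([[4.0, 0.0]] * 5, prior_mean=[0.0, 0.0])
```

This captures the intuition in the abstract: with 1 support image far from the true class center, the semantic prior contributes heavily; as the shot count grows, the visual evidence dominates.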

Keywords: Few-shot learning, Maximum a posteriori, Semantics, Prior selection

Article history: Received 26 July 2021, Revised 27 September 2021, Accepted 2 November 2021, Available online 15 November 2021, Version of Record 10 January 2022.

DOI: https://doi.org/10.1016/j.knosys.2021.107688