Secure collaborative few-shot learning

Authors:

Highlights:

Abstract

Few-shot learning aims to train a model that can effectively recognize novel classes from extremely limited training examples. Few-shot learning via meta-learning improves performance on novel tasks by leveraging previously acquired knowledge as a prior. However, most existing few-shot learning methods involve parameter transfer, which usually requires sharing models trained on task-specific examples, posing a potential threat to the privacy of data owners. To tackle this, we design a novel secure collaborative few-shot learning framework. More specifically, we incorporate differential privacy into few-shot learning by adding calibrated Gaussian noise to the optimization process, preventing sensitive information in the training set from being leaked. To prevent potential privacy disclosure to other participants and the central server, additively homomorphic encryption is integrated when computing the global loss function and communicating with the central server. Furthermore, we implement our framework on classical few-shot learning methods such as MAML and Reptile, and extensively evaluate its performance on the Omniglot, Mini-ImageNet and Fewshot-CIFAR100 datasets. The experimental results demonstrate the effectiveness of our framework in terms of both utility and privacy.
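The two privacy mechanisms the abstract describes can be sketched as below. This is an illustrative sketch, not the paper's implementation: all function and parameter names are invented, the noise calibration follows the standard DP-SGD recipe (clip, then add Gaussian noise scaled to the clipping norm), and plain additive masking stands in for the paper's additively homomorphic encryption to show why the server recovers only the aggregate gradient.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient and add calibrated Gaussian noise (DP-SGD style).

    Clipping bounds the sensitivity of the update; the noise standard
    deviation is scaled to that bound. Parameter values are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = grad / max(1.0, np.linalg.norm(grad) / clip_norm)
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, grad.shape)

def masked_shares(grads, rng=None):
    """Toy stand-in for additively homomorphic aggregation.

    Each participant submits its gradient plus a random mask; the masks
    cancel in the sum, so the central server learns only the aggregate
    gradient, never any individual participant's gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    masks = [rng.normal(size=g.shape) for g in grads]
    total_mask = sum(masks)
    return [g + m - total_mask / len(grads) for g, m in zip(grads, masks)]

# A Reptile-style outer update with a privatized, securely aggregated gradient
local_grads = [np.array([3.0, 0.0, 4.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])]
private = [privatize_gradient(g) for g in local_grads]
theta = np.zeros(4)
theta -= 0.1 * sum(masked_shares(private)) / len(private)
```

In the paper, the second step is realized with additively homomorphic encryption, which likewise lets the server add ciphertexts without decrypting any single contribution; the masking above only mimics that additive property.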

Keywords: Few-shot learning, Meta-learning, Differential privacy, Additively homomorphic encryption

Article history: Received 17 January 2020, Revised 8 May 2020, Accepted 16 June 2020, Available online 17 June 2020, Version of Record 22 June 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.106157