Defending local poisoning attacks in multi-party learning via immune system

Authors:

Highlights:

Abstract

Multi-party learning provides effective solutions for building a jointly trained model from scattered data while satisfying user privacy, data security, and government regulations. Nevertheless, since knowledge sharing among multiple participants is conducted by uploading model parameters rather than individual private data, local poisoning attacks in the multi-party setting can be more covert and destructive. In this paper, we propose a novel immune system deployed in the multi-party learning scenario (MPIS) to defend against local poisoning attacks. We investigate the commonality between biological immunity and the defense against poisoning attacks, and cast the secure defense framework as an immune system. The approach realizes antigen recognition, immune response, and immunological memory through an adversarial pipeline; it is not limited by the number of compromised clients or the duration of their involvement, and it adaptively determines the aggregation weight of each local model. Extensive experimental results on image and text datasets with different neural networks demonstrate the superiority of the MPIS framework in both model performance and robustness against poisoning attacks.
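
To illustrate the general idea of adaptively weighting local models during aggregation, the sketch below down-weights updates that deviate from a reference direction. It is a minimal, hypothetical example: the cosine-similarity scoring rule, the `reference` estimate, and the client setup are assumptions for demonstration only and do not reproduce the paper's MPIS pipeline.

```python
# Hypothetical sketch of score-weighted aggregation of local model updates.
# Not the paper's MPIS algorithm; the scoring rule and reference are assumptions.
import numpy as np

def aggregate(local_updates, reference):
    """Weight each client's update by its similarity to a reference update,
    so anomalous (potentially poisoned) updates contribute less."""
    scores = []
    for update in local_updates:
        # Cosine similarity to the reference direction (assumed scoring rule).
        sim = np.dot(update, reference) / (
            np.linalg.norm(update) * np.linalg.norm(reference) + 1e-12)
        scores.append(max(sim, 0.0))              # clip negative similarities to zero
    weights = np.array(scores)
    weights = weights / (weights.sum() + 1e-12)   # normalize to a convex combination
    return np.sum([w * u for w, u in zip(weights, local_updates)], axis=0)

# Example: three benign clients and one crude sign-flipping attacker.
rng = np.random.default_rng(0)
benign = [rng.normal(0.1, 0.01, size=5) for _ in range(3)]
poisoned = -10 * benign[0]                        # adversarial update pointing the opposite way
updates = benign + [poisoned]
reference = np.mean(benign, axis=0)               # e.g., a trusted or median estimate
print(aggregate(updates, reference))
```

In this toy setting the poisoned update receives a near-zero weight, so the aggregated model is dominated by the benign clients; the paper's framework instead learns such weights through its immune-system-inspired adversarial pipeline.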

Keywords: Multi-party learning, Poisoning attacks, Immune system, Secure artificial intelligence

Article history: Received 27 September 2021, Revised 15 November 2021, Accepted 1 December 2021, Available online 11 December 2021, Version of Record 23 December 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107850