EGNN: Constructing explainable graph neural networks via knowledge distillation

Authors:

Highlights:

• By jointly optimizing two objectives, a knowledge-distillation loss against the pretrained teacher model and a cross-entropy loss, the proposed EGNN model achieves superior performance (a minimal sketch of such a joint loss follows this list).

• A transparent and interpretable neighbor-selection strategy is designed: neighbor nodes are selected layer by layer (in a layer-sliced manner), so each neighbor's contribution to the user's representation is traceable (see the selection sketch after this list).

• Four state-of-the-art GNN models are selected as teacher networks. Experimental results on three real-world datasets show the effectiveness of the proposed framework.
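
The paper supplies no code in this listing, but the first highlight describes a standard two-term objective. Below is a minimal PyTorch sketch of one common way to combine cross-entropy supervision with a Hinton-style distillation term; the function name, `alpha`, and `temperature` are illustrative assumptions, not EGNN's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_distillation_loss(student_logits, teacher_logits, labels,
                            alpha=0.5, temperature=2.0):
    """Hypothetical joint objective: cross-entropy on ground-truth labels
    plus KL distillation against the pretrained teacher's softened logits.
    alpha and temperature are illustrative hyperparameters."""
    # Hard-label supervision term.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label distillation term; the T^2 factor keeps gradient
    # magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd
```

The second highlight's layer-sliced neighbor selection is described only at a high level; the sketch below shows one plausible reading, where each propagation layer keeps the top-k scored neighbors of every node and records their normalized scores as traceable contribution weights. All tensor names and the top-k rule are assumptions for illustration.

```python
import torch

def select_neighbors_per_layer(scores, adj_mask, k=5):
    """Hypothetical per-layer neighbor selection. `scores` is an [N, N]
    relevance matrix and `adj_mask` a boolean [N, N] adjacency mask."""
    # Exclude non-neighbors, then keep the k highest-scoring neighbors.
    masked = scores.masked_fill(~adj_mask, float("-inf"))
    topk_scores, topk_idx = masked.topk(k, dim=-1)
    # Normalized scores double as traceable contribution weights;
    # storing them per layer yields an audit trail of the aggregation.
    weights = torch.softmax(topk_scores, dim=-1)
    return topk_idx, weights
```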

Keywords: Graph neural network, Interpretability, Aggregation method, Knowledge distillation

Article history: Received 26 May 2021, Revised 27 January 2022, Accepted 28 January 2022, Available online 4 February 2022, Version of Record 15 February 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108345