Structured self-attention architecture for graph-level representation learning

Authors:

Highlights:

• We develop a Structured Self-attention Architecture for graph-level representation. Compared with previous GNN variants, the architecture proposed in this paper can focus more effectively on the influential parts of the input graph.

• The proposed architecture’s readout can be incorporated into any existing node-level GNN and provides effective features for graph-level representation. Compared with pooling readouts, the proposed architecture shows superior performance (an illustrative sketch of such a readout follows these highlights).

• Extensive experiments on two types of graph datasets illustrate the effectiveness of our proposed architecture. Combining our architecture’s readout with popular graph convolutional networks validates the feasibility of structured self-attention.
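The paper itself does not list its equations here, so the following is only a minimal sketch of what a structured self-attention readout over node embeddings could look like: a small PyTorch module that pools the output of any node-level GNN into a fixed-size graph representation. The class name `SelfAttentionReadout` and all dimensions (`hidden_dim`, `attn_dim`, `num_heads`) are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SelfAttentionReadout(nn.Module):
    """Hypothetical structured self-attention readout: pools node embeddings
    produced by any node-level GNN into a fixed-size graph embedding.
    Dimensions and head count are illustrative assumptions, not the paper's."""

    def __init__(self, hidden_dim: int, attn_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, attn_dim, bias=False)   # projects node features
        self.w2 = nn.Linear(attn_dim, num_heads, bias=False)    # one attention score per head

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # node_embeddings: (num_nodes, hidden_dim), output of an upstream GNN
        scores = self.w2(torch.tanh(self.w1(node_embeddings)))  # (num_nodes, num_heads)
        attn = torch.softmax(scores, dim=0)                     # attention over nodes, per head
        graph_embedding = attn.t() @ node_embeddings            # (num_heads, hidden_dim)
        return graph_embedding.flatten()                        # fixed-size graph representation

# Example: pool 10 node embeddings of width 32 into one graph-level vector.
readout = SelfAttentionReadout(hidden_dim=32)
h = torch.randn(10, 32)   # stand-in for node-level GNN output
g = readout(h)            # shape: (4 * 32,) = (128,)
```

Because the readout consumes only a matrix of node embeddings, it can in principle be attached after any node-level GNN (GCN, GAT, GraphSAGE, etc.) in place of mean or max pooling, which is the integration the highlights describe.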


Keywords: Neural self-attention mechanism, Graph neural networks, Graph classification

Article history: Received 17 May 2019, Revised 30 August 2019, Accepted 15 October 2019, Available online 2 November 2019, Version of Record 9 November 2019.

DOI: https://doi.org/10.1016/j.patcog.2019.107084