Multi-view graph convolutional networks with attention mechanism

Authors:

Abstract

Recent advances in graph convolutional networks (GCNs), which mainly focus on how to exploit information from different hops of neighbors in an efficient way, have brought substantial improvement to many graph data modeling tasks. Most existing GCN-based models, however, are built on a fixed adjacency matrix, i.e., a single-view topology of the underlying graph. This inherently limits the expressive power of the developed models, especially since raw graphs are often noisy or even incomplete due to inevitably error-prone data measurement or collection. In this paper, we propose a novel framework, termed Multi-View Graph Convolutional Networks with Attention Mechanism (MAGCN), which incorporates multiple views of the topology and an attention-based feature aggregation strategy into the computation of graph convolution. As an advanced variant of GCNs, MAGCN is fed with multiple “trustable” topologies that either already exist for a given task or are empirically generated by classical graph construction methods, giving it good potential to produce better learned representations for downstream tasks. Furthermore, we present a theoretical analysis of the expressive power and flexibility of MAGCN, which provides a general explanation of why multi-view methods can potentially outperform those relying on a single view. Our experimental study demonstrates state-of-the-art accuracies of MAGCN on the Cora, Citeseer, and Pubmed datasets. A robustness analysis is also undertaken to show the advantage of MAGCN in handling uncertainty in node classification tasks.
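To make the idea of multi-view graph convolution with attention-based aggregation concrete, below is a minimal PyTorch sketch, not the authors' released implementation. It assumes the per-view normalized adjacency matrices are precomputed and uses one linear transform per view plus a learned attention vector that weights the per-view node embeddings; all names and design choices here are illustrative assumptions.

```python
# Illustrative sketch only: one GCN-style propagation per topology view,
# followed by attention-weighted aggregation of the per-view embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_views):
        super().__init__()
        # One linear transform per view (a hypothetical design choice).
        self.view_weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_views)]
        )
        # Attention vector that scores each view's embedding of a node.
        self.attn = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x, adjs):
        # x: (N, in_dim) node features.
        # adjs: list of (N, N) normalized adjacency matrices, one per
        #       "trustable" view of the graph topology.
        view_embs = [F.relu(adj @ w(x)) for adj, w in zip(adjs, self.view_weights)]
        h = torch.stack(view_embs, dim=1)          # (N, V, out_dim)
        scores = self.attn(h)                      # (N, V, 1)
        alpha = torch.softmax(scores, dim=1)       # attention over the V views
        return (alpha * h).sum(dim=1)              # (N, out_dim) aggregated embedding
```

Stacking such layers and ending with a softmax classifier would give a semi-supervised node classifier in the spirit described by the abstract; the extra views (e.g., kNN graphs built from node features) enter only through the `adjs` list.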

Keywords: Graph neural networks, Multi-view learning, Attention mechanism, Semi-supervised learning

Article history: Received 5 July 2020, Revised 5 August 2021, Accepted 12 March 2022, Available online 18 March 2022, Version of Record 23 March 2022.

DOI: https://doi.org/10.1016/j.artint.2022.103708