Siamese Pre-Trained Transformer Encoder for Knowledge Base Completion

Authors: Mengyao Li, Bo Wang, Jing Jiang

Abstract

In this paper, we leverage a Siamese textual encoder to tackle the knowledge base completion problem efficiently and effectively. Traditional graph embedding-based methods learn embeddings directly from a knowledge base’s structure, but they are inherently vulnerable to the graph’s sparsity and incompleteness. In contrast, previous textual encoding-based methods capture such structured knowledge from a semantic perspective, employing a deep neural textual encoder to model graph triples in semantic space, but they fail to balance contextual features against model efficiency. We therefore propose a Siamese textual encoder that operates on each graph triple from the knowledge base: the contextual features between a head/tail entity and a relation are captured to produce relation-aware entity embeddings, while the Siamese structure avoids combinatorial explosion during inference. In experiments, the proposed method achieves state-of-the-art or comparable performance on several link prediction datasets. Further analyses demonstrate that the proposed method is much more efficient than its baseline while producing similar evaluation results.
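To make the architecture described above concrete, the sketch below shows a minimal Siamese scoring setup, not the authors' implementation: the backbone model name ("bert-base-uncased"), the mean pooling, the "[SEP]"-joined input format, the example entities, and the dot-product score are all illustrative assumptions. The key point it demonstrates is that a shared encoder lets candidate tail embeddings be pre-computed once, instead of re-encoding every (head, relation, tail) concatenation.

```python
# A minimal sketch (not the paper's released code) of a Siamese
# pre-trained transformer encoder scoring a knowledge-base triple.
# Assumptions: "bert-base-uncased" backbone, mean pooling, and a
# dot-product score are illustrative choices, not taken from the paper.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # shared (Siamese) weights

def encode(texts):
    """Encode a list of strings with the shared encoder, mean-pooled."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state            # (B, L, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, L, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)

with torch.no_grad():
    # One branch encodes the head entity together with the relation, so the
    # query embedding is relation-aware; the other branch encodes candidate
    # tails on their own. Because the encoder is shared, tail embeddings can
    # be pre-computed once, avoiding combinatorial explosion at inference.
    query = encode(["Barack Obama [SEP] place of birth"])          # (1, H)
    tails = encode(["Honolulu", "Chicago", "New York City"])       # (3, H)
    scores = query @ tails.T                                        # (1, 3)
    print(scores.argmax(dim=1))  # index of the highest-scoring tail
```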

Keywords: Knowledge base completion, Pre-trained transformer encoder, Siamese network

Paper link: https://doi.org/10.1007/s11063-021-10586-8