Enhancing pretrained language models with structured commonsense knowledge for textual inference

Authors:

Highlights:

• Our approach effectively utilizes sparse knowledge bases to enhance language models.

• We devise a novel generative strategy to exploit sparse knowledge bases.

• Our approach can generate the missing structural information for arbitrary nodes.

• Our approach consistently improves the performance of PLMs on several textual inference tasks (see the sketch below).
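The highlights name a knowledge-base-enhanced PLM pipeline for textual inference without detailing it. Below is a minimal, generic sketch of one common way to inject commonsense triples into a PLM's input for natural language inference; the model choice, the hard-coded ConceptNet-style triple, and the input format are illustrative assumptions only, not the paper's actual mix strategy or graph neural network component.

```python
# Generic illustration: append retrieved knowledge-base triples to the premise
# before encoding premise/hypothesis with a pretrained language model.
# This is NOT the paper's method, only a sketch of KB-enhanced inference.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3  # entailment / neutral / contradiction
)

premise = "A man is playing a guitar."
hypothesis = "A person is making music."

# Assumed: triples relevant to the premise, retrieved from a commonsense KB
# such as ConceptNet; hard-coded here for the sake of a self-contained example.
triples = [("guitar", "UsedFor", "making music")]
kb_text = " ".join(f"{h} {r} {t}." for h, r, t in triples)

# Encode the KB-augmented premise together with the hypothesis.
inputs = tokenizer(premise + " " + kb_text, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (untrained head: random)
```

Verbalizing triples into text, as above, is only the simplest integration point; the paper's keywords (mix strategy, graph neural network) suggest a structured encoding of the KB graph instead.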

Keywords: Textual inference, Pretrained language model, Knowledge base, Mix strategy, Graph neural network

Article history: Received 21 January 2022; Revised 14 July 2022; Accepted 16 July 2022; Available online 2 August 2022; Version of Record 27 August 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109488