Distinguishing between fake news and satire with transformers

Authors:

Highlights:

• Transformers excel at fake news vs. satire (FvS) classification.

• Domain language adaptation via pre-training is crucial for FvS performance.

• Even pre-training neural attention models on tiny datasets can yield improvements (see the sketch after this list).

• Inserting multiple information-aggregator tokens throughout the text provides finer granularity that benefits FvS.

• Performance is retained after dividing FvS into propaganda, clickbait, hoax, and satire.
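
The highlights describe a two-stage recipe: first adapt a pretrained transformer's language model to the fake-news/satire domain, then fine-tune it as a classifier. The sketch below illustrates that recipe with DistilBERT and the Hugging Face transformers library; the checkpoint names, placeholder data, and hyperparameters are assumptions for illustration, not the paper's configuration, and the multiple-aggregator-token variant is not reproduced here.

import torch
from torch.utils.data import Dataset
from transformers import (
    AutoTokenizer, AutoModelForMaskedLM, AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

class TextDataset(Dataset):
    # Tokenizes raw article strings; labels are optional (absent for MLM).
    def __init__(self, texts, labels=None, max_length=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_length)
        self.labels = labels
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        if self.labels is not None:
            item["labels"] = torch.tensor(self.labels[i])
        return item

# Stage 1: domain-adaptive pre-training with the masked-LM objective on
# unlabeled in-domain articles (placeholder data shown).
unlabeled_articles = ["example fake-news article ...", "example satire ..."]
mlm_trainer = Trainer(
    model=AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased"),
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=TextDataset(unlabeled_articles),
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("mlm-adapted")

# Stage 2: fine-tune the domain-adapted encoder as a binary FvS classifier
# (label 0 = fake news, 1 = satire).
train_texts = ["labeled article ..."]
train_labels = [0]
clf_trainer = Trainer(
    model=AutoModelForSequenceClassification.from_pretrained(
        "mlm-adapted", num_labels=2),
    args=TrainingArguments(output_dir="fvs-classifier", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=TextDataset(train_texts, train_labels),
)
clf_trainer.train()

Loading the classifier from the masked-LM checkpoint keeps the domain-adapted encoder weights and initializes only the new classification head, which mirrors the highlights' point that domain-language adaptation, even on small corpora, drives FvS performance.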

Keywords: Fake news, Satire, Sarcasm, Deep learning, Transformers, BERT, DistilBERT, Classification

Article history: Received 26 May 2021, Revised 25 July 2021, Accepted 27 August 2021, Available online 11 September 2021, Version of Record 13 September 2021.

Article link: https://doi.org/10.1016/j.eswa.2021.115824