A quantitative argumentation-based Automated eXplainable Decision System for fake news detection on social media

Authors:

Highlights:

Abstract

Social media is flooded with rumors, which makes fake news detection a pressing problem. Many black-box approaches have been proposed to automatically predict the veracity of claims, but these methods lack interpretability. We therefore propose a Quantitative Argumentation-based Automated eXplainable Decision-making System (QA-AXDS) to tackle this problem and provide users with explanations of its results. The system is fully data-driven, which allows our models to exploit data more effectively and to be more automatic and scalable than other quantitative framework models. In terms of interpretability, the system automatically acquires human-level knowledge and interacts with users in the form of dialog trees generated by its explanatory model, helping them understand the system's internal reasoning process. Experimental results show that our system offers better transparency and interpretability than approaches based on pure machine learning methods, while remaining competitive in accuracy. In addition, the explanation model provides a way to improve the algorithms when problems are identified by inspecting the explanations.
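To give a concrete sense of what "quantitative argumentation" means here, the sketch below implements a generic quantitative bipolar argumentation step in the style of DF-QuAD semantics: an argument's base score is adjusted by the aggregated strengths of its attackers and supporters. This is an illustrative assumption for readers unfamiliar with the area, not the paper's actual QA-AXDS semantics, which may differ in both aggregation and influence functions.

```python
def aggregate(scores):
    """Probabilistic-sum aggregation: 1 - prod(1 - s_i) over attacker or
    supporter strengths, computed incrementally."""
    acc = 0.0
    for s in scores:
        acc = acc + s - acc * s
    return acc


def strength(base, attackers=(), supporters=()):
    """Adjust a claim's base score by aggregated attack/support strength
    (DF-QuAD-style combination; illustrative only)."""
    va = aggregate(attackers)   # combined attacking force
    vs = aggregate(supporters)  # combined supporting force
    if va >= vs:
        # Net attack pulls the score toward 0.
        return base - base * (va - vs)
    # Net support pulls the score toward 1.
    return base + (1.0 - base) * (vs - va)


# A claim with base score 0.5, one attacker (0.6), two supporters (0.4, 0.3):
claim_strength = strength(0.5, attackers=[0.6], supporters=[0.4, 0.3])
```

In an argumentation-based fake news detector, such strengths could be traced back through the attack/support graph, which is what makes dialog-tree explanations of the final verdict possible.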

Keywords: eXplainable artificial intelligence, Quantitative argumentation, Fake news detection

Article history: Received 1 November 2021, Revised 11 January 2022, Accepted 4 February 2022, Available online 10 February 2022, Version of Record 21 February 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.108378