Investigating Facebook’s interventions against accounts that repeatedly share misinformation

Abstract

Like many web platforms, Facebook is under pressure to regulate misinformation. According to the company, users who repeatedly share misinformation ('repeat offenders') will have their distribution reduced, but little is known about the implementation or the impact of this measure. The first contribution of this paper is a methodology to investigate the implementation and consequences of this measure, combining fact-checking data with engagement metrics. Using the Science Feedback and Social Science One (Condor) datasets, we identified a set of public accounts (groups and pages) that repeatedly shared misinformation during the 2019–2020 period. We find that engagement per post decreased significantly for Facebook pages after they had shared two or more pieces of 'false news'. The median decrease for pages identified with the Science Feedback dataset is −43%, while it reaches −62% for pages identified with the Condor dataset. In a complementary approach, we identified a set of pages claiming to be under 'reduced distribution' for repeatedly sharing misinformation and to have received a notification from Facebook. For these pages, we observed a median decrease of −25% in engagement per post, comparing the 30 days after the notification with the 30 days before. We show that this 'repeat offenders' penalty did not apply to Facebook groups. Instead, we find that groups were affected in a different way, with a sudden drop in their average engagement per post around June 9, 2020. While this drop cut the groups' engagement per post roughly in half, the decrease was compensated by the fact that these accounts doubled their number of posts between early 2019 and summer 2020. The net result is that the total engagement on posts from 'repeat offender' accounts (including both pages and groups) returned to its early 2019 level. Overall, Facebook's policy thus appears to contain the increase in misinformation shared by 'repeat offenders' rather than to decrease it.
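To make the before/after comparison concrete, the sketch below illustrates one way to compute the relative change in engagement per post around an event date (e.g. the second 'false news' share or the 'reduced distribution' notification) and the median of that change across accounts. This is a minimal illustration, not the authors' actual pipeline: the data layout (lists of (date, engagement) tuples per account) and the helper names `engagement_change` and `median_change` are assumptions for the example.

```python
from statistics import median
from datetime import timedelta

def engagement_change(posts, event_date, window_days=30):
    """Relative change in mean engagement per post over the window_days
    after event_date versus the window_days before it.

    `posts` is assumed to be a list of (date, engagement) tuples for one
    account (hypothetical format, not the paper's data schema).
    """
    window = timedelta(days=window_days)
    before = [e for d, e in posts if event_date - window <= d < event_date]
    after = [e for d, e in posts if event_date <= d < event_date + window]
    if not before or not after:
        return None  # not enough activity on one side of the event
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return (mean_after - mean_before) / mean_before  # e.g. -0.25 for a 25% drop

def median_change(accounts):
    """Median relative change across accounts, skipping accounts with no data.

    `accounts` is assumed to be a list of (posts, event_date) pairs.
    """
    changes = (engagement_change(posts, event_date) for posts, event_date in accounts)
    return median(c for c in changes if c is not None)
```

Under these assumptions, a value of −0.25 returned by `median_change` would correspond to the −25% median decrease reported in the abstract for pages that received a notification.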

Keywords: Misinformation, Content moderation, Algorithmic transparency, Facebook, Fact-checking, Social media analysis

Article history: Received 20 January 2021, Revised 26 September 2021, Accepted 23 October 2021, Available online 26 November 2021, Version of Record 26 November 2021.

Article URL: https://doi.org/10.1016/j.ipm.2021.102804