
Interpretable Rumor Detection in Microblogs by Attending to User Interactions

About

We address rumor detection by learning to differentiate between the community's response to real and fake claims in microblogs. Existing state-of-the-art models are based on tree models that model conversational trees. However, in social media, a user posting a reply might be replying to the entire thread rather than to a specific user. We propose a post-level attention model (PLAN) to model long-distance interactions between tweets with the multi-head attention mechanism in a transformer network. We investigate two variants of this model: (1) a structure-aware self-attention model (StA-PLAN) that incorporates tree structure information in the transformer network, and (2) a hierarchical token- and post-level attention model (StA-HiTPLAN) that learns a sentence representation with token-level self-attention. We evaluate our models on two rumor detection data sets: the PHEME data set as well as the Twitter15 and Twitter16 data sets. We show that our best models outperform current state-of-the-art models on both data sets. Moreover, the attention mechanism allows us to explain rumor detection predictions at both the token level and the post level.
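The core idea of the abstract can be illustrated with scaled dot-product self-attention applied over post vectors: every post in a thread attends to every other post, regardless of reply-tree distance, and the resulting attention weights are what make predictions inspectable at the post level. The sketch below is a minimal single-head illustration with random toy data; the shapes, weight matrices, and pooling are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def post_self_attention(posts, W_q, W_k, W_v):
    """Single attention head over post representations.

    posts: (num_posts, d) array, one encoded vector per tweet in the thread.
    Returns the updated post representations and the (num_posts, num_posts)
    attention matrix, whose rows show which posts each post attended to.
    """
    Q, K, V = posts @ W_q, posts @ W_k, posts @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise post-to-post scores
    attn = softmax(scores, axis=-1)           # each post attends to all posts
    return attn @ V, attn

# Toy thread: 5 posts with 8-dimensional embeddings (random placeholders)
rng = np.random.default_rng(0)
d = 8
posts = rng.normal(size=(5, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

out, attn = post_self_attention(posts, W_q, W_k, W_v)
print(out.shape, attn.shape)  # (5, 8) (5, 5)
```

Each row of `attn` is a probability distribution over the thread, so a reply can place weight on the source tweet or any other post, not only its direct parent, which is the long-distance interaction the transformer-based PLAN model exploits.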

Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, Jing Jiang • 2020

Related benchmarks

Task                      | Dataset                           | Result                         | Rank
Stance Detection          | RumorEval S (PH)                  | Micro F1: 80                   | 15
Rumour Classification     | PHEME and Twitter 15 16           | F1 Score: 42                   | 12
Turnaround Identification | PHEME and Twitter15/16 (various)  | Turnaround Accuracy (A): 0.09  | 12
Rumor Verification        | SemEval-8 (Public Holdout (PH))   | Micro F1: 79.4                 | 11
