
LLM-Based Adversarial Persuasion Attacks on Fact-Checking Systems

About

Automated fact-checking (AFC) systems are susceptible to adversarial attacks, enabling false claims to evade detection. Existing adversarial frameworks typically rely on injecting noise or altering semantics, yet none exploits the adversarial potential of persuasion techniques, which are widely used in disinformation campaigns to manipulate audiences. In this paper, we introduce a novel class of persuasive adversarial attacks on AFC systems by employing a generative LLM to rephrase claims using persuasion techniques. Considering 15 techniques grouped into 6 categories, we study the effects of persuasion on both claim verification and evidence retrieval using a decoupled evaluation strategy. Experiments on the FEVER and FEVEROUS benchmarks show that persuasion attacks can substantially degrade both verification performance and evidence retrieval. Our analysis identifies persuasion techniques as a potent class of adversarial attacks, highlighting the need for more robust AFC systems.
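The attack pipeline described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the technique names, the `build_attack_prompt` helper, and the `evaluate_decoupled` harness are all assumptions, showing only the general shape of (1) prompting an LLM to rephrase a claim with one persuasion technique and (2) scoring retrieval and verification separately.

```python
# Hypothetical sketch of the persuasion-attack pipeline (names are
# illustrative assumptions, not the paper's actual implementation).

# A small sample of persuasion techniques with rewrite instructions;
# the paper considers 15 techniques grouped into 6 categories.
PERSUASION_TECHNIQUES = {
    "appeal_to_authority": "Rephrase the claim as if endorsed by experts.",
    "loaded_language": "Rephrase the claim using emotionally charged wording.",
    "appeal_to_fear": "Rephrase the claim to emphasize threatening consequences.",
}

def build_attack_prompt(claim: str, technique: str) -> str:
    """Construct an LLM prompt that rewrites a claim with one persuasion technique."""
    instruction = PERSUASION_TECHNIQUES[technique]
    return (
        f"{instruction}\n"
        "Preserve the claim's factual content; change only its framing.\n"
        f"Claim: {claim}\n"
        "Rewritten claim:"
    )

def evaluate_decoupled(perturbed_claim, retriever, verifier, gold_evidence, gold_label):
    """Score retrieval and verification separately (decoupled evaluation).

    Verification runs on the perturbed claim with *gold* evidence so that
    retrieval errors do not confound the verdict; retrieval is scored on
    whether any gold evidence appears in the retrieved set. The exact
    metrics used in the paper are assumed, not reproduced."""
    retrieved = retriever(perturbed_claim)
    retrieval_hit = any(ev in retrieved for ev in gold_evidence)
    verdict = verifier(perturbed_claim, gold_evidence)
    return {"retrieval_hit": retrieval_hit, "label_flipped": verdict != gold_label}
```

With stub retriever/verifier functions in place of real FEVER-style components, `evaluate_decoupled` reports whether the persuasive rewrite broke evidence retrieval, flipped the verification label, or both.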

João A. Leite, Olesya Razuvayevskaya, Kalina Bontcheva, Carolina Scarton • 2026

Related benchmarks

Task          | Dataset  | Result        | Rank
Fact Checking | FEVEROUS | F1 Macro 72.2 | 14
Fact Checking | FEVER    | F1 Macro 76.4 | 14
