
Adversarial Style Augmentation via Large Language Model for Robust Fake News Detection

About

The spread of fake news harms individuals and presents a critical social challenge that must be addressed. Although numerous algorithmic and insightful features have been developed to detect fake news, many of these features can be manipulated by style-conversion attacks, especially with the emergence of advanced language models, making fake news more difficult to distinguish from genuine news. This study proposes adversarial style augmentation, AdStyle, designed to train a fake news detector that remains robust against various style-conversion attacks. The primary mechanism is the strategic use of large language models (LLMs) to automatically generate a diverse and coherent array of style-conversion attack prompts, with an emphasis on producing prompts that are particularly challenging for the detector. Experiments on fake news benchmark datasets show that this augmentation strategy significantly improves both robustness and detection performance.
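The loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`rewrite_with_prompt`, `detector_confidence`), the prompt pool, and the keyword-based detector are all placeholder assumptions, and the LLM call is stubbed out so the pipeline runs offline.

```python
# Hedged sketch of an AdStyle-like adversarial style augmentation loop.
# All names and heuristics here are illustrative assumptions.

STYLE_PROMPTS = [
    "rewrite as a formal news report",
    "rewrite in a casual social-media tone",
    "rewrite as an objective summary",
]

def rewrite_with_prompt(article: str, prompt: str) -> str:
    """Stand-in for an LLM style-conversion call: simply tags the text
    with the prompt so the example is runnable without an API."""
    return f"[{prompt}] {article}"

def detector_confidence(article: str) -> float:
    """Toy detector: confidence that the article is fake, from a naive
    keyword heuristic (placeholder for a trained classifier)."""
    fake_markers = ("shocking", "miracle", "you won't believe")
    score = sum(marker in article.lower() for marker in fake_markers)
    return min(1.0, 0.3 + 0.3 * score)

def hardest_prompt(article: str) -> str:
    """Select the style prompt whose rewrite most lowers detector
    confidence, i.e. the most adversarial augmentation for this item."""
    return min(
        STYLE_PROMPTS,
        key=lambda p: detector_confidence(rewrite_with_prompt(article, p)),
    )

def augment(dataset):
    """Return the originals plus one adversarially restyled copy each,
    with labels preserved, for retraining the detector."""
    augmented = []
    for article, label in dataset:
        augmented.append((article, label))
        prompt = hardest_prompt(article)
        augmented.append((rewrite_with_prompt(article, prompt), label))
    return augmented
```

In a real setting, `rewrite_with_prompt` would call an LLM and `detector_confidence` would be the current detector being trained; the key idea is only the selection step, which prefers the style conversions the detector handles worst.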

Sungwon Park, Sungwon Han, Xing Xie, Jae-Gil Lee, Meeyoung Cha • 2024

Related benchmarks

Task                  Dataset      Accuracy  Rank
Fake News Detection   PolitiFact   89.9      53
Fake News Detection   GossipCop    85.7      48
Fake News Detection   Weibo        91.8      32
