
Humanizing Machine-Generated Content: Evading AI-Text Detection through Adversarial Attack

About

With the development of large language models (LLMs), detecting whether text is machine-generated has become increasingly challenging, even as it grows more important for curbing malicious use cases such as the spread of false information, and for protecting intellectual property and preventing academic plagiarism. While well-trained text detectors have demonstrated promising performance on unseen test data, recent research suggests that these detectors are vulnerable to adversarial attacks such as paraphrasing. In this paper, we propose a framework for a broader class of adversarial attacks, designed to perform minor perturbations on machine-generated content to evade detection. We consider two attack settings, white-box and black-box, and employ adversarial learning in dynamic scenarios to assess whether the robustness of current detection models can be enhanced against such attacks. The empirical results reveal that current detection models can be compromised in as little as 10 seconds, leading to the misclassification of machine-generated text as human-written content. Furthermore, we explore the prospect of improving model robustness through iterative adversarial learning. Although some improvements in robustness are observed, practical applications still face significant challenges. These findings shed light on the future development of AI-text detectors, emphasizing the need for more accurate and robust detection methods.
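
The abstract does not spell out the perturbation procedure, so the following is only a minimal sketch of the generic black-box setting it describes: small word-level substitutions are applied to machine-generated text, a change is kept only if a query to the target detector lowers its "machine-generated" score, and the loop stops once the text falls below the decision threshold. The detector_score placeholder, the SYNONYMS table, the 0.5 threshold, and the query budget are all illustrative assumptions, not the paper's implementation; in the white-box setting, gradients of the detector would guide the substitutions instead of blind queries.

import random

# Illustrative synonym table; a real attack would propose fluent,
# meaning-preserving substitutions via embeddings or a language model.
SYNONYMS = {
    "utilize": ["use", "employ"],
    "demonstrate": ["show", "reveal"],
    "significant": ["notable", "marked"],
    "therefore": ["so", "thus"],
}

def detector_score(text: str) -> float:
    """Placeholder for a query to the detector under attack.

    In the black-box setting the attacker observes only this score
    (probability that `text` is machine-generated).
    """
    raise NotImplementedError("plug in the target detector here")

def black_box_attack(text: str, threshold: float = 0.5,
                     max_queries: int = 200) -> str:
    """Greedy word-substitution attack: keep a perturbation only if it
    lowers the detector's machine-generated score."""
    words = text.split()
    best = detector_score(text)
    for _ in range(max_queries):
        if best < threshold:          # now classified as human-written
            break
        i = random.randrange(len(words))
        candidates = SYNONYMS.get(words[i].lower())
        if not candidates:
            continue
        original = words[i]
        words[i] = random.choice(candidates)
        score = detector_score(" ".join(words))
        if score < best:
            best = score              # accept the perturbation
        else:
            words[i] = original       # revert
    return " ".join(words)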

Ying Zhou, Ben He, Le Sun • 2024

Related benchmarks

Task                        Dataset                                              Result           Rank
Adversarial Evasion Attack  MGTBench Reuters                                     ASR: 1           24
Adversarial Evasion Attack  MGTBench Essay                                       ASR: 4           24
Adversarial Evasion Attack  MGTBench WP                                          ASR: 15          24
Adversarial Evasion Attack  MGT-Academic Humanity                                ASR: 3           22
Adversarial Evasion Attack  MGT-Academic Social Science                          ASR: 7           22
Adversarial Evasion Attack  MGT-Academic STEM                                    ASR: 3           22
Adversarial Attack          MGT Detector Evaluation Set XLM-RoBERTa-Base (test)  ASR (%): 42.6    15
Adversarial Attack          Fast-DetectGPT                                       ASR: 10.97       6
Adversarial Attack          Binoculars                                           ASR: 0.3735      6
Adversarial Attack          GPT2 F.t.                                            ASR (%): 42.66   6
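Every result above is reported as an attack success rate (ASR); two leaderboards report it explicitly as a percentage. The page does not define the metric, but a common formulation, given here as a minimal Python sketch (the function name and the boolean encoding of detector verdicts are illustrative assumptions), is the share of machine-generated samples that the attack flips from a "machine" to a "human" verdict:

def attack_success_rate(preds_before, preds_after) -> float:
    """ASR over machine-generated samples: share of texts detected as
    machine-generated before the attack but as human-written after it.
    Both inputs are booleans, True = detector says machine-generated."""
    flipped = sum(b and not a for b, a in zip(preds_before, preds_after))
    detected = sum(preds_before)
    return flipped / detected if detected else 0.0

Under this definition, a higher ASR means a more effective attack, or equivalently a less robust detector.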
