
Paraphrasing Adversarial Attack on LLM-as-a-Reviewer

About

The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates injection susceptibility with evaluation robustness. We propose the Paraphrasing Adversarial Attack (PAA), a black-box optimization method that searches for paraphrased sequences yielding higher review scores while preserving semantic equivalence and linguistic naturalness. PAA leverages in-context learning, using previous paraphrases and their scores to guide candidate generation. Experiments across five ML and NLP conferences with three LLM reviewers and five attacking models show that PAA consistently increases review scores without changing the paper's claims. Human evaluation confirms that generated paraphrases maintain meaning and naturalness. We also find that attacked papers exhibit increased perplexity in reviews, offering a potential detection signal, and that paraphrasing submissions can partially mitigate attacks.
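The abstract describes PAA as a black-box loop: propose a paraphrase, query the reviewer for a score, and feed the accumulated (paraphrase, score) history back into the generator via in-context learning. A minimal sketch of that loop is below; the `paraphrase` and `review_score` functions are toy stand-ins for the attacker LLM and the LLM reviewer (the paper does not publish this code, so all names and details here are illustrative assumptions).

```python
def paraphrase(text, history):
    """Stand-in for an attacker LLM that, via in-context learning, would see
    prior (paraphrase, score) pairs and propose a new candidate. Here we just
    tag the text with a variant id so the loop is runnable."""
    return f"{text} [variant {len(history)}]"

def review_score(text):
    """Stand-in for the black-box LLM reviewer returning a numeric review
    score. Here it is a deterministic toy function of the text."""
    return 5.0 + (0.1 if "variant" in text else 0.0)

def paa_search(paper_text, n_iters=5):
    """Black-box paraphrase search: keep a history of (candidate, score)
    pairs, pass it to the generator, and return the best-scoring candidate
    along with the full history."""
    history = [(paper_text, review_score(paper_text))]
    for _ in range(n_iters):
        candidate = paraphrase(paper_text, history)
        history.append((candidate, review_score(candidate)))
    best = max(history, key=lambda pair: pair[1])
    return best, history
```

In the real attack, a semantic-equivalence and naturalness check would also constrain each candidate before it enters the history; that filter is omitted here for brevity.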

Masahiro Kaneko • 2026

Related benchmarks

Task                     Dataset       Avg Review Score  Rank
Review Score Generation  ACL 2025      3.8               10
Review Score Generation  NeurIPS 2025  4.8               10
Review Score Generation  ICML 2025     3.7               10
Review Score Generation  ICLR 2025     6.4               10
Review Score Generation  AAAI 2025     5.7               10
