
Poisoning Retrieval Corpora by Injecting Adversarial Passages

About

Dense retrievers have achieved state-of-the-art performance in various information retrieval tasks, but to what extent can they be safely deployed in real-world applications? In this work, we propose a novel attack on dense retrieval systems in which a malicious user generates a small number of adversarial passages by perturbing discrete tokens to maximize similarity with a provided set of training queries. When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems into retrieving them for queries that were not seen by the attacker. More surprisingly, these adversarial passages can directly generalize to out-of-domain queries and corpora with a high attack success rate -- for instance, we find that 50 generated passages optimized on Natural Questions can mislead >94% of questions posed in financial documents or online forums. We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised. Although different systems exhibit varying levels of vulnerability, we show they can all be successfully attacked by injecting up to 500 passages, a small fraction compared to a retrieval corpus of millions of passages.
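The core idea -- iteratively swapping discrete tokens in a passage to raise its similarity to a set of training queries -- can be illustrated with a toy sketch. Everything below is illustrative: the embedding table, mean-pooling encoder, vocabulary size, and exhaustive per-position search are stand-ins for a real dense retriever (the paper uses trained bi-encoders and HotFlip-style gradient-guided candidate selection rather than brute-force search):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, PASS_LEN = 50, 16, 8

# Toy embedding table standing in for a dense retriever's encoder.
E = rng.normal(size=(VOCAB, DIM))

def embed(token_ids):
    """Mean-pool token vectors and L2-normalize, mimicking a bi-encoder."""
    v = E[token_ids].mean(axis=0)
    return v / np.linalg.norm(v)

def attack(queries, passage, n_rounds=5):
    """Greedy coordinate ascent over token positions: at each position,
    try every vocabulary token and keep the swap that most increases the
    mean similarity to the training queries. (The paper instead ranks
    candidate swaps with gradients, HotFlip-style, which scales to real
    vocabularies; the objective is the same.)"""
    q = np.stack([embed(qi) for qi in queries])  # (num_queries, DIM)
    passage = list(passage)
    for _ in range(n_rounds):
        for pos in range(len(passage)):
            best_tok = passage[pos]
            best_sim = (q @ embed(passage)).mean()
            for tok in range(VOCAB):
                cand = passage.copy()
                cand[pos] = tok
                sim = (q @ embed(cand)).mean()
                if sim > best_sim:
                    best_tok, best_sim = tok, sim
            passage[pos] = best_tok
    return passage

# Optimize one adversarial passage against 10 toy "training" queries.
queries = [rng.integers(0, VOCAB, size=6) for _ in range(10)]
init = list(rng.integers(0, VOCAB, size=PASS_LEN))
adv = attack(queries, init)

q = np.stack([embed(qi) for qi in queries])
before = (q @ embed(init)).mean()
after = (q @ embed(adv)).mean()
print(f"mean query similarity: {before:.3f} -> {after:.3f}")
```

Because each swap is only kept when it improves the objective, the optimized passage's mean similarity to the query set never decreases; in the paper's setting, this is what makes the passage surface in top-k retrieval for held-out queries once injected into the corpus.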

Zexuan Zhong, Ziqing Huang, Alexander Wettig, Danqi Chen • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Retrieval Attack Defense | Natural Questions (NQ) | -- | 99 |
| Open-domain Question Answering | MS Marco | -- | 48 |
| Text-to-SQL | EHRSQL | Execution Accuracy: 85 | 37 |
| Question Answering | HotpotQA | Accuracy: 4.2 | 37 |
| SQL Generation | EHR SQL closed-weight models | Accuracy: 75.5 | 35 |
| Knowledge-intensive QA | StrategyQA | ACC: 57.1 | 24 |
| Healthcare Record Management | EHRAgent | Accuracy (ACC): 67.9 | 24 |
| Autonomous Driving | Agent-Driver | Accuracy (ACC): 87.5 | 24 |
| Question Answering | NQ | ATR: 98 | 16 |
| Retrieval-Augmented Question Answering | HotpotQA | ATR: 100 | 16 |

(Showing 10 of 28 rows.)
