
Bias Injection Attacks on RAG Databases and Sanitization Defenses

About

This paper explores attacks and defenses on vector databases in retrieval-augmented generation (RAG) systems. Prior work on knowledge poisoning primarily injects false or toxic content, which fact-checking or linguistic analysis can easily detect. We reveal a new and subtle threat: bias injection attacks, which insert factually correct yet semantically biased passages into the knowledge base to covertly influence the ideological framing of answers generated by large language models (LLMs). We demonstrate that these adversarial passages, though linguistically coherent and truthful, can systematically crowd out opposing views from the retrieved context and steer LLM answers toward the attacker's intended perspective. We precisely characterize this class of attacks and then develop a post-retrieval filtering defense, BiasDef. To evaluate both, we construct a comprehensive benchmark based on public question-answering datasets. Our results show that: (1) the proposed attack induces significant perspective shifts in LLM answers and effectively evades existing retrieval-based sanitization defenses; and (2) BiasDef outperforms existing methods, reducing the number of adversarial passages retrieved by 15%, which mitigates the perspective shift in answers by 6.2×, while enabling the retrieval of 62% more benign passages.
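The abstract does not spell out BiasDef's algorithm, but the general idea of a post-retrieval filter that counteracts one-sided context can be sketched generically. In the sketch below, `stance_score` is a hypothetical placeholder (a real system would use a trained stance or NLI classifier), and the greedy balancing strategy is an illustrative assumption, not the paper's method:

```python
# Hedged sketch of post-retrieval perspective balancing.
# Assumptions (not from the paper): `stance_score` returns a stance in
# [-1, 1] for a passage, and we greedily select k passages whose
# aggregate stance stays as close to neutral as possible.

def stance_score(passage: str) -> float:
    """Toy stance estimator in [-1, 1]; a real system would use a
    stance-classification or NLI model instead of keyword counts."""
    pro = sum(passage.lower().count(w) for w in ("benefit", "support"))
    con = sum(passage.lower().count(w) for w in ("harm", "oppose"))
    total = pro + con
    return 0.0 if total == 0 else (pro - con) / total

def balance_retrieved(passages: list[str], k: int) -> list[str]:
    """Greedily pick k passages, keeping the running stance sum near 0,
    so adversarial passages pushing one perspective cannot dominate."""
    chosen: list[str] = []
    remaining = list(passages)
    while remaining and len(chosen) < k:
        current = sum(stance_score(p) for p in chosen)
        # Choose the passage that moves the aggregate stance closest to neutral.
        best = min(remaining, key=lambda p: abs(current + stance_score(p)))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For example, given two strongly pro passages and two strongly con passages with `k=2`, the greedy step alternates sides, so the selected context carries a near-zero aggregate stance rather than the skew a bias-injection attacker would want.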

Hao Wu, Prateek Saxena • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Information Retrieval | WIKI-BALANCE (test) | -- | 28 |
| Information Retrieval | WIKI-BALANCE | -- | 28 |
| Perspective Shift Analysis of Retrieved Passages | WIKI-BALANCE | -- | 24 |
| Question Answering | WIKI-BALANCE (test) | -- | 24 |
| Adversarial Bias Mitigation | WIKI-BALANCE | Avg \|PS\| (Unattacked): 0.068 | 18 |
| Adversarial Bias Mitigation | Reddit-Dialogues | Unattacked Avg \|PS\|: 0.025 | 6 |
| Adversarial Bias Mitigation | HotpotQA | Unattacked Avg \|PS\|: -0.015 | 6 |
