Share your thoughts, 1 month free Claude Pro on usSee more
WorkDL logo mark

Defending Against Knowledge Poisoning Attacks During Retrieval-Augmented Generation

About

Retrieval-Augmented Generation (RAG) has emerged as a powerful approach to boost the capabilities of large language models (LLMs) by incorporating external, up-to-date knowledge sources. However, this introduces a potential vulnerability to knowledge poisoning attacks, where attackers can compromise the knowledge source to mislead the generation model. One such attack is the PoisonedRAG in which the injected adversarial texts steer the model to generate an attacker-chosen response to a target question. In this work, we propose novel defense methods, FilterRAG and ML-FilterRAG, to mitigate the PoisonedRAG attack. First, we propose a new property to uncover distinct properties to differentiate between adversarial and clean texts in the knowledge data source. Next, we employ this property to filter out adversarial texts from clean ones in the design of our proposed approaches. Evaluation of these methods using benchmark datasets demonstrate their effectiveness, with performances close to those of the original RAG systems.

Kennedy Edemacu, Vinay M. Shashidhar, Micheal Tuape, Dan Abudu, Beakcheol Jang, Jong Wook Kim• 2025

Related benchmarks

TaskDatasetResultRank
Question AnsweringHotpotQA
Accuracy90.5
37
Question AnsweringNQ
ATR3
16
Retrieval-Augmented Question AnsweringHotpotQA
ATR3
16
Retrieval-Augmented Question AnsweringMS Marco
ATR15.5
16
Retrieval-Augmented Question AnsweringNQ
ATR4
16
Showing 5 of 5 rows

Other info

Follow for update