
RPO: Retrieval Preference Optimization for Robust Retrieval-Augmented Generation

About

While Retrieval-Augmented Generation (RAG) has shown promise in utilizing external knowledge, its generation quality heavily depends on the quality and accuracy of the retrieved context. Large language models (LLMs) struggle to evaluate the correctness of externally retrieved, non-parametric knowledge when it differs from their internal memorization, leading to knowledge conflicts during response generation. To address this, we introduce Retrieval Preference Optimization (RPO), a lightweight and effective alignment method that adaptively leverages multi-source knowledge based on retrieval relevance. An implicit representation of retrieval relevance is derived and incorporated into the reward model, integrating retrieval evaluation and response generation into a single model and removing the additional retrieval-quality assessment step that previous methods require. Notably, RPO is the only RAG-dedicated alignment approach that quantifies awareness of retrieval relevance during training, overcoming the associated mathematical obstacles. Experiments on four datasets demonstrate that RPO outperforms RAG by 4-10% in accuracy without any extra components, exhibiting robust generalization.
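The abstract does not give RPO's exact formulation, but the general idea — folding an implicit retrieval-relevance signal into a preference-optimization reward so one model handles both retrieval evaluation and generation — can be sketched as follows. All function names, the sigmoid-based relevance proxy, and the DPO-style loss shape here are illustrative assumptions, not the paper's actual derivation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def implicit_relevance(logp_answer_with_ctx, logp_answer_no_ctx):
    """Hypothetical proxy for retrieval relevance: how much the retrieved
    context raises the model's own log-likelihood of the answer. A large
    positive gap suggests the context is useful; a negative gap suggests
    it conflicts with parametric knowledge. (Assumed, not from the paper.)"""
    return sigmoid(logp_answer_with_ctx - logp_answer_no_ctx)

def rpo_style_loss(logp_chosen, logp_rejected, relevance, beta=0.1):
    """DPO-style preference loss with the implicit relevance scaling the
    reward margin, so retrieval evaluation and response preference are
    trained in a single objective. (Illustrative sketch only.)"""
    margin = beta * (logp_chosen - logp_rejected) * relevance
    return -math.log(sigmoid(margin))

# Example: context raises answer likelihood (-2.0 vs -5.0), so relevance
# is high and the preference margin is mostly preserved.
rel = implicit_relevance(-2.0, -5.0)
loss = rpo_style_loss(logp_chosen=-1.0, logp_rejected=-4.0, relevance=rel)
```

When relevance approaches zero (retrieved context does not help the answer), the margin collapses and the loss stops pushing the model toward the retrieval-conditioned response — the adaptive behavior the abstract describes.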

Shi-Qi Yan, Quan Liu, Zhen-Hua Ling · 2025

Related benchmarks

Task                 Dataset            Result          Rank
Question Answering   TriviaQA (test)    Accuracy 74.4   121
Question Answering   NQ (test)          --              66
Question Answering   PopQA (test)       Accuracy 65.4   39
Question Answering   RGB (test)         Accuracy 100    11
