Teaching Large Language Models to Maintain Contextual Faithfulness via Synthetic Tasks and Reinforcement Learning

About

Teaching large language models (LLMs) to remain faithful to the provided context is crucial for building reliable information-seeking systems. We therefore propose CANOE, a systematic framework that reduces faithfulness hallucinations of LLMs across diverse downstream tasks without any human annotation. Specifically, we first synthesize short-form question-answering (QA) data from four diverse tasks to construct high-quality, easily verifiable training data. We then propose Dual-GRPO, a rule-based reinforcement learning method with three tailored rule-based rewards derived from the synthesized short-form QA data, which simultaneously optimizes both short-form and long-form response generation. Notably, Dual-GRPO eliminates the need to manually label preference data for training reward models and avoids over-optimizing short-form generation when relying solely on the synthesized short-form QA data. Experimental results show that CANOE substantially improves the faithfulness of LLMs across 11 different tasks, even outperforming the most advanced LLMs, e.g., GPT-4o and OpenAI o1.
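The abstract does not spell out the three tailored rewards, but the core idea of a rule-based, automatically verifiable reward over synthesized short-form QA pairs can be illustrated. Below is a minimal sketch, assuming standard QA answer normalization; the function names `normalize` and `short_form_reward` are hypothetical and not from the paper, and this is not the paper's actual reward design.

```python
import re
import string

def normalize(text: str) -> str:
    """Standard QA normalization: lowercase, strip punctuation and
    articles, collapse whitespace. (Illustrative, not from the paper.)"""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def short_form_reward(prediction: str, gold_answers: list[str]) -> float:
    """Rule-based reward: 1.0 if the normalized prediction exactly matches
    any gold answer from a synthesized QA pair, else 0.0. Because the
    answer is checkable by a rule, no learned reward model and no
    human-labeled preference data are needed."""
    pred = normalize(prediction)
    return 1.0 if any(pred == normalize(g) for g in gold_answers) else 0.0

# Example: verifying a model's short-form answer against a synthesized pair.
print(short_form_reward("The Eiffel Tower.", ["Eiffel Tower"]))  # -> 1.0
```

In a GRPO-style setup, such rewards would score groups of sampled responses to compute relative advantages; Dual-GRPO's contribution, per the abstract, is applying rewards of this kind while jointly optimizing short-form and long-form generation so the model does not over-fit to short-form answers.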

Shuzheng Si, Haozhe Zhao, Cheng Gao, Yuzhuo Bai, Zhitong Wang, Bofei Gao, Kangyang Luo, Wenhao Li, Yufei Huang, Gang Chen, Fanchao Qi, Minjia Zhang, Baobao Chang, Maosong Sun • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | SQuAD | F1 | 63.2 | 134 |
| Faithfulness Evaluation | FaithEval | F1 | 71.6 | 42 |
| Multiple-choice Question Answering | ConFiQA MC | F1 | 87.2 | 42 |
| Question Answering | SQuAD (KRE-curated version) | F1 | 69.4 | 36 |
| Multi-step Reasoning Question Answering | ConFiQA MR (test) | F1 | 84.7 | 36 |
| Open-ended Question Answering | ConFiQA (test) | F1 | 92.5 | 36 |
| Question Answering | ConFiQA MR | F1 | 75.2 | 6 |
| Question Answering | ConFiQA | F1 | 74.3 | 6 |
