
Learning to Generate Answers with Citations via Factual Consistency Models

About

Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical situations. One approach to address this issue is to provide citations to relevant sources alongside generated content, enhancing the verifiability of generations. However, citing passages accurately in answers remains a substantial challenge. This paper proposes a weakly-supervised fine-tuning method leveraging factual consistency models (FCMs). Our approach alternates between generating texts with citations and supervised fine-tuning on FCM-filtered citation data. Focused learning is integrated into the objective, directing the fine-tuning process to emphasise the factual unit tokens, as measured by an FCM. Results on the ALCE few-shot citation benchmark with various instruction-tuned LLMs demonstrate superior performance compared to in-context learning, vanilla supervised fine-tuning, and state-of-the-art methods, with average improvements of $34.1$, $15.5$, and $10.5$ citation F$_1$ points, respectively. Moreover, in a domain-transfer setting, we show that the obtained citation generation ability transfers robustly to unseen datasets. Notably, our citation improvements contribute to the lowest factual error rate across all baselines.
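The core loop described above can be sketched in a few lines: generate answers with citations, keep only the sentence/passage pairs an FCM judges factually consistent, and fine-tune with a focused loss that up-weights factual-unit tokens. The sketch below is illustrative only: the toy overlap-based FCM and all function names are assumptions, not the authors' implementation.

```python
def fcm_score(claim: str, passage: str) -> float:
    """Toy factual consistency model: token-overlap proxy standing in for a real FCM."""
    c, p = set(claim.lower().split()), set(passage.lower().split())
    return len(c & p) / max(len(c), 1)

def filter_citations(samples, threshold=0.5):
    """Keep generated (sentence, cited passage) pairs the FCM deems consistent."""
    return [s for s in samples if fcm_score(s["sentence"], s["passage"]) >= threshold]

def focused_loss(token_losses, factual_mask, alpha=2.0):
    """Weighted NLL: factual-unit tokens (mask = 1) count alpha times as much."""
    weights = [alpha if m else 1.0 for m in factual_mask]
    return sum(w * l for w, l in zip(weights, token_losses)) / sum(weights)

# Example: one generated sample is consistent with its cited passage, one is not.
samples = [
    {"sentence": "The Eiffel Tower is in Paris",
     "passage": "The Eiffel Tower is a landmark in Paris [1]"},
    {"sentence": "The moon is made of cheese",
     "passage": "The Eiffel Tower is a landmark in Paris [1]"},
]
kept = filter_citations(samples)
print(len(kept))  # only the consistent pair survives FCM filtering
```

In the full method this filter-then-fine-tune step would alternate with regeneration, so the model's own improving citations supply the next round of weak supervision.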

Rami Aly, Zhiqiang Tang, Samson Tan, George Karypis • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Question Answering | ASQA (test) | Correctness EM Recall | 40 | 29
Citation-based Question Answering | ALCE-ASQA v1 (test) | EM Recall | 41.7 | 14
Citation-aware Question Answering | ALCE ASQA | EM Recall | 41.7 | 13
Citation-based Question Answering | ALCE-ELI5 v1 (test) | EM Recall | 19.5 | 13
Citation-aware Question Answering | ALCE ELI5 | EM Recall | 18.4 | 12
Factuality Evaluation | Bio (test) | FS Score | 88.9 | 8
Attributed Question Answering | ELI5 (test) | Rouge-L | 21.2 | 5

Other info

Code
