
Learning Fine-Grained Grounded Citations for Attributed Large Language Models

About

Despite their impressive performance on information-seeking tasks, large language models (LLMs) still struggle with hallucinations. Attributed LLMs, which augment generated text with in-line citations, have shown potential in mitigating hallucinations and improving verifiability. However, current approaches suffer from suboptimal citation quality due to their reliance on in-context learning. Furthermore, the practice of citing only coarse document identifiers makes it challenging for users to perform fine-grained verification. In this work, we introduce FRONT, a training framework designed to teach LLMs to generate Fine-Grained Grounded Citations. FRONT first grounds model outputs in fine-grained supporting quotes; these quotes then guide the generation of grounded and consistent responses, not only improving citation quality but also facilitating fine-grained verification. Experiments on the ALCE benchmark demonstrate the efficacy of FRONT in generating superior grounded responses and highly supportive citations. With LLaMA-2-7B, the framework significantly outperforms all baselines, achieving an average improvement of 14.21% in citation quality across all datasets, even surpassing ChatGPT.
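The abstract describes a two-stage pipeline: first ground fine-grained supporting quotes in the retrieved documents, then let those quotes steer the generation of a cited response. The sketch below illustrates that flow at inference time only; FRONT itself is a training framework, and the `generate` callable, prompt wording, and function names here are assumptions for illustration, not the paper's actual templates.

```python
from typing import Callable, List, Tuple

def grounded_cited_answer(
    question: str,
    documents: List[str],
    generate: Callable[[str], str],  # hypothetical LLM interface
) -> str:
    """Two-stage sketch: ground quotes per document, then generate a cited answer."""
    # Stage 1: grounding -- extract a fine-grained supporting quote from
    # each retrieved document, keeping its document id for later citation.
    quotes: List[Tuple[int, str]] = []
    for doc_id, doc in enumerate(documents, start=1):
        quote = generate(
            f"Document [{doc_id}]: {doc}\n"
            f"Question: {question}\n"
            "Quote the minimal span from this document that helps answer "
            "the question, or reply NONE."
        ).strip()
        if quote != "NONE":
            quotes.append((doc_id, quote))

    # Stage 2: generation -- answer strictly from the grounded quotes,
    # emitting an in-line citation like [1] after each supported statement.
    quote_block = "\n".join(f"[{i}] {q}" for i, q in quotes)
    return generate(
        f"Question: {question}\n"
        f"Supporting quotes:\n{quote_block}\n"
        "Answer using only these quotes, citing the source id in brackets "
        "after each supported statement."
    )
```

Keeping the quote ids from stage 1 is what makes the in-line citations verifiable: a reader can trace each bracketed citation back to the exact quoted span rather than to a whole document.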

Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu, Weihong Zhong, Xiachong Feng, Weijiang Yu, Weihua Peng, Duyu Tang, Dandan Tu, Bing Qin • 2024

Related benchmarks

Task                         | Dataset                       | Metric      | Result | Rank
Multi-hop Question Answering | HotpotQA (distractor setting) | Conciseness | 27.76  | 21
Multi-hop Question Answering | MuSiQue (answerable setting)  | Conciseness | 6.34   | 21
Attribution                  | ASQA                          | Precision   | 73.2   | 15
Attribution                  | ALCE Average                  | Avg. F1     | 50.3   | 15
Attribution                  | ELI5                          | Precision   | 51.9   | 15
Attribution                  | QAMPARI                       | Precision   | 31.9   | 15
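For reading the attribution rows: ALCE scores citations with NLI-based citation precision and recall, and the "Avg. F1" entry presumably averages the per-dataset citation F1 scores. A minimal sketch of how precision and recall combine, assuming the standard harmonic-mean definition (the example recall value below is invented, not from the paper):

```python
def citation_f1(precision: float, recall: float) -> float:
    """Harmonic mean of citation precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. citation_f1(73.2, 60.0) ~= 65.9  (the recall value is illustrative)
```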
