Towards Faithful and Robust LLM Specialists for Evidence-Based Question-Answering

About

Advances towards more faithful and traceable answers from Large Language Models (LLMs) are crucial for various research and practical endeavors. One avenue towards this goal is basing answers on reliable sources. However, this Evidence-Based QA has proven to work insufficiently with LLMs in terms of citing the correct sources (source quality) and truthfully representing the information within sources (answer attributability). In this work, we systematically investigate how to robustly fine-tune LLMs for better source quality and answer attributability. Specifically, we introduce a data generation pipeline with automated data quality filters, which can synthesize diversified, high-quality training and testing data at scale. We further introduce four test sets to benchmark the robustness of fine-tuned specialist models. Extensive evaluation shows that fine-tuning on synthetic data improves performance on both in- and out-of-distribution test sets. Furthermore, we show that data quality, which can be drastically improved by the proposed quality filters, matters more than quantity in improving Evidence-Based QA.
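To illustrate the kind of automated data quality filter the abstract describes, here is a minimal sketch of one plausible check for Evidence-Based QA samples: every answer sentence must cite at least one provided source, and no citation may point outside the source list. The function name, citation format (`[1]`-style markers), and filtering criteria are assumptions for illustration; they are not the paper's actual filters.

```python
import re

def citation_filter(answer: str, num_sources: int) -> bool:
    """Hypothetical quality filter: keep a synthetic QA sample only if every
    sentence of the answer carries at least one citation, and every cited
    index refers to a source that was actually provided.
    (Illustrative sketch only; not the paper's implementation.)"""
    # Split the answer into sentences on terminal punctuation.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    for sentence in sentences:
        cited = [int(m) for m in re.findall(r"\[(\d+)\]", sentence)]
        if not cited:
            return False  # unattributed sentence: drop the sample
        if any(c < 1 or c > num_sources for c in cited):
            return False  # dangling citation: drop the sample
    return True

# Keep well-formed samples, drop the rest.
good = "Solar capacity grew in 2022 [1]. Costs fell further [2]."
bad = "Solar capacity grew in 2022."  # no citation at all
print(citation_filter(good, num_sources=2))  # True
print(citation_filter(bad, num_sources=2))   # False
```

Filters of this shape are cheap to run over large synthetic corpora, which matches the abstract's point that improving data quality via filtering matters more than raw quantity.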

Tobias Schimanski, Jingwei Ni, Mathias Kraus, Elliott Ash, Markus Leippold• 2024

Related benchmarks

Task               Dataset            Result          Rank
Attribution        ASQA               Precision 47.4  15
Attribution        QAMPARI            Precision 26.2  15
Attribution        ALCE Average       Avg. F1 32.8    15
Attribution        ELI5               Precision 26.3  15
Evidence-Based QA  SYNSCIQA (test)    --              4
Evidence-Based QA  GENSEARCH (test)   --              4
Evidence-Based QA  CHATREPORT (test)  --              4
Evidence-Based QA  CLIMATEQA (test)   --              4
