
Localizing and Mitigating Errors in Long-form Question Answering

About

Long-form question answering (LFQA) aims to provide thorough and in-depth answers to complex questions, enhancing comprehension. However, such detailed responses are prone to hallucinations and factual inconsistencies, challenging their faithful evaluation. This work introduces HaluQuestQA, the first hallucination dataset with localized error annotations for human-written and model-generated LFQA answers. HaluQuestQA comprises 698 QA pairs with 1.8k span-level error annotations for five different error types by expert annotators, along with preference judgments. Using our collected data, we thoroughly analyze the shortcomings of long-form answers and find that they lack comprehensiveness and provide unhelpful references. We train an automatic feedback model on this dataset that predicts error spans with incomplete information and provides associated explanations. Finally, we propose a prompt-based approach, Error-informed refinement, that uses signals from the learned feedback model to refine generated answers, which we show reduces errors and improves answer quality across multiple models. Furthermore, humans find answers generated by our approach comprehensive and highly prefer them (84%) over the baseline answers.

Rachneet Sachdeva, Yixiao Song, Mohit Iyyer, Iryna Gurevych• 2024
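The abstract describes a two-stage pipeline: a learned feedback model localizes error spans and explains them, and those signals are folded into a prompt that asks the generator to refine its answer. The sketch below illustrates that loop under stated assumptions; the function names, the `ErrorSpan` structure, and the trivial keyword-based stand-in for the feedback model are all illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of Error-informed refinement: a feedback model flags
# error spans with explanations, and the generator is re-prompted with that
# feedback. All names below are illustrative, not from the paper's code.

from dataclasses import dataclass


@dataclass
class ErrorSpan:
    start: int          # character offset where the flagged span begins
    end: int            # character offset where it ends
    error_type: str     # e.g. "factuality", "completeness", "references"
    explanation: str    # natural-language rationale from the feedback model


def feedback_model(answer: str) -> list[ErrorSpan]:
    """Toy stand-in for the learned feedback model: flag hedging phrases."""
    spans = []
    for phrase in ("maybe", "not sure"):
        idx = answer.lower().find(phrase)
        if idx != -1:
            spans.append(ErrorSpan(
                idx, idx + len(phrase), "completeness",
                f"Vague phrase '{phrase}' weakens the answer."))
    return spans


def build_refinement_prompt(question: str, answer: str,
                            spans: list[ErrorSpan]) -> str:
    """Assemble an error-informed prompt asking the generator to revise."""
    feedback = "\n".join(
        f'- [{s.error_type}] "{answer[s.start:s.end]}": {s.explanation}'
        for s in spans)
    return (f"Question: {question}\n"
            f"Draft answer: {answer}\n"
            f"Detected issues:\n{feedback}\n"
            f"Rewrite the answer, fixing every issue above.")


question = "Why is the sky blue?"
draft = "Maybe because of light scattering, but I am not sure."
prompt = build_refinement_prompt(question, draft, feedback_model(draft))
print(prompt.count("- ["))  # number of flagged spans included in the prompt
```

In the paper's actual setup the refinement prompt would be sent back to the answer-generating LLM; here the loop stops at prompt construction to keep the sketch self-contained.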

Related benchmarks

Task                                     Dataset          Metric              Result   Rank
Long-form Question Answering             ASQA             --                  --       15
Long-form Question Answering refinement  HQ^2A (test)     Error Rate (%)      0.0065   6
Long-form Question Answering refinement  ASQA (test)      Error Rate (%)      16.63    5
Long-form Question Answering refinement  ELI5 (test)      Error Rate          0.0381   5
Long-form Question Answering             HQ^2A            Comprehensiveness   100      3
Sentence-level Error Detection           HQ^2A 1.0 (test) --                  --       1
