
FRACTAL: Fine-Grained Scoring from Aggregate Text Labels

About

Large language models (LLMs) are increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying, and reasoning. Traditionally, human or model feedback for evaluating and further tuning LLM performance has been provided at the response level, enabling faster and more cost-effective assessments. However, recent works (Amplayo et al. [2022], Wu et al. [2023]) indicate that sentence-level labels may provide more accurate and interpretable feedback for LLM optimization. In this work, we introduce methods to disaggregate response-level labels into sentence-level (pseudo-)labels. Our approach leverages multiple instance learning (MIL) and learning from label proportions (LLP) techniques in conjunction with prior information (e.g., document-sentence cosine similarity) to train a specialized model for sentence-level scoring. We also employ techniques that use model predictions to pseudo-label the training set at the sentence level, further improving performance. We conduct extensive evaluations of our methods across six datasets and four tasks: retrieval, question answering, summarization, and math reasoning. Our results demonstrate improved performance compared to multiple baselines across most of these tasks. Our work is the first to disaggregate response-level feedback into sentence-level scores while leveraging sentence-level prior information. We provide comprehensive evaluations on multiple tasks, together with end-to-end finetuning evaluations showing performance comparable to a model trained on fine-grained human-annotated labels.
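To make the prior-based disaggregation idea concrete, here is a minimal illustrative sketch (not the paper's implementation): a response-level score is split across sentences in proportion to each sentence's bag-of-words cosine similarity with the source document, standing in for the document-sentence similarity prior mentioned above. All function names and the tokenization are hypothetical simplifications.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def sentence_pseudo_labels(document, sentences, response_label):
    """Disaggregate one response-level label into per-sentence
    pseudo-labels weighted by a cosine-similarity prior."""
    doc_bow = Counter(document.lower().split())
    sims = [cosine(Counter(s.lower().split()), doc_bow) for s in sentences]
    total = sum(sims) or 1.0  # avoid division by zero
    return [response_label * s / total for s in sims]

doc = "The Nile is the longest river in Africa."
sents = ["The Nile is in Africa.", "Paris is the capital of France."]
labels = sentence_pseudo_labels(doc, sents, response_label=1.0)
```

The pseudo-labels sum to the aggregate label, so the document-supported sentence receives most of the credit; the paper's actual approach instead trains MIL/LLP models with such priors rather than using the prior directly.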

Yukti Makhija, Priyanka Agrawal, Rishi Saket, Aravindan Raghuveer • 2024

Related benchmarks

Task                        Dataset                  Metric    Result   Rank
Instance-level Evaluation   MultiSpanQA              AUC-ROC   0.693    7
Math Reasoning              PRM800K                  AUC-ROC   0.597    5
Summarization               AquaMuSe                 AUC-ROC   0.814    5
Question Answering Feedback QA-Feedback              AUC-ROC   0.532    5
Relevance Assessment        FiRA                     MAE       0.294    4
Summarization               WikiCatSum               AUC-ROC   0.645    4
Instance-level Evaluation   WikiCatSum               AUC-ROC   0.642    3
Instance-level Evaluation   AquaMuSe                 --        --       2
Instance-level Evaluation   QA Preference Feedback   --        --       1
Instance-level Evaluation   PRM800K                  --        --       1
