FRACTAL: Fine-Grained Scoring from Aggregate Text Labels
About
Large language models (LLMs) are increasingly tuned to power complex generation tasks such as writing, fact-seeking, querying, and reasoning. Traditionally, human or model feedback for evaluating and tuning LLM performance has been provided at the response level, enabling faster and more cost-effective assessments. However, recent works (Amplayo et al. [2022], Wu et al. [2023]) indicate that sentence-level labels may provide more accurate and interpretable feedback for LLM optimization. In this work, we introduce methods to disaggregate response-level labels into sentence-level (pseudo-)labels. Our approach leverages multiple instance learning (MIL) and learning from label proportions (LLP) techniques in conjunction with prior information (e.g., document-sentence cosine similarity) to train a specialized model for sentence-level scoring. To further improve performance, we also use model predictions to pseudo-label the training set at the sentence level for subsequent model training. We conduct extensive evaluations of our methods across six datasets and four tasks: retrieval, question answering, summarization, and math reasoning. Our results demonstrate improved performance compared to multiple baselines across most of these tasks. Our work is the first to develop techniques that convert response-level feedback into sentence-level scores while leveraging sentence-level prior information, with comprehensive evaluations on multiple tasks as well as an end-to-end finetuning evaluation showing performance comparable to a model trained on fine-grained, human-annotated labels.
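To make the disaggregation idea concrete, here is a minimal, hypothetical sketch (not the paper's actual method): a response-level binary label is pushed down to sentence-level pseudo-labels using the standard MIL assumption (a negative response implies all sentences are negative; a positive response implies at least one positive sentence) together with a document-sentence similarity prior. Bag-of-words cosine similarity stands in for the learned embedding similarity the paper would use, and all function names and the threshold are illustrative assumptions.

```python
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def sentence_pseudo_labels(sentences, document, response_label, threshold=0.5):
    """Disaggregate one response-level binary label into sentence-level
    pseudo-labels, using document-sentence cosine similarity as the prior.

    MIL assumption: a negative response (bag) contains only negative
    sentences (instances); a positive response contains at least one
    positive sentence.
    """
    doc_vec = Counter(document.lower().split())
    priors = [cosine(doc_vec, Counter(s.lower().split())) for s in sentences]
    if response_label == 0:
        # Negative bag: every instance is labeled negative.
        return [0] * len(sentences)
    # Positive bag: mark high-prior sentences as positive...
    labels = [1 if p >= threshold else 0 for p in priors]
    if not any(labels):
        # ...ensuring at least one positive instance exists.
        labels[priors.index(max(priors))] = 1
    return labels


# Illustrative usage with a toy document and two candidate sentences.
doc = "the cat sat on the mat"
sents = ["the cat sat", "quantum physics rules"]
print(sentence_pseudo_labels(sents, doc, response_label=1))  # → [1, 0]
print(sentence_pseudo_labels(sents, doc, response_label=0))  # → [0, 0]
```

In the paper's setting, these pseudo-labels would then serve as training targets for the specialized sentence-level scoring model, which can in turn re-label the training set in further rounds.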
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instance-level Evaluation | MultiSpanQA | AUC-ROC | 69.3 | 7 |
| Math Reasoning | PRM800K | AUC-ROC | 0.597 | 5 |
| Summarization | AquaMuSe | AUC-ROC | 0.814 | 5 |
| Question Answering Feedback | QA-Feedback | AUC-ROC | 0.532 | 5 |
| Relevance Assessment | FiRA | MAE | 0.294 | 4 |
| Summarization | WikiCatSum | AUC-ROC | 0.645 | 4 |
| Instance-level Evaluation | WikiCatSum | AUC-ROC | 0.642 | 3 |
| Instance-level Evaluation | AquaMuSe | -- | -- | 2 |
| Instance-level Evaluation | QA Preference Feedback | -- | -- | 1 |
| Instance-level Evaluation | PRM800K | -- | -- | 1 |