
ExperienceWeaver: Optimizing Small-sample Experience Learning for LLM-based Clinical Text Improvement

About

Clinical text improvement is vital for healthcare efficiency but remains difficult due to limited high-quality data and the complex constraints of medical documentation. While Large Language Models (LLMs) show promise, current approaches struggle in small-sample settings: supervised fine-tuning is data-intensive and costly, while retrieval-augmented generation often provides superficial corrections without capturing the reasoning behind revisions. To address these limitations, we propose ExperienceWeaver, a hierarchical framework that shifts the focus from data retrieval to experience learning. Instead of simply recalling past examples, ExperienceWeaver distills noisy, multi-dimensional feedback into structured, actionable knowledge: error-specific Tips and high-level Strategies. By injecting this distilled experience into an agentic pipeline, the model learns "how to revise" rather than just "what to revise". Extensive evaluations across four clinical datasets demonstrate that ExperienceWeaver consistently improves performance, surpassing state-of-the-art models such as Gemini-3 Pro in small-sample settings.
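The two-tier experience store described above (error-specific Tips plus high-level Strategies, woven into the revision prompt) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; all class and function names here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-tier distilled experience described in the
# abstract: error-specific Tips and high-level Strategies, injected into an
# LLM revision prompt so the model is told "how to revise".

@dataclass
class Tip:
    error_type: str   # e.g. "ambiguous dosage"
    guidance: str     # actionable fix distilled from past feedback

@dataclass
class Strategy:
    name: str
    principle: str    # high-level revision principle

def build_revision_prompt(text: str, tips: list[Tip],
                          strategies: list[Strategy]) -> str:
    """Weave distilled experience into the prompt for the revising agent."""
    strat_lines = "\n".join(f"- {s.name}: {s.principle}" for s in strategies)
    tip_lines = "\n".join(f"- [{t.error_type}] {t.guidance}" for t in tips)
    return (
        "Revise the clinical note below.\n"
        f"Strategies:\n{strat_lines}\n"
        f"Tips:\n{tip_lines}\n"
        f"Note:\n{text}"
    )
```

In an agentic pipeline, the output of `build_revision_prompt` would be sent to the LLM, and its feedback on the revision could be distilled back into new Tips, closing the experience-learning loop.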

Ziyan Xiao, Yinghao Zhu, Liang Peng, Lequan Yu • 2026

Related benchmarks

| Task | Dataset | Mistral Score | Rank |
|---|---|---|---|
| Clinical text revision | MIMIC Chest X-ray Radiology Report | 0.775 | 11 |
| Clinical text revision | MIMIC Free Text | 0.375 | 11 |
| Clinical text revision | MIMIC Discharge Report | 58.8 | 11 |
| Clinical text revision | In-house Radiology Report | 0.425 | 11 |
