MedReflect: Teaching Medical LLMs to Self-Improve via Reflective Correction

About

Medical problem-solving demands expert knowledge and intricate reasoning. Recent studies of large language models (LLMs) attempt to ease this complexity by introducing external knowledge verification through retrieval-augmented generation or by training on reasoning datasets. However, these approaches suffer from drawbacks such as retrieval overhead and high annotation costs, and they rely heavily on external assistance yet achieve only limited performance in the medical field. In this paper, we introduce MedReflect, a generalizable framework designed to endow LLMs with a physician-like reflective thinking mode. MedReflect generates a single-pass reflection chain that includes initial hypothesis generation, self-questioning, self-answering, and decision refinement. This self-verified, self-reflective process unlocks the latent medical problem-solving capability of large language models without external retrieval or heavy annotation. We demonstrate that MedReflect enables cost-efficient medical dataset construction: with only a minimal subset of randomly sampled training examples and lightweight fine-tuning, the approach achieves notable absolute accuracy improvements across a series of medical benchmarks while significantly reducing annotation requirements. Our results provide evidence that LLMs can learn to solve specialized medical problems via self-reflection and self-improvement, reducing reliance on external supervision and extensive task-specific fine-tuning data.
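The abstract names four stages in the single-pass reflection chain: initial hypothesis, self-questioning, self-answering, and decision refinement. The sketch below illustrates one way such a chain could be prompted and parsed in a single model call; the prompt wording, stage tags, and the `complete` callable are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a single-pass reflection chain, assuming the four stages
# named in the abstract. Stage tags and prompt text are hypothetical.
from typing import Callable

STAGES = ["HYPOTHESIS", "SELF-QUESTION", "SELF-ANSWER", "REFINED DECISION"]

REFLECTION_TEMPLATE = """You are a physician reasoning about a medical question.
Question: {question}

Respond in one pass with the following sections:
[HYPOTHESIS] your initial answer and rationale
[SELF-QUESTION] questions probing weaknesses in the hypothesis
[SELF-ANSWER] answers to those questions from your own knowledge
[REFINED DECISION] your final answer after reflection
"""

def medreflect_answer(question: str, complete: Callable[[str], str]) -> dict:
    """Run one reflective pass and split the output into its four stages."""
    raw = complete(REFLECTION_TEMPLATE.format(question=question))
    sections: dict = {}
    current = None
    for line in raw.splitlines():
        stripped = line.strip()
        tag = next((s for s in STAGES if stripped.startswith(f"[{s}]")), None)
        if tag is not None:
            current = tag
            sections[current] = stripped[len(tag) + 2:].strip()
        elif current is not None and stripped:
            sections[current] += " " + stripped
    return sections

if __name__ == "__main__":
    # Stub model so the sketch runs without any API; a real run would call an LLM.
    def fake_llm(prompt: str) -> str:
        return ("[HYPOTHESIS] Likely diagnosis A.\n"
                "[SELF-QUESTION] Does finding X rule out A?\n"
                "[SELF-ANSWER] No, X is consistent with A.\n"
                "[REFINED DECISION] Diagnosis A.")
    print(medreflect_answer("Example question?", fake_llm))
```

Because the whole chain is produced in one completion, the staged transcript doubles as a training example, which is how a reflection dataset could be built without external retrieval or per-step annotation.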

Yue Huang, Yanyuan Chen, Dexuan Xu, Chenzhuo Zhao, Weihua Yue, Yu Huang • 2025

Related benchmarks

Task                          | Dataset                               | Result           | Rank
Drug Recommendation           | MIMIC-IV (target)                     | F1 Score: 37.65  | 18
Clinical Drug Recommendation  | HPH (real-world unstructured cohort)  | F1 Score: 73.11% | 10
