
Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards

About

Large language models have shown promise in clinical decision making, but current approaches struggle to localize and correct errors at specific steps of the reasoning process. This limitation is critical in medicine, where identifying and addressing reasoning errors is essential for accurate diagnosis and effective patient care. We introduce Med-PRM, a process reward modeling framework that leverages retrieval-augmented generation to verify each reasoning step against established medical knowledge bases. By checking intermediate reasoning steps against evidence retrieved from clinical guidelines and literature, our model can assess reasoning quality in a fine-grained manner. Evaluations on five medical QA benchmarks and two open-ended diagnostic tasks demonstrate that Med-PRM achieves state-of-the-art performance, improving base models by up to 13.50%. Moreover, we demonstrate the generality of Med-PRM by integrating it in a plug-and-play fashion with strong policy models such as Meerkat, achieving over 80% accuracy on MedQA for the first time with small-scale models of 8 billion parameters. Our code and data are available at: https://med-prm.github.io/
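To make the idea concrete, the step-wise verification described above can be sketched as best-of-N selection over candidate reasoning chains: each step is scored against retrieved evidence, the step scores are aggregated into a chain-level reward, and the highest-reward chain wins. The `retrieve` and `judge` functions below are hypothetical stand-ins; the actual Med-PRM uses a trained LLM verifier and a clinical-guideline retriever.

```python
# Minimal sketch of PRM-based best-of-N selection. The retrieve/judge
# functions are toy stand-ins, not the paper's actual components.
from statistics import mean

def score_steps(steps, retrieve, judge):
    # Score each reasoning step in [0, 1] against its retrieved evidence.
    return [judge(step, retrieve(step)) for step in steps]

def chain_reward(step_scores):
    # Aggregate step scores into one chain-level reward (mean here;
    # min is another common choice, since one bad step can invalidate a chain).
    return mean(step_scores)

def best_of_n(chains, retrieve, judge):
    # Pick the candidate reasoning chain with the highest process reward.
    return max(chains, key=lambda steps: chain_reward(score_steps(steps, retrieve, judge)))

# Toy stand-ins for illustration only:
retrieve = lambda step: []  # would return guideline/literature passages
judge = lambda step, evidence: 0.9 if "guideline" in step else 0.5

chains = [
    ["patient presents with fever", "order blood cultures"],
    ["patient presents with fever", "order blood cultures per sepsis guideline"],
]
best = best_of_n(chains, retrieve, judge)  # selects the guideline-backed chain
```

The key design point is that rewards attach to individual steps rather than only to the final answer, which is what lets the framework localize where a reasoning chain goes wrong.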

Jaehoon Yun, Jiwoong Sohn, Jungwoo Park, Hyunjae Kim, Xiangru Tang, Yanjun Shao, Yonghoe Koo, Minhyeok Ko, Qingyu Chen, Mark Gerstein, Michael Moor, Jaewoo Kang• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Medical Question Answering | MedMCQA | Accuracy 63 | 346 |
| Question Answering | MedQA | Accuracy 62.2 | 96 |
| Medical Question Answering | MMLU Med | Accuracy 83.33 | 61 |
| Medical Diagnosis | MIMIC-IV diagnostic evaluation set (test) | Accuracy 66 | 54 |
| Medical Question Answering | DDXPlus | Accuracy 79 | 43 |
| Agent Verification | MIMIC-IV Diverticulitis | AUROC 0.8099 | 24 |
| Agent Verification | MIMIC-IV Cholecystitis | AUROC 0.7519 | 24 |
| Agent Verification | MIMIC-IV Pancreatitis | AUROC 61.13 | 24 |
| Medical Question Answering | MedQA USMLE-style | Accuracy 71.43 | 15 |
| Medical Question Answering | General Medical Tasks (BioASQ, MedMCQA, MedQA, MedXpertqa, MMLU-Med, PubMedQA, NEJM, Lancet, Medbullets) (test) | BioASQ Accuracy 79.83 | 13 |
Showing 10 of 12 rows
