
Patho-R1: A Multimodal Reinforcement Learning-Based Pathology Expert Reasoner

About

Recent advances in vision-language models (VLMs) have enabled broad progress in the general medical domain. Pathology, however, remains a more challenging subdomain, with current pathology-specific VLMs exhibiting limitations in both diagnostic accuracy and reasoning plausibility. These shortcomings are largely attributable to the nature of current pathology datasets, which consist primarily of image-description pairs that lack the depth and structured diagnostic paradigms employed by real-world pathologists. In this study, we leverage pathology textbooks and real-world pathology experts to construct high-quality, reasoning-oriented datasets. Building on these, we introduce Patho-R1, a multimodal RL-based pathology reasoner trained through a three-stage pipeline: (1) continued pretraining on 3.5 million image-text pairs for knowledge infusion; (2) supervised fine-tuning on 500k high-quality Chain-of-Thought samples to incentivize reasoning; and (3) reinforcement learning with Group Relative Policy Optimization (GRPO) and Decoupled Clip and Dynamic sAmpling Policy Optimization (DAPO) strategies to refine multimodal reasoning quality. To further assess the alignment quality of our dataset, we propose Patho-CLIP, trained on the same figure-caption corpus used for continued pretraining. Comprehensive experiments demonstrate that both Patho-CLIP and Patho-R1 achieve robust performance across a wide range of pathology-related tasks, including zero-shot classification, cross-modal retrieval, Visual Question Answering, and Multiple Choice Question answering. Our project is available at the Patho-R1 repository: https://github.com/Wenchuan-Zhang/Patho-R1.
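For readers unfamiliar with the GRPO component of stage (3), the core idea is to normalize each sampled response's reward against the other responses sampled for the same prompt, removing the need for a separate value model. The sketch below illustrates only that group-relative normalization under standard assumptions; the function name is hypothetical and this is not the authors' implementation.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """Compute GRPO-style advantages for one group of sampled responses.

    rewards: list of scalar rewards for G responses sampled from the
    same prompt. Each advantage is the reward standardized against the
    group mean and standard deviation (eps avoids division by zero).
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled answers to one pathology question, scored 0/1
# by a rule-based reward; correct answers get positive advantages.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

These per-response advantages then weight the clipped policy-gradient objective, so responses that beat their group's average are reinforced and the rest are suppressed.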

Wenchuan Zhang, Penghao Zhang, Jingru Guo, Tao Cheng, Jie Chen, Shuwan Zhang, Zhang Zhang, Yuhao Yi, Hong Bu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | SlideBench-VQA TCGA | Microscopy Score | 63.61 | 32 |
| Visual Question Answering | WSI-VQA | Overall Accuracy | 44.28 | 25 |
| Visual Question Answering | SlideBench-VQA BCNB | Overall | 31.43 | 25 |
| Visual Question Answering | PathMMU Tiny 1.0 (test) | Overall Accuracy | 69.53 | 24 |
| Visual Question Answering | PathMMU 1.0 (ALL test) | Overall Score | 63.37 | 22 |
| Open-ended Pathology Analysis | PathReasoner (test) | BLEU | 0.182 | 14 |
| Whole-slide image visual-question answering | SlideBench TCGA | Accuracy | 52.34 | 14 |
| Whole-slide image visual-question answering | CPTAC | Accuracy | 32.5 | 14 |
| Morphological Analysis | HepatoPathoBench | WSI-P | 66 | 7 |
| Multi-scale Analysis | HepatoPathoBench | WSI Score | 55 | 7 |

(10 of 11 rows shown.)
