Step-CoT: Stepwise Visual Chain-of-Thought for Medical Visual Question Answering
About
Chain-of-thought (CoT) reasoning has advanced medical visual question answering (VQA), yet most existing CoT rationales are free-form and fail to capture the structured reasoning process clinicians actually follow. This work asks: can traceable, multi-step reasoning supervision improve both the accuracy and the interpretability of medical VQA? To this end, we introduce Step-CoT, a large-scale medical reasoning dataset with expert-curated, structured multi-step CoT aligned to clinical diagnostic workflows, implicitly grounding a model's reasoning in radiographic evidence. Step-CoT comprises more than 10K real clinical cases and 70K VQA pairs organized around diagnostic workflows, providing supervised intermediate steps that guide models toward valid reasoning trajectories. To learn effectively from Step-CoT, we further introduce a teacher-student framework with a dynamic graph-structured focusing mechanism that prioritizes diagnostically informative steps while filtering out less relevant context. Our experiments show that training with Step-CoT improves both reasoning accuracy and interpretability.

Benchmark: github.com/hahaha111111/Step-CoT

Dataset Card: huggingface.co/datasets/fl-15o/Step-CoT
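The structured, workflow-aligned supervision described above can be pictured as an ordered sequence of named diagnostic steps per VQA pair. The sketch below is purely illustrative and does not reflect the official Step-CoT schema: all field names, step labels, and the example case are assumptions chosen to mirror the task categories evaluated in this work.

```python
# Illustrative sketch of a Step-CoT-style record (NOT the official schema).
# Field names and the clinical content are hypothetical, for demonstration only.
record = {
    "image": "chest_xray_0001.png",  # hypothetical image path
    "question": "What is the most likely diagnosis?",
    # Ordered, supervised intermediate steps following a diagnostic workflow:
    "steps": [
        ("detection", "An opacity is present in the right lower lung zone."),
        ("anatomical_location", "Right lower lobe."),
        ("morphologic_feature", "Patchy consolidation with air bronchograms."),
        ("lesion_distribution", "Focal, confined to a single lobe."),
        ("secondary_signs", "No pleural effusion or mediastinal shift."),
        ("diagnosis", "Findings are most consistent with lobar pneumonia."),
    ],
    "answer": "Lobar pneumonia",
}

def format_cot(rec):
    """Render the structured steps as a readable, traceable rationale."""
    lines = [
        f"Step {i + 1} ({name}): {text}"
        for i, (name, text) in enumerate(rec["steps"])
    ]
    lines.append(f"Answer: {rec['answer']}")
    return "\n".join(lines)

print(format_cot(record))
```

Because each step is a labeled field rather than free-form text, intermediate steps can be supervised and scored individually, which is what makes the stepwise evaluation in the table below possible.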
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Diagnosis | Step-CoT (test) | Accuracy | 78.3 | 10 |
| Detection | Clinical Expert Evaluation set (N=200) | Accuracy | 88.5 | 6 |
| Anatomical location | Clinical Expert Evaluation set (N=200) | Accuracy | 72.8 | 3 |
| Diagnosis | Clinical Expert Evaluation set (N=200) | Accuracy | 79.8 | 3 |
| Lesion distribution | Clinical Expert Evaluation set (N=200) | Accuracy | 78.4 | 3 |
| Morphologic feature | Clinical Expert Evaluation set (N=200) | Accuracy | 84.8 | 3 |
| Stepwise Medical Reasoning | Step-CoT 200-case sample (test) | Detection Score | 88.5 | 3 |
| Secondary effects/associated signs | Clinical Expert Evaluation set (N=200) | Accuracy | 75.2 | 3 |