
How Well Do Multimodal Models Reason on ECG Signals?

About

While multimodal large language models offer a promising solution to the "black box" nature of health AI by generating interpretable reasoning traces, verifying the validity of these traces remains a critical challenge. Existing evaluation methods are either unscalable, relying on manual clinician review, or superficial, relying on proxy metrics (e.g., QA accuracy) that fail to capture the semantic correctness of clinical logic. In this work, we introduce a reproducible framework for evaluating reasoning over ECG signals. We propose decomposing reasoning into two distinct components: (i) Perception, the accurate identification of patterns within the raw signal, and (ii) Deduction, the logical application of domain knowledge to those patterns. To evaluate Perception, we employ an agentic framework that generates code to empirically verify the temporal structures described in the reasoning trace. To evaluate Deduction, we use a retrieval-based approach that measures the alignment of the model's logic against a structured database of established clinical criteria. This dual-verification method enables the scalable assessment of "true" reasoning capabilities.
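The Perception check described above, generating code to empirically test a temporal claim from a reasoning trace against the signal, can be illustrated with a minimal sketch. All names, values, and the tolerance threshold here are hypothetical illustrations, not taken from the paper:

```python
import statistics

def verify_regular_rhythm(r_peak_times_s, tolerance=0.12):
    """Empirically check a claimed 'regular rhythm' against detected R-peak times.

    r_peak_times_s: R-peak timestamps in seconds (assumed already detected).
    tolerance: hypothetical maximum coefficient of variation of the RR
    intervals for the regularity claim to be considered verified.
    Returns True (claim supported), False (claim refuted), or None
    (too few beats to judge).
    """
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    if len(rr) < 2:
        return None
    cv = statistics.stdev(rr) / statistics.mean(rr)
    return cv <= tolerance

# Evenly spaced beats at ~75 bpm: the regularity claim is supported.
regular = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]
print(verify_regular_rhythm(regular))    # True

# Irregularly spaced beats (as in atrial fibrillation): claim refuted.
irregular = [0.0, 0.6, 1.5, 1.9, 3.0, 3.4]
print(verify_regular_rhythm(irregular))  # False
```

In the actual framework the verification code is generated by the agent per claim; this fixed function only sketches the kind of empirical test such generated code might perform.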

Maxwell A. Xu, Harish Haresamudram, Catherine W. Liu, Patrick Langer, Jathurshan Pradeepkumar, Wanting Mao, Sunita J. Ferns, Aradhana Verma, Jimeng Sun, Paul Schmiedmayer, Xin Liu, Daniel McDuff, Emily B. Fox, James M. Rehg • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal ECG Reasoning | MIMIC IV | -- | 5 |
| Multimodal ECG Reasoning | ECG-QA Diagnosis | -- | 5 |
| Multimodal ECG Reasoning | ECG-QA Rhythm | -- | 5 |
| Multimodal ECG Reasoning | ECG-QA Form | -- | 5 |
