
Patient-Level Multimodal Question Answering from Multi-Site Auscultation Recordings

About

Auscultation is a vital diagnostic tool, yet its utility is often limited by subjective interpretation. While Audio-Language Models (ALMs) excel in general domains, they struggle with the nuances of physiological signals. We propose a framework that aligns multi-site auscultation recordings directly with a frozen Large Language Model (LLM) embedding space via gated cross-attention. By leveraging the LLM's latent world knowledge, our approach moves beyond isolated classification toward holistic, patient-level assessment. On the CaReSound benchmark, our model achieves a state-of-the-art 0.865 F1-macro and 0.952 BERTScore. We demonstrate that lightweight, domain-specific encoders rival large-scale ALMs and that multi-site aggregation provides spatial redundancy that mitigates temporal truncation. This alignment of medical acoustics with text foundations offers a scalable path for bridging signal processing and clinical assessment.
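The core mechanism named in the abstract, gated cross-attention into a frozen LLM, can be sketched as follows. This is an illustrative NumPy sketch of the general Flamingo-style pattern, not the authors' implementation: the shapes, weight names (`Wq`, `Wk`, `Wv`), and the scalar `gate` are assumptions. Text-token embeddings act as queries, auscultation-encoder outputs act as keys/values, and a tanh gate initialized to zero leaves the frozen LLM's stream untouched at the start of training.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(text_tokens, audio_embeds, Wq, Wk, Wv, gate):
    """Illustrative gated cross-attention (single head, no biases).

    text_tokens:  (T, d) frozen-LLM token embeddings (queries)
    audio_embeds: (S, d) multi-site auscultation embeddings (keys/values)
    gate:         scalar; tanh(0) = 0, so the frozen LLM stream is
                  initially passed through unchanged
    """
    Q = text_tokens @ Wq                             # (T, d)
    K = audio_embeds @ Wk                            # (S, d)
    V = audio_embeds @ Wv                            # (S, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (T, S)
    # Residual connection: audio information is blended in via the gate.
    return text_tokens + np.tanh(gate) * (attn @ V)

rng = np.random.default_rng(0)
d, T, S = 16, 4, 6
x = rng.standard_normal((T, d))            # hypothetical text tokens
a = rng.standard_normal((S, d))            # hypothetical audio embeddings
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

out = gated_cross_attention(x, a, Wq, Wk, Wv, gate=0.0)
print(np.allclose(out, x))  # → True: zero gate leaves the LLM untouched
```

With a zero-initialized gate the module is an identity at initialization, which is what lets new audio conditioning be attached to a frozen LLM without disturbing its pretrained behavior.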

Fan Wu, Tsai-Ning Wang, Nicolas Zumarraga, Ning Wang, Markus Kreft, Kevin O'Sullivan, Elgar Fleisch, Oliver Aalami, Paul Schmiedmayer, Robert Jakob, Patrick Langer • 2026

Related benchmarks

Task: Multimodal Question Answering
Dataset: CaReSound
Result: Yes/No Accuracy 93.5
Rank: 13
