
Alleviating Textual Reliance in Medical Language-guided Segmentation via Prototype-driven Semantic Approximation

About

Medical language-guided segmentation, which integrates textual clinical reports as auxiliary guidance to enhance image segmentation, has demonstrated significant improvements over unimodal approaches. However, its inherent reliance on paired image-text input, which we refer to as "textual reliance", presents two fundamental limitations: 1) many medical segmentation datasets lack paired reports, leaving a substantial portion of image-only data underutilized for training; and 2) inference is limited to retrospective analysis of cases with paired reports, restricting its applicability in most clinical scenarios, where segmentation typically precedes reporting. To address these limitations, we propose ProLearn, the first Prototype-driven Learning framework for language-guided segmentation that fundamentally alleviates textual reliance. At its core, we introduce a novel Prototype-driven Semantic Approximation (PSA) module to enable approximation of semantic guidance from textual input. PSA initializes a discrete and compact prototype space by distilling segmentation-relevant semantics from textual reports. Once initialized, it supports a query-and-respond mechanism that approximates semantic guidance for images without textual input, thereby alleviating textual reliance. Extensive experiments on QaTa-COV19, MosMedData+ and Kvasir-SEG demonstrate that ProLearn outperforms state-of-the-art language-guided methods when limited text is available.
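The query-and-respond mechanism described above can be sketched as a similarity-weighted lookup into a prototype bank: an image feature queries the prototypes, and the softmax-weighted combination of prototype vectors serves as the approximated semantic guidance in place of a text embedding. This is a minimal illustrative sketch, not the paper's implementation; the function name, temperature parameter, and NumPy formulation are assumptions for illustration only.

```python
import numpy as np

def approximate_guidance(image_feat, prototypes, temperature=0.1):
    """Hypothetical sketch of a query-and-respond step.

    image_feat: (D,) image feature vector acting as the query.
    prototypes: (K, D) bank of prototypes distilled from report semantics.
    Returns a (D,) similarity-weighted combination of prototypes, used as
    a stand-in for textual semantic guidance at inference time.
    """
    # Normalize query and prototypes so the dot product is cosine similarity.
    q = image_feat / np.linalg.norm(image_feat)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p @ q                           # (K,) cosine similarities
    weights = np.exp(sims / temperature)   # temperature-scaled softmax
    weights /= weights.sum()
    return weights @ prototypes            # (D,) "response" vector
```

With a low temperature the response collapses toward the single nearest prototype; a higher temperature blends several prototypes, which is the usual trade-off for this kind of soft retrieval.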

Shuchang Ye, Usman Naseem, Mingyuan Meng, Jinman Kim • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Image Segmentation | QaTa-COV19 | Dice Score | 90.6 | 79 |
| Medical Image Segmentation | MosMedData+ | Dice | 77.82 | 63 |
| Medical Image Segmentation | Kvasir-SEG | Dice Coefficient | 0.904 | 28 |
