
Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning

About

Self-supervised representation learning has shown great promise for histopathology image analysis, with numerous approaches leveraging the patient-slide-patch hierarchy to learn better representations. In this paper, we explore how combining domain-specific natural language information with such hierarchical visual representations can enrich representation learning for medical image tasks. Building on automated generation of language descriptions for features visible in histopathology images, we present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS), for histopathology images. We explore contrastive objectives and granular language-description-based text alignment at multiple hierarchy levels to inject language-modality information into the visual representations. Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, OpenSRH and TCGA. Our framework also provides better interpretability through its language-aligned representation space. Code is available at https://github.com/Hasindri/HLSS.
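To make the alignment idea concrete, the sketch below shows a generic symmetric contrastive (InfoNCE-style) objective of the kind commonly used to tie visual and text embeddings together. This is an illustrative assumption, not the authors' implementation: the actual HLSS losses, hierarchy levels, text descriptions, and hyperparameters are defined in the paper and repository.

```python
# Illustrative sketch only: a symmetric InfoNCE loss aligning matched
# vision/text embedding pairs. Function names and the temperature value
# are assumptions for this example, not taken from the HLSS codebase.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def symmetric_info_nce(vision_emb, text_emb, temperature=0.07):
    """Contrastive loss pulling matched vision/text pairs together
    while pushing apart mismatched pairs within the batch."""
    v = l2_normalize(vision_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature            # (N, N) similarity matrix
    labels = np.arange(len(v))                # matched pairs on the diagonal

    def cross_entropy(lg, lb):
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lb)), lb].mean()

    # average of vision->text and text->vision directions
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))

# In a hierarchical setup, a loss of this form would be computed at each
# level (patch, slide, patient) against level-specific text descriptions,
# and the per-level losses summed.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 32))   # toy visual embeddings
t = rng.normal(size=(4, 32))   # toy text embeddings
loss = symmetric_info_nce(v, t)
```

As a sanity check on the design, perfectly aligned embeddings (text equal to vision) should yield a lower loss than random pairings, since the diagonal of the similarity matrix then dominates.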

Hasindri Watawana, Kanchana Ranasinghe, Tariq Mahmood, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan • 2024

Related benchmarks

Task                    Dataset         Metric    Result  Rank
Classification          TCGA            Accuracy  87.9    24
Patch Classification    TCGA            Accuracy  89.7    7
Slide Classification    OpenSRH (val)   Accuracy  89.5    7
Slide Classification    TCGA            Accuracy  92.9    7
Patch Classification    OpenSRH (val)   Accuracy  84.1    7
Patient Classification  OpenSRH (val)   Accuracy  91.7    7
