# PathFLIP: Fine-grained Language-Image Pretraining for Versatile Computational Pathology

## About
While Vision-Language Models (VLMs) have achieved notable progress in computational pathology (CPath), the gigapixel scale and spatial heterogeneity of Whole Slide Images (WSIs) continue to pose challenges for multimodal understanding. Existing alignment methods struggle to capture fine-grained correspondences between textual descriptions and visual cues across thousands of patches from a slide, compromising their performance on downstream tasks. In this paper, we propose PathFLIP (Pathology Fine-grained Language-Image Pretraining), a novel framework for holistic WSI interpretation. PathFLIP decomposes slide-level captions into region-level subcaptions and generates text-conditioned region embeddings to facilitate precise visual-language grounding. By harnessing Large Language Models (LLMs), PathFLIP can seamlessly follow diverse clinical instructions and adapt to varied diagnostic contexts. Furthermore, it exhibits versatile capabilities across multiple paradigms, efficiently handling slide-level classification and retrieval, fine-grained lesion localization, and instruction following. Extensive experiments demonstrate that PathFLIP outperforms existing large-scale pathological VLMs on four representative benchmarks while requiring significantly less training data, paving the way for fine-grained, instruction-aware WSI interpretation in clinical practice.
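The "text-conditioned region embeddings" described above can be pictured, in their simplest form, as similarity-weighted pooling of a slide's patch embeddings against a subcaption embedding. The sketch below is an illustrative simplification, not PathFLIP's actual architecture: the function name, the cosine-similarity attention rule, and the temperature value are all our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def text_conditioned_region_embedding(patch_embs, subcaption_emb, temperature=0.07):
    """Hypothetical sketch: pool many patch embeddings into one region
    embedding, weighting each patch by its similarity to a subcaption.
    This stands in for (but is not) PathFLIP's grounding mechanism."""
    # cosine similarity between every patch and the subcaption
    p = patch_embs / np.linalg.norm(patch_embs, axis=1, keepdims=True)
    t = subcaption_emb / np.linalg.norm(subcaption_emb)
    sims = p @ t                           # shape: (num_patches,)
    weights = softmax(sims / temperature)  # attention over patches
    return weights @ patch_embs            # shape: (dim,)

# Toy example: a "WSI" of 2048 patch embeddings and one subcaption embedding.
rng = np.random.default_rng(0)
patches = rng.standard_normal((2048, 512))
subcap = rng.standard_normal(512)
region = text_conditioned_region_embedding(patches, subcap)
print(region.shape)  # (512,)
```

In this toy form, a low temperature sharpens the attention so the region embedding is dominated by the patches most relevant to the subcaption, which is the intuition behind grounding region-level text in a gigapixel slide.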
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | SlideBench-VQA TCGA | Microscopy Score | 86.11 | 32 |
| Gene Mutation Prediction | CPTAC | BRCA PIK3CA AUC | 0.6575 | 15 |
| WSI Captioning | SlideBench | BLEU-1 | 0.38 | 11 |
| Image-Text Retrieval | SlideBench | Recall@1 | 15.13 | 10 |
| Text-Image Retrieval | SlideBench | Recall@1 | 13.92 | 10 |
| Image-Text Retrieval | Quilt | Recall@1 | 27.1 | 10 |
| Text-Image Retrieval | Quilt | Recall@1 | 26.46 | 10 |
| Visual Question Answering | BCNB | VQA Accuracy | 58.65 | 7 |