
CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images

About

The visual examination of tissue biopsy sections is fundamental for cancer diagnosis, with pathologists analyzing sections at multiple magnifications to discern tumor cells and their subtypes. However, existing attention-based multiple instance learning (MIL) models used for analyzing Whole Slide Images (WSIs) in cancer diagnostics often overlook the contextual information of tumor and neighboring tiles, leading to misclassifications. To address this, we propose the Context-Aware Multiple Instance Learning (CAMIL) architecture. CAMIL incorporates neighbor-constrained attention to consider dependencies among tiles within a WSI and integrates contextual constraints as prior knowledge into the MIL model. We evaluated CAMIL on subtyping non-small cell lung cancer (TCGA-NSCLC) and detecting lymph node metastasis (CAMELYON16 and CAMELYON17), achieving test AUCs of 97.5%, 95.9%, and 88.1%, respectively, outperforming other state-of-the-art methods. Additionally, CAMIL enhances model interpretability by identifying regions of high diagnostic value.
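The core idea in the abstract, attention-based MIL pooling in which each tile's attention score is constrained by its spatial neighbors before slide-level aggregation, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the radius-based neighborhood, and the mean-smoothing rule are illustrative assumptions.

```python
import numpy as np


def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()


def neighbor_constrained_mil_pool(features, coords, logits, radius=1.0):
    """Sketch of neighbor-constrained attention pooling for MIL.

    features: (N, D) tile embeddings from a feature extractor.
    coords:   (N, 2) tile positions on the WSI grid.
    logits:   (N,) raw attention scores from an attention network.

    Each tile's logit is averaged with those of its spatial neighbors
    (hypothetical smoothing rule), so an isolated high-attention tile
    that disagrees with its surroundings is damped, injecting tile
    context into the attention weights.
    """
    n = len(logits)
    smoothed = np.empty(n)
    for i in range(n):
        # Neighborhood: tiles within `radius` grid steps, incl. the tile itself.
        dist = np.linalg.norm(coords - coords[i], axis=1)
        smoothed[i] = logits[dist <= radius].mean()
    attn = softmax(smoothed)                 # (N,) weights summing to 1
    return attn @ features, attn             # slide embedding (D,), weights


# Toy usage: 9 tiles on a 3x3 grid with 4-D embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(9, 4))
coords = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
logits = rng.normal(size=9)
slide_emb, attn = neighbor_constrained_mil_pool(feats, coords, logits)
```

The slide-level embedding would then feed a standard classifier head; the attention weights themselves are what give MIL models their tile-level interpretability.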

Olga Fourkioti, Matt De Vries, Chen Jin, Daniel C. Alexander, Chris Bakal • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Whole Slide Image classification | CAMELYON16 (test) | AUC | 0.959 | 163 |
| Slide-level classification | TCGA NSCLC (test) | Accuracy | 91.6 | 96 |
| WSI Classification | CAMELYON17 (test) | AUC | 88.1 | 33 |
| WSI subtyping | CAMELYON 17 | F1 Score | 63.3 | 24 |
| WSI subtyping | BRACS | F1 Score | 70.9 | 24 |
| WSI subtyping | CAMELYON-16 | F1 Score | 93 | 24 |
| Weakly-supervised tumor localization | Camelyon16 | Dice Coefficient | 52.5 | 8 |
| Localization | CAMELYON16 (test) | Dice | 51.5 | 7 |
| Cancerous region localization | CAMELYON-16 | Success Rate | 49.78 | 6 |
| Tumor localization | CAMELYON-16 | Dice | 51.5 | 6 |

Other info

Code
