Multimodal Prototyping for Cancer Survival Prediction
About
Multimodal survival methods that combine gigapixel histology whole-slide images (WSIs) with transcriptomic profiles are particularly promising for patient prognostication and stratification. Current approaches tokenize a WSI into smaller patches (often more than 10,000 per slide) and the transcriptomics into gene groups, which are then integrated with a Transformer to predict outcomes. However, this process generates many tokens, leading to high memory requirements for computing attention and complicating post-hoc interpretability analyses. Instead, we hypothesize that we can: (1) effectively summarize the morphological content of a WSI by condensing its constituent tokens with morphological prototypes, achieving more than 300x compression; and (2) accurately characterize cellular functions by encoding the transcriptomic profile with biological pathway prototypes, all in an unsupervised fashion. The resulting multimodal tokens are then processed by a fusion network, either a Transformer or an optimal-transport cross-alignment, which now operates on a small, fixed number of tokens without approximations. Extensive evaluation on six cancer types shows that our framework outperforms state-of-the-art methods with far less computation while unlocking new interpretability analyses.
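The prototype-based summarization described above can be sketched as a nearest-prototype assignment followed by within-group pooling. The sketch below is a minimal illustration, not the paper's implementation: the prototype vectors, embedding dimension, and mean-pooling aggregation are assumptions for demonstration (in practice prototypes would be learned in an unsupervised fashion, e.g. by clustering patch embeddings across slides).

```python
import numpy as np

def summarize_patches(patch_embeddings: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Condense many WSI patch tokens into a small, fixed set of tokens.

    patch_embeddings: (n_patches, dim) patch feature vectors for one slide.
    prototypes: (n_prototypes, dim) morphological prototype vectors
                (assumed pre-computed, e.g. via unsupervised clustering).
    Returns: (n_prototypes, dim) summary tokens, one per prototype.
    """
    # Distance from every patch to every prototype: (n_patches, n_prototypes).
    dists = np.linalg.norm(
        patch_embeddings[:, None, :] - prototypes[None, :, :], axis=-1
    )
    # Assign each patch to its nearest prototype.
    assignment = dists.argmin(axis=1)

    tokens = np.empty_like(prototypes)
    for c in range(len(prototypes)):
        members = patch_embeddings[assignment == c]
        # Mean-pool the patches in each group; if a prototype attracts no
        # patches, fall back to the prototype vector itself.
        tokens[c] = members.mean(axis=0) if len(members) else prototypes[c]
    return tokens
```

With, say, 10,000 patch tokens and 16 prototypes, the slide is reduced to 16 tokens, so the downstream fusion network (Transformer or optimal-transport alignment) attends over a small, fixed-size input regardless of slide size.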
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Disease-Specific Survival prediction | LUAD (test) | C-index 0.674 | 6 |
| Disease-Specific Survival prediction | BRCA (test) | C-index 0.750 | 6 |
| Disease-Specific Survival prediction | BLCA (test) | C-index 0.656 | 6 |
| Disease-Specific Survival prediction | KIRC (test) | C-index 0.728 | 6 |