
Knowledge-driven Subspace Fusion and Gradient Coordination for Multi-modal Learning

About

Multi-modal learning plays a crucial role in cancer diagnosis and prognosis. Current deep learning based multi-modal approaches are often limited in their ability to model the complex correlations between genomics and histology data, failing to address the intrinsic complexity of the tumour ecosystem, where both tumour and microenvironment contribute to malignancy. We propose a biologically interpretable and robust multi-modal learning framework that efficiently integrates histology images and genomics by decomposing their feature subspaces into components reflecting distinct tumour and microenvironment features. To enhance cross-modal interactions, we design a knowledge-driven subspace fusion scheme, consisting of a cross-modal deformable attention module and a gene-guided consistency strategy. Additionally, to dynamically optimize the subspace knowledge, we further propose a novel gradient coordination learning strategy. Extensive experiments demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art techniques on three downstream tasks: glioma diagnosis, tumour grading, and survival analysis. Our code is available at https://github.com/helenypzhang/Subspace-Multimodal-Learning.
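The abstract does not spell out the gradient coordination rule, but the general idea of coordinating gradients from two modalities can be sketched as a PCGrad-style projection (an assumption for illustration, not the paper's exact strategy): when the histology and genomics gradients conflict, each is projected onto the normal plane of the other before they are combined, so neither modality's update is cancelled out.

```python
import numpy as np

def coordinate_gradients(g_hist: np.ndarray, g_gene: np.ndarray) -> np.ndarray:
    """Combine two modality gradients, resolving direction conflicts.

    Illustrative PCGrad-style sketch, NOT the paper's exact rule:
    if the gradients have negative cosine similarity, project each onto
    the normal plane of the other, then sum the projected gradients.
    """
    g_hist = g_hist.astype(float)
    g_gene = g_gene.astype(float)
    if np.dot(g_hist, g_gene) < 0:  # conflicting directions
        # Remove from each gradient its component along the other.
        g_h = g_hist - np.dot(g_hist, g_gene) / np.dot(g_gene, g_gene) * g_gene
        g_g = g_gene - np.dot(g_gene, g_hist) / np.dot(g_hist, g_hist) * g_hist
        return g_h + g_g
    return g_hist + g_gene  # no conflict: plain sum
```

After projection, each modality's adjusted gradient is orthogonal to the other modality's original gradient, so a step along the combined direction does not increase either loss to first order.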

Yupei Zhang, Xiaofei Wang, Fangliangzi Meng, Jin Tang, Chao Li • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Glioma Grading | TCGA GBM-LGG (3-fold val) | AUC 88.37 | 48 |
| Survival Prediction | TCGA GBM-LGG Internal (test) | C-Index 76.55 | 37 |
| Survival Prediction | CPTAC External (test) | C-Index 55.53 | 27 |
| Diagnosis | TCGA GBM-LGG and IvyGAP (3-fold val) | AUC 96.02 | 26 |
