
M2Diff: Multi-Modality Multi-Task Enhanced Diffusion Model for MRI-Guided Low-Dose PET Enhancement

About

Positron emission tomography (PET) scans expose patients to radiation, which can be mitigated by reducing the dose, albeit at the cost of diminished image quality. This makes low-dose (LD) PET recovery an active research area. Previous studies have focused on standard-dose (SD) PET recovery from LD PET scans and/or multi-modal scans, e.g., PET/CT or PET/MRI, using deep learning. While these studies incorporate multi-modal information through conditioning in a single-task model, such approaches may limit the capacity to extract modality-specific features, potentially leading to early feature dilution. Although recent studies have begun incorporating pathology-rich data, challenges remain in effectively leveraging multi-modality inputs for reconstructing diverse features, particularly in heterogeneous patient populations. To address these limitations, we introduce a multi-modality multi-task diffusion model (M2Diff) that processes MRI and LD PET scans separately to learn modality-specific features and fuses them via hierarchical feature fusion to reconstruct SD PET. This design enables effective integration of complementary structural and functional information, leading to improved reconstruction fidelity. We validated our model on both healthy and Alzheimer's disease brain datasets, where M2Diff achieves superior qualitative and quantitative performance.
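The dual-branch design described above can be illustrated with a toy sketch: two separate encoders build multi-scale feature pyramids from the MRI and LD PET inputs, and the pyramids are fused level by level. This is not the authors' implementation (their model is a diffusion model with learned encoders); the function names, pooling encoder, and concatenation-style fusion here are hypothetical simplifications used only to show the hierarchical-fusion idea.

```python
import numpy as np

def encode(x, levels=3):
    """Toy modality-specific encoder: build a feature pyramid by
    repeated 2x2 average pooling (stands in for learned conv stages)."""
    feats = [x]
    for _ in range(levels - 1):
        h, w = feats[-1].shape
        pooled = feats[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(pooled)
    return feats

def hierarchical_fuse(mri_feats, pet_feats):
    """Fuse modality-specific features level by level, keeping each
    scale's MRI and PET features together (stacked along a channel axis)."""
    return [np.stack([m, p], axis=0) for m, p in zip(mri_feats, pet_feats)]

# Dummy 8x8 "slices" standing in for registered MRI and LD PET inputs.
rng = np.random.default_rng(0)
mri = rng.random((8, 8))
ld_pet = rng.random((8, 8))

fused = hierarchical_fuse(encode(mri), encode(ld_pet))
for f in fused:
    print(f.shape)  # one fused feature map per pyramid level
```

Processing each modality in its own branch before fusion is what lets the model keep modality-specific features separate, avoiding the early feature dilution the abstract attributes to single-task conditioning.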

Ghulam Nabi Ahmad Hassan Yar, Himashi Peiris, Victoria Mar, Cameron Dennis Pain, Zhaolin Chen • 2026

Related benchmarks

Task                 Dataset                 Result         Rank
PET Synthesis        DaCRA x100 DRF          SSIM 0.9528    9
PET Synthesis        DaCRA x20 DRF (test)    SSIM 97.12     7
PET Image Synthesis  ADNI                    SSIM 93.7      6
