DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis
About
Synthesizing MR imaging sequences is highly relevant in clinical practice, as single sequences are often missing or are of poor quality (e.g. due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input, as proprietary information of individual modalities can be synergistic. However, existing methods fail to scale up to multiple non-aligned imaging modalities, facing common drawbacks of complex imaging sequences. We propose a novel, scalable and multi-modal approach called DiamondGAN. Our model is capable of performing flexible non-aligned cross-modality synthesis and data infill, when given multiple modalities or any of their arbitrary subsets, learning structured information in an end-to-end fashion. We synthesize two MRI sequences with clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), reconstructed from three common sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish synthetic DIR images from real ones.
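One common way to let a single network accept "any arbitrary subset" of input sequences is to reserve a fixed channel slot per sequence and pass a binary availability mask alongside the image stack. The sketch below is a hypothetical illustration of that input-packing idea, not the paper's actual code; the function and sequence names are assumptions.

```python
import numpy as np

# Hypothetical illustration (not DiamondGAN's released code): encode an
# arbitrary subset of input sequences as fixed channel slots plus a
# binary availability mask, so one network handles any combination.
SEQUENCES = ["T1", "T2", "FLAIR"]  # example set of common input sequences

def pack_inputs(available, height=256, width=256):
    """Stack available sequences into fixed channel slots; zero-fill missing ones."""
    image = np.zeros((len(SEQUENCES), height, width), dtype=np.float32)
    mask = np.zeros(len(SEQUENCES), dtype=np.float32)
    for name, array in available.items():
        slot = SEQUENCES.index(name)
        image[slot] = array  # copy the scan into its reserved channel
        mask[slot] = 1.0     # mark this sequence as present
    return image, mask

# Example: only T1 and FLAIR are on hand for this patient.
x, m = pack_inputs({"T1": np.ones((256, 256)),
                    "FLAIR": np.ones((256, 256))})
```

Missing sequences stay as zero channels, and the mask tells the generator which channels carry real data, so the same weights serve every input combination.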
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Many-to-one MRI Synthesis (T2, FLAIR -> T1) | BRATS (test) | PSNR | 23.92 | 21 |
| Many-to-one MRI Synthesis (T1, FLAIR -> T2) | BRATS (test) | PSNR | 23.82 | 21 |
| Nanoparticles distribution prediction | B16 tumor model dataset (external val) | SSIM (%) | 83.25 | 13 |
| Nanoparticles (NPs) distribution prediction | NPs distribution dataset 1.0 (internal val) | SSIM | 88.14 | 13 |
| MRI Synthesis (T1, T2 -> FLAIR) | BRATS (test) | PSNR | 22.18 | 13 |
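The PSNR figures in the table follow the standard definition from mean squared error. A minimal sketch of that metric (not the benchmark's exact evaluation script, which may differ in data range and normalization):

```python
import numpy as np

def psnr(reference, synthetic, data_range):
    """Peak signal-to-noise ratio (dB) between a reference and a synthetic image.

    data_range is the dynamic range of the reference, e.g. 255 for 8-bit images.
    """
    reference = np.asarray(reference, dtype=np.float64)
    synthetic = np.asarray(synthetic, dtype=np.float64)
    mse = np.mean((reference - synthetic) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Example: a constant error of 1 gray level on an 8-bit image.
value = psnr(np.zeros((8, 8)), np.ones((8, 8)), data_range=255)  # ~48.13 dB
```

Higher is better; the SSIM rows use structural similarity instead, which compares local luminance, contrast, and structure rather than raw pixel error.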