Tada-DIP: Input-adaptive Deep Image Prior for One-shot 3D Image Reconstruction
About
Deep Image Prior (DIP) has recently emerged as a promising one-shot, neural-network-based image reconstruction method. However, DIP has seen limited application to 3D image reconstruction problems. In this work, we introduce Tada-DIP, a highly effective and fully 3D DIP method for solving 3D inverse problems. By combining input adaptation with denoising regularization, Tada-DIP produces high-quality 3D reconstructions while avoiding the overfitting phenomenon that is common in DIP. Experiments on sparse-view X-ray computed tomography reconstruction validate the effectiveness of the proposed method, demonstrating that Tada-DIP produces much better reconstructions than training-data-free baselines and achieves reconstruction performance on par with a supervised network trained on a large dataset of fully sampled volumes.
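The core idea behind the input-adaptive DIP loop can be sketched as follows. This is a minimal illustration, not the paper's architecture: it assumes a toy two-layer network `f(z) = W2 @ tanh(W1 @ z)`, a random subsampling forward operator `A`, and plain gradient descent; the key point is that the latent input `z` is updated alongside the network weights (the "input-adaptation" step), rather than held fixed as in vanilla DIP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (illustrative, not the paper's 3D CT setting).
n_img, n_meas, n_z, n_h = 12, 6, 8, 16
x_true = rng.standard_normal(n_img)                           # ground-truth "image"
A = np.eye(n_img)[rng.choice(n_img, n_meas, replace=False)]   # sparse-sampling operator
y = A @ x_true                                                # undersampled measurements

# Tiny network f(z) = W2 @ tanh(W1 @ z), plus a learnable input code z.
W1 = 0.1 * rng.standard_normal((n_h, n_z))
W2 = 0.1 * rng.standard_normal((n_img, n_h))
z = rng.standard_normal(n_z)

def loss_and_grads(W1, W2, z):
    """Data-fidelity loss ||A f(z) - y||^2 and its gradients (manual backprop)."""
    h = np.tanh(W1 @ z)
    x = W2 @ h
    r = A @ x - y
    loss = r @ r
    g_x = 2 * A.T @ r
    g_W2 = np.outer(g_x, h)
    g_pre = (W2.T @ g_x) * (1 - h**2)   # backprop through tanh
    g_W1 = np.outer(g_pre, z)
    g_z = W1.T @ g_pre                  # gradient w.r.t. the network input
    return loss, g_W1, g_W2, g_z

lr = 0.05
loss0 = loss_and_grads(W1, W2, z)[0]
for _ in range(500):
    loss, g_W1, g_W2, g_z = loss_and_grads(W1, W2, z)
    W1 -= lr * g_W1
    W2 -= lr * g_W2
    z -= lr * g_z                       # input adaptation: update z, not just the weights
print(f"loss: {loss0:.3f} -> {loss:.6f}")
```

In the full method, `f` would be a 3D network, `A` the sparse-view CT projector, and a denoising regularizer would be added to the loss to suppress DIP overfitting; none of that is shown here.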
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sparse-view 3D CT Reconstruction | LDCT 30 views (test) | PSNR 39.73 | 5 |
| Sparse-view 3D CT Reconstruction | LDCT 15 views (test) | PSNR 35.63 | 5 |