SGD-Net: Efficient Model-Based Deep Learning with Theoretical Guarantees
About
Deep unfolding networks have recently gained popularity in the context of solving imaging inverse problems. However, the computational and memory complexity of the data-consistency layers within traditional deep unfolding networks scales with the number of measurements, limiting their applicability to large-scale imaging inverse problems. We propose SGD-Net as a new methodology for improving the efficiency of deep unfolding through stochastic approximations of the data-consistency layers. Our theoretical analysis shows that SGD-Net can be trained to approximate batch deep unfolding networks to an arbitrary precision. Our numerical results on intensity diffraction tomography and sparse-view computed tomography show that SGD-Net can match the performance of the batch network at a fraction of the training and testing complexity.
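This is not the paper's code, but the core idea of a stochastic data-consistency layer can be sketched in a toy numpy example: the measurement operator decomposes into many sub-operators (e.g., one per view or illumination), and each unfolded layer applies a gradient step using only a random minibatch of them instead of the full batch. All names, shapes, and the step size below are illustrative assumptions; the learned CNN prior that would follow each step in a real unfolded network is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: m sub-operators A_i (e.g., one per view),
# each mapping an n-pixel image to a d-dimensional measurement.
n, m, d = 64, 32, 8
A = rng.standard_normal((m, d, n)) / np.sqrt(n)   # stack of A_i
x_true = rng.standard_normal(n)
y = np.einsum('idn,n->id', A, x_true)             # y_i = A_i x_true

def batch_dc_step(x, gamma=0.5):
    """Batch data-consistency step: gradient of (1/2m) sum_i ||A_i x - y_i||^2
    using all m sub-operators (cost scales with the number of measurements)."""
    res = np.einsum('idn,n->id', A, x) - y
    grad = np.einsum('idn,id->n', A, res) / m
    return x - gamma * grad

def sgd_dc_step(x, batch_size=4, gamma=0.5):
    """Stochastic data-consistency step: same gradient estimated from a
    random minibatch of sub-operators, as in SGD-Net's unfolded layers."""
    idx = rng.choice(m, size=batch_size, replace=False)
    res = np.einsum('idn,n->id', A[idx], x) - y[idx]
    grad = np.einsum('idn,id->n', A[idx], res) / batch_size
    return x - gamma * grad   # a learned prior/CNN step would follow here

x = np.zeros(n)
for _ in range(200):
    x = sgd_dc_step(x)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

Each stochastic step touches only `batch_size` of the `m` sub-operators, so its per-layer cost is a fraction of the batch step's, which is the efficiency gain the abstract describes.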
Related benchmarks
| Task | Dataset | SNR (dB) | Rank |
|---|---|---|---|
| Sparse-View CT Reconstruction | (test) | 35.01 | 14 |
| Image Reconstruction | IDT 15 dB Input SNR | 39.62 | 8 |
| Image Reconstruction | IDT 20 dB Input SNR | 40.26 | 8 |
| Image Reconstruction | IDT 25 dB Input SNR | 40.47 | 8 |
| Image Reconstruction | MRI Set1 (10% sampling) | 23.37 | 7 |
| Image Reconstruction | MRI Set1 (20% sampling) | 26.81 | 7 |
| Image Reconstruction | MRI Set2 (10% sampling) | 26.37 | 7 |